ABSTRACT

BLENDING-FUNCTION TECHNIQUES WITH APPLICATIONS TO DISCRETE LEAST SQUARES

By Dale Russel Doty

The theory of blending function spaces (bivariate interpolation) is developed in the general setting of interpolation spaces. In this setting it is shown that blending function spaces have the desirable quality of doubling the order of accuracy with less computation when compared to standard tensor product spaces. The dimensionality of discretized blending function spaces is derived, and several bases are explicitly constructed. The special example of Hermite spline blended piecewise polynomials is developed, showing that these spaces have bases with small support which are easy to calculate. These spaces offer maximum order of convergence for a minimum number of basis elements. For example, linearly blended piecewise cubic polynomials offer a fourth order approximation scheme, and cubic Hermite spline blended piecewise polynomials offer an eighth order scheme.

Next, using the exponential decay of the natural cubic cardinal splines and natural cubic spline blending, a derivative-free approximation scheme is developed, which is eighth order in the interior of the domain. Algorithms with corresponding error estimates are given for solving the discrete least squares problem with unstructured data. For the univariate case, algorithms are developed using the space of cubic splines. The resulting error analysis indicates the necessary restrictions to be placed on the number and distribution of the data points to insure that the discrete least squares fit will be O(h^m) to a function f ∈ C^m[a, b] from which the data arises, where h is the mesh size and 1 ≤ m ≤ 4. An example is given to illustrate that the discrete least squares fit need not be close to f if these conditions are not realized. For the bivariate case, algorithms and error analyses are given for the spaces of bicubic splines and discretized blending function spaces.
It is shown that the discrete least squares fit to a bivariate function f is of the same order of accuracy as the corresponding interpolation accuracy. Discrete least squares is considered on general domains which have curved boundaries and are possibly multiply connected. This general domain is subdivided into "standard" subdomains, and explicit mappings from the unit square to these standard subdomains are constructed which are one-one, onto, and have easily calculated inverses. Thus, discrete least squares over general domains reduces to the cases previously considered. Finally, an extensive computational error analysis is given for a constrained least squares algorithm.

BLENDING-FUNCTION TECHNIQUES WITH APPLICATIONS TO DISCRETE LEAST SQUARES

By Dale Russel Doty

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY, Department of Mathematics, 1975

ACKNOWLEDGMENTS

I wish to express my appreciation to Professor Gerald D. Taylor for his encouragement and guidance during the preparation of this thesis.

TABLE OF CONTENTS

CHAPTER 1  COMPUTATIONAL ERROR ANALYSIS FOR DISCRETE LEAST SQUARES . . . 1
  Section 1  Least Squares with Constraints . . . 2
    Section 1.1  Pivoting . . . 5
    Section 1.2  A Constrained Least Squares Algorithm . . . 7
  Section 2  Defining the Perturbation Problem . . . 10
  Section 3  Matrix Norm Inequalities . . . 12
  Section 4  Condition Numbers of Non-square Matrices . . . 17
  Section 5  An Upper Bound for ||δQ|| . . . 20
  Section 6  Basic Relations in the Perturbational Problem . . . 33
  Section 7  Effects of Perturbational Errors . . . 37
  Section 8  Rounding Errors in Computation . . . 45
    Section 8.1  Vector Addition . . . 46
    Section 8.2  Matrix Multiplication . . . 46
    Section 8.3  Solution of a Triangular Set of Equations . . . 47
    Section 8.4  Orthogonal Transformation . . . 48
  Section 9  Rounding Error Analysis . . . 51

CHAPTER 2  BLENDING FUNCTION THEORY . . . 65
  Section 1  Spline and Blending Function Interpolation . . . 66
  Section 2  Dimension of Discretized Blending Function Spaces . . . 86
  Section 3  Natural Cubic Blending . . . 102
  Section 4  Exponential Decay of Natural Cubic Cardinal Splines . . . 109

CHAPTER 3  DISCRETE LEAST SQUARES . . . 121
  Section 1  Uniform Error Estimates . . . 122
  Section 2  A Uniform Bound . . . 129
  Section 3  Univariate Discrete Least Squares . . . 140
  Section 4  Bivariate Least Squares with Data on Mesh Lines . . . 155
  Section 5  Bivariate Least Squares with Unstructured Data . . . 172
    Section 5.1  Bicubic Splines . . . 172
    Section 5.2  Cubically Blended Cubic Splines . . . 178
    Section 5.3  Hermite Blended Piecewise Polynomials . . . 180
    Section 5.4  Linear Blending . . . 187
  Section 6  Domain Transformations . . . 199
    Section 6.1  Type 1 Domain Transformations . . . 200
    Section 6.2  Type 2 Transformations . . . 211
    Section 6.3  General Domains . . . 213
    Section 6.4  Discrete Least Squares over Γ . . . 215

BIBLIOGRAPHY . . . 218

LIST OF FIGURES

Figure 1 . . . 108
Figure 2  The Region Ω . . . 200
Figure 3 . . . 214
Figure 4 . . . 214
Figure 5 . . . 215

CHAPTER 1

COMPUTATIONAL ERROR ANALYSIS FOR DISCRETE LEAST SQUARES

The following chapters will deal with solving the constrained least squares problems which arise from applications of blending function spaces. Due to computational considerations, it is necessary to use methods of orthogonal factorization and pseudo-inverses to obtain solutions to these problems. It is therefore necessary to perform a perturbational and computational analysis of these methods to show how the numerical solutions are affected.
We would hope that the condition numbers χ associated with our least squares and constraint equations appear only to the first power in the error analysis, because our solutions contain only inverses to the first power. This, however, has been shown not to be the case; quoting van der Sluis [31]: "It caused something of a shock, therefore, when in 1966 Golub and Wilkinson asserted that already the multiplications QA and QB may produce errors in the solution containing a factor χ²(A)." We will prove that the condition numbers associated with the constrained least squares equations appear only linearly in the error analysis, except for the coefficient of the residual.

Section 1. Least Squares with Constraints

We are given the two matrices

(1.1)  A ∈ ℝ^{m×n}, and

(1.2)  C ∈ ℝ^{r×n}, where r < n.

Then we seek a solution x ∈ ℝⁿ which minimizes the norm of R,

(1.3)  R = Ax − f, where f ∈ ℝ^m,

and satisfies

(1.4)  Cx = g, where g ∈ ℝ^r.

We can think of (1.3) as a least squares problem subject to the constraint equations given in (1.4). We will develop here the method of Hayes and Halliday [21] for finding a solution to (1.3) and (1.4). But first we need to state some of the pertinent theory of factorization by orthogonal transformations. Householder [22] developed the theory of factorization into orthogonal transformations. Precisely, he has shown that if at step i the matrix C_i has the form

(1.5)  C_i = [ L_i  0 ]
             [   T_i  ],

where L_i ∈ ℝ^{(i−1)×(i−1)} is a lower-triangular matrix, T_i ∈ ℝ^{(r−(i−1))×n} is a rectangular matrix, and C_1 = C, then the orthogonal matrix P_i ∈ ℝ^{n×n} which introduces zeros into row i, from column i+1 to n, of the matrix C_{i+1}, and leaves the first i−1 columns of C_i unaltered, is given by

(1.6)  P_i = I − v_i v_iᵀ / H_i,

where

(1.7)  C_{i+1} = C_i P_i.
If we represent the entries of C_i by c^i_{jk} for 1 ≤ j ≤ r, 1 ≤ k ≤ n, then the vector v_i ∈ ℝⁿ is given by

(1.8)  v_iᵀ = (0, …, 0, c^i_{ii} + sgn(c^i_{ii}) S_i, c^i_{i,i+1}, …, c^i_{i,n}),

where sgn(·) is a function of a real variable defined by

(1.9)  sgn(x) = +1 if x ≥ 0, and −1 if x < 0.

S_i is defined by

(1.10)  S_i = ( Σ_{j=i}^{n} (c^i_{ij})² )^{1/2}.

Finally, H_i is defined by

(1.11)  H_i = S_i² + |c^i_{ii}| S_i.

Also, he has shown that

(1.12)  |c^{i+1}_{ii}| = S_i,

or that the new diagonal element has the Euclidean length of the old row i, from i to n. This is a property of orthogonal transformations: the Euclidean length of any transformed row remains invariant. If for some i, S_i = 0, then row i of C_i is zero from i to n, and we take P_i = I. If this is not the case, then both S_i and H_i are strictly positive. In either case, the factorization proceeds until the completion of step r, and we have a matrix C_{r+1} which is lower triangular,

(1.13)  [L|0] = C_{r+1}, where L = L_{r+1}.

Define Q by

(1.14)  Q = P_1 P_2 ⋯ P_r.

Then Q is also an orthogonal matrix, i.e., Q⁻¹ = Qᵀ, and it follows from (1.7) that

(1.15a)  [L|0] = CQ, or
(1.15b)  L = CQ_1,

where Q_1 is the first r columns of Q,

(1.15c)  Q_1 = Q [I_r; 0].

Then we may write (1.4) as

(1.16)  [L|0] Qᵀ x = g.

Section 1.1. Pivoting

Row pivoting can be included in the above algorithm for factoring C. This is done in the following manner. At step i of the factorization, row k, where i ≤ k ≤ r, is chosen as the next pivotal row by some pivotal strategy. Then we premultiply by a matrix E_i ∈ ℝ^{r×r} which interchanges rows i and k; see Deskins [7, p. 551]. We now factor P_i out of the matrix E_i C_i and define

(1.17)  C_{i+1} = (E_i C_i) P_i,

where P_i is obtained from row i of E_i C_i. We can conclude from (1.4), (1.14) and (1.17) that

(1.18)  [L|0] Qᵀ x = E_r ⋯ E_1 g.

Therefore, for theoretical purposes, we will assume that the matrix C and vector g with which we are working have had their rows or elements permuted beforehand.
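Equations (1.6)–(1.12) can be exercised numerically. The following sketch (Python with NumPy; the function name and the test matrix are mine, not the thesis's) builds one elementary transformation P_i, applies it from the right as in (1.7), and checks orthogonality, the zeroing of row i past the diagonal, and the length-invariance property (1.12):

```python
import numpy as np

def householder_step(C, i):
    """One step of the row-oriented reduction (1.6)-(1.12): zero out the
    entries of row i past position i by right-multiplication C_{i+1} = C_i P_i."""
    r, n = C.shape
    S = np.linalg.norm(C[i, i:])            # (1.10): length of row i from i to n
    if S == 0.0:                            # degenerate row: take P_i = I
        return C.copy(), np.eye(n)
    sgn = 1.0 if C[i, i] >= 0 else -1.0     # (1.9)
    v = np.zeros(n)
    v[i] = C[i, i] + sgn * S                # (1.8): only entries i..n-1 are nonzero
    v[i + 1:] = C[i, i + 1:]
    H = S**2 + abs(C[i, i]) * S             # (1.11)
    P = np.eye(n) - np.outer(v, v) / H      # (1.6)
    return C @ P, P                         # (1.7)

C = np.array([[3.0, 4.0, 0.0],
              [1.0, 2.0, 2.0]])
C2, P1 = householder_step(C, 0)
assert np.allclose(P1 @ P1.T, np.eye(3))            # P_1 is orthogonal
assert np.allclose(C2[0, 1:], 0.0, atol=1e-12)      # row 0 reduced past diagonal
assert np.isclose(abs(C2[0, 0]), np.linalg.norm(C[0, :]))   # (1.12)
```

Because P_1 is orthogonal, every row of C keeps its Euclidean length under the transformation, which is exactly the invariance remark following (1.12).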
Thus, when we apply our pivotal strategy, the pivotal rows will be chosen sequentially from 1 to r. The type of pivoting that is usually used in practice is called "maximal row pivoting". At step i, this type of pivoting chooses the next pivot by the criterion that S_i is maximized. If two or more rows give the same value, then the first of these is chosen as the pivotal row. For this type of pivoting, Golub [11] has shown that

(1.19)  S_1 ≥ S_2 ≥ ⋯ ≥ S_r.

If for some i, S_i = 0, then from (1.12), L will have a zero at position i on its diagonal, which means that the rank of L will be less than r. But from (1.15a) we have C = [L|0]Qᵀ, which implies that the rank of C will also be less than r; see Deskins [7, p. 550]. Therefore, if we assume that C is of full rank r, then for 1 ≤ i ≤ r we must have

(1.20)  S_i > 0,

and because the S_i for 1 ≤ i ≤ r are the absolute values of the diagonal entries of L, we have

(1.21)  L⁻¹ exists.

It should be noted that if r ≥ n, then Cᵀ can be factored in the following way:

Cᵀ P_1 ⋯ P_n = [L̃ | 0],

whereby taking transposes and using P_iᵀ = P_i ∈ ℝ^{r×r},

P_n ⋯ P_1 C = [W; 0],

and W = L̃ᵀ is an upper-triangular matrix. We will use both types of factorizations in what follows.

Section 1.2. A Constrained Least Squares Algorithm

Next, we shall describe an algorithm for the solution of constrained least squares which was developed by Hayes and Halliday [21]. This will be presented in detail, because we need their formulas for later reference.

Algorithm 1.1 (Hayes and Halliday [21]). If we have a least squares problem with constraints, as defined in (1.1)-(1.4), where C is of full rank, then the solution x is obtained as follows.

Step 1: Let C be of full rank; then for 1 ≤ i ≤ r, we have from (1.20) that S_i > 0 and L is nonsingular. Thus there exists a Q given by (1.14) such that, as in (1.16),

(1.16)  [L|0] Qᵀ x = g.

Step 2: Define the vectors d_1 ∈ ℝ^r and d_2 ∈ ℝ^{n−r} such that

(1.23)  (d_1; d_2) = Qᵀ x,
(1.24)  d_1 = L⁻¹ g,

where (u; v) denotes the vector formed by stacking u above v.
Step 3: Solve for d_2, which upon inverting (1.23) will yield a solution

(1.25)  x = Q (d_1; d_2).

Toward this end, using (1.3) and (1.23) we have

(1.26)  Ax = A Q Qᵀ x = [B_1|B_2] Qᵀ x = B_1 d_1 + B_2 d_2 = f + R,

where B_1 is the first r columns, and B_2 the remaining n−r columns, of the matrix AQ,

(1.27)  [B_1 | B_2] = A Q.

From (1.26) we have

(1.28)  B_2 d_2 = f − B_1 d_1 + R.

Let B_2 be of full rank; then the factorization of B_2, using orthogonal transformations, proceeds in the following way. Define

(1.29)  B_{2,1} = B_2,

and for 1 ≤ i ≤ n−r

(1.30)  B_{2,i+1} = U_i B_{2,i},

where at step i the orthogonal transformation U_i ∈ ℝ^{m×m} is defined by (1.6), where B_{2,i} plays the role of C_i in (1.8)-(1.12), and we are factoring from the left. Define

(1.31)  V = U_{n−r} ⋯ U_1,

and

(1.32a)  (W; 0) = V B_2 = B_{2,(n−r)+1}, or
(1.32b)  W = V_1 B_2,

where from (1.20) and (1.21), W ∈ ℝ^{(n−r)×(n−r)} is an upper triangular nonsingular matrix. Then the solution of (1.28) which minimizes R (see [21]) is given by

(1.33)  d_2 = W⁻¹ V_1 {f − B_1 d_1},

where V_1 is the first n−r rows of V,

(1.34)  V_1 = [I_{n−r} | 0] V.

It is shown by Peters and Wilkinson [28] that d_2 as defined in (1.33) is unique. Then by using (1.24), (1.33) and (1.25) we have the solution

(1.35)  x = Q ( L⁻¹ g ; W⁻¹ V_1 (f − B_1 L⁻¹ g) ).

Section 2. Defining the Perturbation Problem

In this section, we shall derive an upper bound on the computational errors of the solution defined by Algorithm 1.1. Toward this end, we will develop a perturbational analysis of the problem defined in (1.1) to (1.4). Due to calculation and rounding errors, the problem which is stored in the computer is not given by the exact matrices and vectors defined in (1.3) and (1.4). Instead, the original quantities have been perturbed to give the new problem:

(2.1)  We are given the two matrices Â ∈ ℝ^{m×n} and Ĉ ∈ ℝ^{r×n}, where

(2.2)  …

… Because AᵀA is a square matrix, we can use property G of Wilkinson [34, p. 290] and conclude our result.
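The norm facts established in this part of the chapter compare the spectral norm, ||A||² = ρ(AᵀA), with the Euclidean norm ||A||_E. A quick numerical check (Python with NumPy; this illustration is mine, not the thesis's) of the characterization (3.1) and of the two-sided inequality of (3.4), ||A|| ≤ ||A||_E ≤ √n·||A||:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

spectral = np.linalg.norm(A, 2)              # ||A||, the spectral norm
euclid   = np.linalg.norm(A, 'fro')          # ||A||_E, the Euclidean norm
rho      = max(np.linalg.eigvalsh(A.T @ A))  # spectral radius of A^T A

# (3.1): the spectral norm squared is the spectral radius of A^T A.
assert np.isclose(spectral**2, rho)

# (3.4): ||A|| <= ||A||_E, and ||A||_E cannot exceed sqrt(n) * ||A||
# because ||A||_E^2 is the sum of at most n nonnegative eigenvalues,
# each bounded by rho(A^T A).
n = A.shape[1]
assert spectral <= euclid + 1e-12
assert euclid <= np.sqrt(n) * spectral + 1e-12
```

The second assertion is exactly the counting argument of the proof of (3.4): the trace of AᵀA equals both the sum of its eigenvalues and the sum of squared entries of A.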
(3.4): If ||x|| = 1, then

||Ax||² = Σ_{i=1}^{m} [ Σ_{j=1}^{n} a_{ij} x_j ]² ≤ Σ_{i=1}^{m} [ Σ_{j=1}^{n} a_{ij}² · Σ_{j=1}^{n} x_j² ] = ||A||_E²,

where we have used the Schwarz inequality on each term in brackets. Therefore, by using the definition of the spectral norm, we have ||A|| ≤ ||A||_E. To prove the left-hand side of the inequality we use (3.1) and (3.2) to obtain ||A||² = ρ(AAᵀ). Because AAᵀ ∈ ℝ^{n×n}, we have from Marcus and Minc [26, p. 23] that Σ_{i=1}^{n} (AAᵀ)_{ii} = Σ_{i=1}^{n} λ_i(AAᵀ). Because xᵀAAᵀx = ||Aᵀx||² ≥ 0, and using [26, p. 69], we have that AAᵀ is non-negative definite and λ_i(AAᵀ) ≥ 0 for 1 ≤ i ≤ n. Therefore

n·ρ(AAᵀ) ≥ Σ_{i=1}^{n} λ_i(AAᵀ) = Σ_{i=1}^{n} (AAᵀ)_{ii} = Σ_{i=1}^{n} Σ_{j=1}^{m} a_{ij}² = ||A||_E²,

and we have proved (3.4).

(3.5): Use (3.1) and (3.2a).

(3.6): Use the definition of ||A||_E.

(3.7): From (3.1) we have ||Q||² = ρ(QᵀQ) = ρ(I) = 1, because all eigenvalues of the identity are 1.

(3.8): ||QA||² = ρ((QA)ᵀ(QA)) = ρ(AᵀA) = ||A||².

(3.9): Because Qᵀ is orthogonal, use (3.5) and (3.8).

(3.10): Follows from the definition of ||A||.

(3.11): Using Varga [32, p. 11], the norm of AB is attained at a unit eigenvector x corresponding to ρ((AB)ᵀ(AB)). Then ||AB|| = ||(AB)x|| ≤ ||A||·||Bx|| ≤ ||A||·||B||. …

If A is singular, then χ(A) is undefined (see [23, p. 81]). We would like to show that (4.1) and (4.2) are equivalent for m = n. This is clear, because if A is not of full rank, then χ(A) is undefined in both (4.1) and (4.2). If A is of full rank and square, then A⁻¹ exists, and using property (I) of Wilkinson [34, p. 290] and (3.3) we have ||A⁻¹||² = ||A⁻¹A⁻ᵀ|| = ||(AᵀA)⁻¹|| = 1/μ(AᵀA). Thus we have that (4.1) is equivalent to (4.2). If A ∈ ℝ^{m×n} is a rectangular matrix, where m ≥ n, then the condition number χ(A) of A is defined to be

(4.3)  χ(A) = sup_{||x||=1, x∈ℝⁿ} ||Ax|| / inf_{||x||=1, x∈ℝⁿ} ||Ax|| ≥ 1

if A is of full rank n; otherwise χ(A) is undefined (see Björck [3]). To see that (4.1) and (4.3) are equivalent for m ≥ n, we have from (3.1) and Definition 3.3 that sup_{||x||=1} ||Ax|| = ρ^{1/2}(AᵀA). Also, we have that AᵀA ∈ ℝ^{n×n} is of full rank, because if it were not, there would exist an x ∈ ℝⁿ such that x ≠ 0 and AᵀAx = 0. But this implies that xᵀAᵀAx = ||Ax||² = 0, and in turn that Ax = 0. This cannot happen if A is of full rank n and x ≠ 0. Therefore, we have that the smallest eigenvalue of AᵀA is non-zero and min λ(AᵀA) = μ(AᵀA). Varga [32, p. 11] has shown that …

… Because S_r is the absolute value of element r on the diagonal of L, which is a lower triangular matrix, we have |λ_r(L)| = S_r, where λ_r(L) is an eigenvalue of L. Therefore, there is a unit vector x ∈ ℝ^r such that Lx = λ_r(L)x. From (1.21), L⁻¹ exists, so we have L⁻¹x = (1/λ_r(L))x. Taking norms, we find

(5.42)  1/S_r = ||(1/λ_r(L))x|| = ||L⁻¹x|| ≤ ||L⁻¹||.

If we use (3.1) we have ||L⁻¹||² = ρ(L⁻ᵀL⁻¹) = ρ((LLᵀ)⁻¹). Using property (F) of Wilkinson [34, p. 290], because (LLᵀ)⁻¹ is a symmetric matrix we have ρ((LLᵀ)⁻¹) = ||(LLᵀ)⁻¹||. Using (3.3), we have ||(LLᵀ)⁻¹|| = 1/μ(LLᵀ), and from (5.5) and (3.2b) we have ρ(LLᵀ) = ρ(CᵀC) = ρ(CCᵀ). This proves that (5.40) is true, using Definition 4.1 for the condition number χ(C).

We shall prove (5.37), (5.38) and (5.39) simultaneously by induction on k, for 1 ≤ k ≤ r, by using (5.2a) and (5.36). For k = 1, using (5.40), we have that (5.37) is valid and hence also (5.39), because

(5.43)  ||δC_1||/S_1 ≤ K Σ_{j=0}^{k−1} (K+1)^j ||δC|| + ||δC|| = (K+1)^k ||δC||,

where we have used (5.1). To see that (5.37) is valid for k+1,

(5.45)  ||δC_{k+1}||/S_{k+1} ≤ …

…

Theorem 7.1 asserts that the relative error ||x̂ − x||/||x|| is bounded by a sum (7.25) in which the condition numbers χ(C) and χ(B_2) enter linearly in the coefficients of ||δf||/||f||, ||δA||/||A|| and ||δC||/||C||, together with the residual term

K_5(r) χ(C) [ ||δA||/||A|| + K(r) χ(C) ||δC||/||C|| ] · χ²(B_2) · ||R||/||f||,

where the constants e_2, e_4, K(r), K_2(n−r), K_4(r) and K_5(r) are given by (6.5b), (7.21), (5.48), (6.7), (7.23) and (7.26) respectively.

Proof: We have from Algorithm 1.1 that solutions x and x̂ exist to both problems, where x and x̂ are given by

(7.3a)  x = Q (d_1; d_2) = Q ( L⁻¹g ; W⁻¹V_1{f − B_1L⁻¹g} ),
(7.3b)  x̂ = Q̂ (d̂_1; d̂_2) = Q̂ ( L̂⁻¹ĝ ; Ŵ⁻¹V̂_1{f̂ − B̂_1L̂⁻¹ĝ} ),

respectively. Subtracting x from x̂ we obtain

(7.4)  x̂ − x = Q̂ (δd_1; δd_2) + δQ (d_1; d_2),

where δd_1 = d̂_1 − d_1 and δd_2 = d̂_2 − d_2. Using (6.10), (7.3a) and (7.4), we find after taking norms

(7.5)  ||x̂ − x|| ≤ ||δd_1|| + ||δd_2|| + ||δQ||·||x||.

Lemma 5.1 gives an upper bound for ||δQ||:

(7.6)  ||δQ|| ≤ K(r) χ(C) ||δC||/||C||.

…

Before estimating ||δd_2||, define the vector ε̄ ∈ ℝ^m by

(7.10)  ε̄ = {f̂ − B̂_1d̂_1} − {f − B_1d_1} = δf − δB_1d̂_1 − B_1δd_1.

If we make use of the equalities (1.33), (3.17) and V̂_1 = V_1 + δV_1, we obtain the following simplified expression for δd_2:

(7.11)  δd_2 = Ŵ⁻¹V̂_1{f̂ − B̂_1d̂_1} − W⁻¹V_1{f − B_1d_1}
            = Ŵ⁻¹[ δV_1{f − B_1d_1} − δW d_2 + V̂_1 ε̄ ].

In the last expression we have made a simplification by using the definition of d_2 given in (1.33). From (1.28) we have B_2d_2 − R = f − B_1d_1. Substituting this into (7.11) we find

(7.12)  δd_2 = Ŵ⁻¹[ δV_1B_2d_2 − δV_1R − δW d_2 + V̂_1 ε̄ ]
            = Ŵ⁻¹[ (δV_1B_2 − δW) d_2 − δV_1R + V̂_1 ε̄ ].

Using (1.32b) we see that

(7.13)  δW = V̂_1B̂_2 − V_1B_2 = V̂_1δB_2 + δV_1B_2.

If (7.13) is substituted into (7.12), we find after the cancellation of δV_1B_2

(7.14)  δd_2 = Ŵ⁻¹[ −V̂_1δB_2d_2 − δV_1R + V̂_1 ε̄ ].

If substitution of (7.10) into (7.14) is made, we have after regrouping

(7.15)  δd_2 = Ŵ⁻¹V̂_1{ δf − δB_1d̂_1 − B_1δd_1 − δB_2d_2 } − Ŵ⁻¹δV_1R.

Taking norms in (7.15), and using (6.6b) and (6.10), we have

(7.16)  ||δd_2|| ≤ { ||δf||/(||A||·||x||) + [(||B_1|| + ||δB_1||)/||A||]·||δd_1||/||x|| + ||δB_1||/||A|| + ||δB_2||/||A|| } · ||A|| · ||Ŵ⁻¹|| · ||x|| + e_2 ||Ŵ⁻¹|| · ||δV_1|| · ||R||.

From (1.27), by using (3.9) and (3.13), we have that

(7.17)  ||B_1||, ||B_2|| ≤ ||A||,

and

(7.18)  ||A||·||x|| ≥ ||f + R|| ≥ ||f|| − ||R|| ≥ ||f||/e_2, where e_2 = 1/(1−Δ).

Using inequalities (7.17) and (7.18) to simplify (7.16), we obtain

(7.19)  ||δd_2|| ≤ e_2{ e_2||δf||/||f|| + (1 + ||δB_1||/||A||)·||δd_1||/||x|| + ||δB_1||/||A|| + ||δB_2||/||A|| } · (||A||/||B_2||) · χ(B_2) · ||x|| + e_2 ||Ŵ⁻¹|| · ||δV_1|| · ||R||,

where (6.4) has been used. From (6.9b), and our hypotheses (7.1d) and (7.1e), it follows that

(7.20)  ||δB_1|| ≤ 2Δ||A||,

thereby simplifying the coefficient of ||δd_1||/||x|| to be

(7.21)  e_4 = 1 + 2Δ.

Using (6.9a) as an upper bound for ||δB_1|| and ||δB_2||, and substituting (7.9) for ||δd_1||, we obtain after regrouping

(7.22)  ||δd_2|| ≤ e_2{ e_2||δf||/||f|| + 2||δA||/||A|| + K_4(r) χ(C) ||δC||/||C|| } · (||A||/||B_2||) · χ(B_2) · ||x|| + e_2 ||Ŵ⁻¹|| · ||δV_1|| · ||R||,

where

(7.23)  K_4(r) = e_2 e_4 K_1(r) + 2K(r), and K_1(r) = √r (2+Δ)/(2(1−Δ)).

We will now obtain an upper bound for the coefficient of ||R||, by applying the estimates given in (6.4), (6.8b), (6.9a) and (7.18):

(7.24)  e_2 ||Ŵ⁻¹|| · ||δV_1|| · ||R|| ≤ K_5(r){ ||δA||/||A|| + K(r) χ(C) ||δC||/||C|| } · χ²(B_2) · (||R||/||f||) · (||A||/||B_2||) · ||x||,

where

(7.26)  K_5(r) = e_2 K_1(r) + K(r).

We have shown that the condition numbers appear only linearly in (7.25), except for the coefficient of ||R||/||f||. The question could be asked whether the term χ²(B_2) is reasonable, or if it might be the fault of the error analysis. Van der Sluis [31] has shown, using a geometrical argument, that squaring of the condition number in the unconstrained case can indeed be realized. If χ(B_2)·||R||/||f|| is not too large, however, the effect of χ²(B_2) can be minimized, and we would expect the condition numbers to appear only linearly in the error. Also, the full effect of a condition number is seldom or possibly never realized; see Wilkinson [34].

Finally, because A and B_2 are calculated during our solution of (1.1) to (1.4), we have the bound

||A||/||B_2|| ≤ √(n−r) · ||A||_E / ||B_2||_E,

which is easy to calculate.

Section 8. Rounding Errors in Computation

We will state here for later reference some known results on computational errors as developed by Wilkinson [34], [35].
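In the model used below, each floating point operation carries a relative error of at most 2^{−t}, where t is the number of binary digits in the mantissa. The correspondence with a modern machine can be checked directly; the identification of IEEE-754 double precision with t = 52 stored mantissa bits is my illustration, not the thesis's:

```python
import sys

# IEEE-754 double precision stores 52 mantissa bits (plus a hidden leading
# bit); sys.float_info reports the layout, so t = mant_dig - 1 = 52.
t = sys.float_info.mant_dig - 1
eps = sys.float_info.epsilon          # the gap between 1.0 and the next float

# The unit in the last place of 1.0 is exactly 2^{-t}.
assert eps == 2.0 ** (-t)

# fl(a + b) = (a + b)(1 + e) with |e| <= 2^{-t}: a contribution below the
# last mantissa bit of 1.0 is rounded away entirely...
assert 1.0 + 2.0 ** (-t - 1) == 1.0
# ...while a contribution of one unit in the last place survives.
assert 1.0 + 2.0 ** (-t) > 1.0
```

This is the same single-operation model from which all of the aggregate bounds of this section (vector addition, matrix multiplication, triangular solves, orthogonal transformations) are built up.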
For our purposes, we will limit ourselves strictly to floating point computation. To develop our results, we introduce the very general notation fl(· op ·), which is used by Wilkinson [36] and others. The "fl" notation means that if we have two quantities A and B, which could be numbers, vectors, matrices, etc., and a corresponding operator "op" such as scalar addition, inner product, matrix multiplication, etc., then fl(A op B) is that quantity which would result in the computer by performing that operation by some computer program using floating point arithmetic.

A floating point binary number x consists of two parts, the binary exponent b and the binary mantissa a, where −½ ≥ a > −1 or ½ ≤ a < 1. Then x is expressed as x = a·2^b. The computer memory allocated for x is limited to a certain number of digits, which are divided up in some way between a and b. We will denote by t the number of binary digits allocated to the mantissa. Also, we will assume that the number of digits allotted to the exponent is large enough to accommodate all of our calculations, and will not concern ourselves here with the problem of exponent under- or over-flow. Finally, we will assume that the computer with which we are working has a double-precision accumulator. This means that when single precision floating point numbers are retrieved from the memory to perform certain arithmetic operations, they are allocated a double precision mantissa before the operation is performed. The operation is carried out in double precision, and the result is not rounded to single precision until it is sent back to the memory. Under these assumptions Wilkinson [36] has shown the following results.

Section 8.1. Vector Addition

Given two real numbers a and b, there exists a real ε such that

(8.1)  fl(a+b) = (a+b)(1+ε), where
(8.2)  |ε| ≤ 2^{−t}.

Therefore, if we have two vectors x, y ∈ ℝⁿ,

(8.3)  ||fl(x+y) − (x+y)|| ≤ 2^{−t} ||x+y||.

Section 8.2. Matrix Multiplication

Given a matrix A ∈ ℝ^{m×n} and a vector b ∈ ℝⁿ, let E denote the computational error in calculating A·b,

(8.4)  E = fl(A·b) − A·b,

where the entries of E are represented by e_i for 1 ≤ i ≤ m. Wilkinson [36, p. 83] has shown that under the assumption n·2^{−t} < 0.1,

(8.5)  |e_i| ≤ 1.06·2^{−t}[ n|a_{i1}||b_1| + Σ_{k=2}^{n} (n−(k−2))|a_{ik}||b_k| ]
           ≤ 1.06·2^{−t}·n·Σ_{k=1}^{n} |a_{ik}||b_k|
           ≤ 1.06·2^{−t}·n·(Σ_{k=1}^{n} a_{ik}²)^{1/2}(Σ_{k=1}^{n} b_k²)^{1/2},

where the last inequality is obtained by using Schwarz's inequality. Therefore, it follows from the definition of ||·||_E and (3.4) that

(8.6)  ||E|| ≤ 1.06·2^{−t}·n·||A||_E·||b|| ≤ 2^{−t}·K̄_1(n)·||A||·||b||,

where

(8.7)  K̄_1(n) = 1.06 n^{3/2}.

Section 8.3. Solution of a Triangular Set of Equations

Let L ∈ ℝ^{n×n} be an upper- or lower-triangular non-singular matrix. We now wish to know the computational error introduced by solving the linear equation

(8.8)  Lx = b, where x, b ∈ ℝⁿ,

for the vector x. Without loss of generality we will assume here that L is lower triangular. Wilkinson [36] has shown that the entries x_r of x are calculated sequentially from r = 1 to r = n by using

(8.9)  x_r = fl( (−ℓ_{r1}x_1 − ℓ_{r2}x_2 − ⋯ − ℓ_{r,r−1}x_{r−1} + b_r)/ℓ_{rr} ),

for 1 ≤ r ≤ n. Note that the vector x which we have constructed in (8.9) is in general not the solution of (8.8); rather, it is an exact solution of the perturbed problem

(8.10)  (L + δL)x = b,

where the lower triangular matrix δL is bounded by [36, p. 103]

(8.11)  ||δL|| ≤ ||δL||_E ≤ 1.06·2^{−t}(1 + 1.06·2^{−t}·3(n+2)/2)||L||_E ≤ 2^{−t}·K̄_2(n)·||L||,

with

(8.12)  K̄_2(n) = 1.06 √n (1 + 1.06·2^{−t}·3(n+2)/2),

and the value of n restricted sufficiently to make the term 1.06·2^{−t}·3(n+2)/2 ≪ 1. Also, if L⁻¹ exists, then Wilkinson [36] has shown that (L + δL)⁻¹ also exists.

Section 8.4. Orthogonal Transformation

An elementary orthogonal transformation P ∈ ℝ^{n×n} has the form P = I − 2wwᵀ, where w ∈ ℝⁿ and ||w|| = 1.
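The backward-error statement (8.9)-(8.12) of Section 8.3 says the computed solution of a triangular system solves a nearby system (L+δL)x = b exactly, so its residual is small relative to 2^{−t}·||L||·||x||. A sketch (Python with NumPy; the final bound is a deliberately loosened form of (8.11), not the thesis's sharp constant):

```python
import numpy as np

def forward_solve(L, b):
    """Forward substitution exactly as in (8.9), carried out in ordinary
    double precision floating point arithmetic."""
    n = len(b)
    x = np.zeros(n)
    for r in range(n):
        x[r] = (b[r] - L[r, :r] @ x[:r]) / L[r, r]
    return x

rng = np.random.default_rng(1)
n = 50
# A well-conditioned lower-triangular test matrix (diagonally dominant).
L = np.tril(rng.standard_normal((n, n))) + n * np.eye(n)
b = rng.standard_normal(n)
x = forward_solve(L, b)

# Since (L + dL) x = b exactly for some small dL, the residual Lx - b equals
# -dL x and is bounded by ||dL|| * ||x||.  We test against a generous multiple
# of the first-order term of (8.11), with t = 52 for double precision.
resid = np.linalg.norm(L @ x - b)
bound = 2.0**-52 * 1.06 * n**1.5 * np.linalg.norm(L, 'fro') * np.linalg.norm(x)
assert resid <= bound
```

The point of the experiment is the form of the bound, not its constant: the residual scales with the machine precision 2^{−t} and with ||L||·||x||, independently of the conditioning of L.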
Given the vector w, we will denote the matrix "P" calculated using floating point arithmetic by the symbol P̄ (which may no longer be orthogonal), i.e.

(8.13)  P̄ = fl(P),

where it is assumed that n·2^{−t} < 0.1. Wilkinson [35] has shown that if A ∈ ℝ^{n×m} is any matrix, premultiplication by P̄ using floating point arithmetic gives

(8.14)  ||fl(P̄A) − PA|| ≤ 2^{−t}·β·√n·||A||,

where

(8.15)  β = 12.36.

Also, if A is premultiplied sequentially by orthogonal transformations P_1, P_2, …, P_m, and for 1 ≤ i ≤ m

(8.16)  A_{i+1} = fl(P̄_i A_i), where A_1 = A,

then there exists a δA ∈ ℝ^{n×m} such that

(8.17)  A_{m+1} = P_m · P_{m−1} · ⋯ · P_1 (A + δA),

with

(8.18)  ||δA|| ≤ ||δA||_E ≤ m·2^{−t}·β·(1 + 2^{−t}β)^{m−1}·||A||_E ≤ 2^{−t}·K̄_3(m)·||A||,

where β is given in (8.15) and K̄_3(m) is defined by

(8.19)  K̄_3(m) = β m^{3/2} (1 + 2^{−t}β)^{m−1}.

The results of (8.14) through (8.19) hold also for post-multiplication by orthogonal transformations. This is true because

(8.20)  BP = (PBᵀ)ᵀ,

where B ∈ ℝ^{m×n} and P = Pᵀ, and our norms are invariant under transposition.

Section 9. Rounding Error Analysis

We now wish to analyze the effect of computational errors on our least squares solution with constraints. We will show that most of the error can be accounted for by perturbationally induced errors in the initial problem. Algorithm 1.1 gives the exact solution of (1.1) to (1.4) to be

(9.1)  x = Q ( L⁻¹g ; W⁻¹V_1{f − B_1L⁻¹g} ).

We will now define stepwise the order of computation defined in (9.1). This will determine the effect of the errors.

Step 1: Reduce C by orthogonal transformations to obtain L̄. Define for 1 ≤ i ≤ r

(9.2)  C̄_{i+1} = fl(C̄_i P̄_i), where C̄_1 = C and [L̄|0] = C̄_{r+1}, and

(9.3)  P̄_i = fl(P̂_i),

where P̂_i ∈ ℝ^{n×n} is that elementary orthogonal matrix which exactly reduces row i of C̄_i. Define the orthogonal matrix Q̂ by

(9.4)  Q̂ = P̂_1 P̂_2 ⋯ P̂_r.

From (8.17) and (8.18) there exists a perturbation δC in C such that

(9.5)  [L̄|0] = (C + δC) Q̂,

where

(9.6a)  ||δC|| ≤ 2^{−t}·K̄_3(r)·||C||, or
(9.6b)  ||δC|| = O(2^{−t}||C||).
Step 2: Calculate L̄⁻¹g. From (8.10) and (8.11) there exists a perturbation δL* in L̄ such that

(9.7)  fl(L̄⁻¹g) = (L̄ + δL*)⁻¹ g,

where (L̄ + δL*)⁻¹ exists because L̄⁻¹ does, and

(9.8)  ||δL*|| ≤ 2^{−t}·K̄_2(r)·||L̄||.

Step 3: Calculate B̄_1 and B̄_2. Define for 1 ≤ i ≤ r

(9.9)  Ā_{i+1} = fl(Ā_i P̄_i), where Ā_1 = A,
(9.10)  [B̄_1|B̄_2] = Ā_{r+1},

where P̄_i is defined by (9.3). From (8.17) and (8.18) there exists a matrix δA* ∈ ℝ^{m×n} such that

(9.11)  [B̄_1|B̄_2] = (A + δA*) Q̂,

where

(9.12a)  ||δA*|| ≤ 2^{−t}·K̄_3(n)·||A||, or
(9.12b)  ||δA*|| = O(2^{−t}||A||).

Step 4: Calculate B̄_1(L̄ + δL*)⁻¹g. Define ε_1 ∈ ℝ^m by

(9.13)  fl(B̄_1·(L̄ + δL*)⁻¹g) = B̄_1(L̄ + δL*)⁻¹g + ε_1.

A bound for ε_1 is given in (8.6):

(9.14)  ||ε_1|| ≤ 2^{−t}·K̄_1(r)·||B̄_1||·||(L̄ + δL*)⁻¹g||.

Step 5: Calculate fl(f − [B̄_1(L̄ + δL*)⁻¹g + ε_1]). Define ε_2 ∈ ℝ^m by

(9.15)  fl(f − [B̄_1(L̄ + δL*)⁻¹g + ε_1]) = f − (B̄_1(L̄ + δL*)⁻¹g + ε_1) + ε_2.

A bound for ε_2 is given by (8.3):

(9.16)  ||ε_2|| ≤ 2^{−t}·||f − (B̄_1(L̄ + δL*)⁻¹g + ε_1)||.

Step 6: Calculate W̄ by reducing B̄_2 with elementary orthogonal transformations. Define for 1 ≤ i ≤ n−r

(9.17)  B̄_{2,i+1} = fl(Ū_i B̄_{2,i}), and B̄_{2,1} = B̄_2,

where

(9.18)  Ū_i = fl(Û_i),

and Û_i ∈ ℝ^{m×m} is that elementary orthogonal transformation which exactly reduces B̄_{2,i}. Define the orthogonal matrix V̂ by

(9.19)  V̂ = Û_{n−r} · ⋯ · Û_1.

From (8.17) and (8.18) there exists a matrix δB_2* ∈ ℝ^{m×(n−r)} such that

(9.20)  (W̄; 0) = B̄_{2,(n−r)+1} = V̂(B̄_2 + δB_2*),

and

(9.21)  ||δB_2*|| ≤ 2^{−t}·K̄_3(n−r)·||B̄_2||,

where W̄ ∈ ℝ^{(n−r)×(n−r)} is an upper triangular matrix.

Step 7: Premultiplication by V̂. Define

(9.22)  z̄_1 = f − (B̄_1(L̄ + δL*)⁻¹g + ε_1) + ε_2,

and for 1 ≤ i ≤ n−r

(9.23)  z̄_{i+1} = fl(Ū_i z̄_i).

From (8.17) and (8.18) there exists a vector δz ∈ ℝ^m such that

(9.24)  z̄_{(n−r)+1} = V̂(z̄_1 + δz),

where

(9.25)  ||δz|| ≤ 2^{−t}·K̄_3(n−r)·||z̄_1||.

We now define ȳ ∈ ℝ^{n−r} by

(9.26)  ȳ = [I_{n−r}|0]·z̄_{(n−r)+1} = V̂_1(z̄_1 + δz),

where

(9.27)  V̂_1 = [I_{n−r}|0]·V̂.

It is clear that V̂_1 is the first n−r rows of V̂.
Step 8: Calculate W'l y We have from (8.10) and (8.11), that there exists a perturbation * 6W such that - -l - * -1 (9.28) fl(W y):(W+ 6W) y, —' * -l . —-1 . . where (W + 6W ) ex1sts because W ex18ts and t— * _. — (9.29) ||6W Hg 2 K2(n-—r) ||W||. Step 9: Premultiply by Q. Define 71"an from(9.7), (9.28), (9.26) and (9.22) by ._ * _1 (L+ 6L ) g (9030) 1 = — *-1A — — *-1 _(W+6W) V1{6z+f-B1(L+6L ) g-51+52}J andfor lgiér (r+l-i) 11 (9.31) ~1. =u(i§ 1+1 )° Therefore our calculated solution is given by (9032) X=lr+l , * and from (8. l7) and (8. 18) there exists a vector 61 e [Rn such that _ A— >:< (9.33) x=Q(£1+61 ), 56 where 9: -t— _ (9.34) H51 Hg 2 K3(1) ||11||. From (9. 33) and (9. 30) we have an exact representation of our calcu- lated least squares vector 3? using floating point arithmetic. We now proceed to estimate how far i is from the true solution x given in (9. 1). This will be accomplished in two parts. First we will estimate how far ; is from an intermediate vector 2 which is the exact solution of a perturbed problem, and then estimate how far 3‘; is from the exact solution x of the original problem. In order to obtain a compact expression, we will make the assumption that the following quantities are moderately small. For a real A satisfying 0:< --1 (9.35h) l|6WllllW INA. where Wsz 6W+W. We will now account for most of our errors by interpreting them as errors induced by a perturbation of the initial problem (9. 1), and then apply Theorem 7. 1 to bound this portion of the error. Consider the following constrained least squares problem A A (9.36a) C=C+5C,g=g,i.e. 5g=0, where 5 C is defined in (9. 5), A A (9.36b) A=A+5A,f=f,i.e. 5f=0, where >5: AT >:< (9.36c) 5A=[0|5Bz ]Q +5A , * * 6B2 and 6A are defined in (9. 20) and (9.11) respectively, with solution 3? which satisfies AA (9. 37) Cx = g , minimizing the norm of A AA (9038) RzAX'fo 58 A /\ From (9. 5) it is clear that the orthogonal matrix Q reduces C _ exactly to [LID] , where (9. 
6b) gives a bound for I [GO] I . Also, from (9. 36b) and (9. 11) we have 939 26-[s.s.te*] (' ’ " 1 2 2 ' . A _ * and from (9.20) the orthogonal matrix V reduces B2 + 632 to A A .— [E31] . This is the reason we have L = L and W = W . Therefore, using Algorithm 1. l we have :4 g a A A A l (9040) X: Q = Q a , __ A ._ __ w l v {f - B L l g} 7' L... l 1 .H where — * 3 f E 3 a (9.41) (B2 + 6B2 ) 2 - - 1 1+ . In order to obtain a bound for ”Till I, ||§Z|| we use (9.11) and apply (3.13), (3.9)and (9. 12b) to obtain (9.42) ”3,”. IIBZII=0(||AII)- Using (9. 42) on (9. 21) we have * -t (9.43) ”45132 II = 0(2 ||A||). Therefore, from (9. 36¢), (9.43), (3.13) and (9. 12b) it follows that -t .(9.44) ||5A|| = 0(2 Hall) , an: A'1‘ * where 5A: SBZ [0'1] Q +5A . 59 If we use Theorem 7.1, (9.44) and (9.6b) it follows that (9.45) (morn/”XII =0(2'tx(<:)) +o<2‘tx IIAII/IIBZII) +<><2't MC) 7((32) IIAII/IIBZID + o<2'tx 12(BZXIIRIl/IIfIINIIAIIZI 2 IIBZII n. where in the last expression we have merged terms by using the fact that K(C) )1 . We now want to estimate the error which we cannot account for by perturbation. Toward this end use (9. 33). (9. 30) and (9. 40) to define Al e [Kr and A2 ean-r _ A A * AA} (9.46) x-x=Q6£ +QA 2 Taking norms, it follows from (9.46) and (6.10) that _ * (9-47) llx-§||<||51||+||A1||+||A2||- We will consider first the vector Al, and use (9.30), (9.40) and (3.17) to obtain - *-l —-l (L+5L) g-L g (9.48) A1 --1 *-1-—1 *A -(I+L 6L) L 6L d 60 Taking norms and using (9. 35g) and (3.16) bounds the first inverse, using (9. 8), (9.35a), (6.5c) and (6.10) yields (9.49) ||A1||= 0(2'tX(C) (IQII). Before evaluating I IAZI I, we will bound the vectors 51 and 52 . To accomplish this, we show the following result from (9. 39) by using (3.9) and (3.13) .. _ >§< A (9-50> llBlll. Ill-32+(513z IISIIAII- _-1 A Also, it follows from the fact that L g: (11 - *-l --1 *-1" (9.51) (L+5L) g=(I+L 5L) (1 . 1 Therefore, from (9.14), (9. 50), and (9. 51) we have using (9. 
35g), (3.16) and (6.10) that -t A A (9-52) ||51||=0(2 IIAII IIXI|)~ From (9. 16), a bound for I ICZI I follows from the simplification of the expression -- *— —- _ _ (9.53) f-B1(L+6L)lg-<€=f-BL g+B1(L — * :(B +5B 3 £31 L6L* L5L*d 2 2) - + (+ ) 1-619 2 1 Where we have used (3.17), (9. 40) and (9.41). In (9. 35g) it is assumed that I IE-1 I I I I5L*I I < A < 1, upon applying this twice and using (9. 50), (3.16), (6.10) and (9. 52) we have that the norm of 61 (9.53)is 0(||K|| ||§2||)+0(||fi||), hence from (9.16) -2 A A -t (9.54) ||£2|| =0(2 IIAII IIxII)+O(2 A A A A A '[IIRIIMHAII le||)]|lAll lell). A A A Using (9.38) we find IIAII IIxII )IIfII - IIRII , which, when combined with (9. 54) and the assumption (9. 35¢) that A IIRI |/||£| |4 A <1, yields -t A A <9-55) ||52||=0<2 IIAII IIxII). We now define the vector C3e [Rm by {f-§1(E+5L*)'lg-61+5} -{£-§ E'lg} (9.56) 6 2 1 3 -- -—-1 *-1--1 *A -5, E, B1(I+L 6L) L 5Ld1 1+2 A bound for I I63I I is obtained in similar fashion as that employed for I ICZI I, therefore we have from (9. 50), (9.35g), (3.16), (9. 8), (9.35a), (6.5c). (9.52) and (9.55) that -t A A (9.57) ||<53||=0(2 XIIAII IIxII). Finally, using (9. 25) and (9.22) we have 't- - " * -1 5 a (9.58) ||5z||<2 K3(n-r)IIf-B1(L+5L) g- 1+ 2”. Upon examination of (9. 16) and (9.58), we see that they differ only by ($2 and the factor E3(n -r), therefore using (9.55) it follows that -t A A (9.59) ||52H=0(2 IIAII UK”)- The definition of A2 comes from (9.46). (9.40), (9.33) and (9.30), 62 which yields the following simplification using (9. 56), (3. 17) and (9.40) _ —— *-1 ) _ ::<_ A (9.60) A2=(W+5W)1V1[52+{f-B1(L+5L g _-1/\ ___1 _£1+£2}]-W VIIf-BIL g] 97-1 (1 + 6w* v—v"1)'1 [-5w* 32 + 91(53 + 57.)]. Using (9.35h). (3-16). (9.29). (6-10). (3-14). (3-7). (9.57). (9.59). (6. 6b) and the fact that W = W, it follows upon taking norms -t _ _1 A A (9.61) IIAZII =0<2 1(0) IIW II IIAII IIxII). — _ * A where we have used the fact that IIWII = IIBZ+6B2 IIQ IIAII , and 7L(C) 21. 
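Before stating the final bound, the qualitative behavior it predicts can be checked numerically. The following is a hedged sketch, not the algorithm analyzed in this chapter: it solves the equality-constrained least squares problem min ||Ax - f|| subject to Cx = g by a simple null-space method (all matrix sizes and variable names here are illustrative), and compares a single-precision solve with a double-precision one, showing the relative error scaling with the machine accuracy 2^{-t}.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 40, 10, 3              # rows of A, unknowns, constraint rows
A = rng.standard_normal((m, n))
C = rng.standard_normal((r, n))
x_true = rng.standard_normal(n)
f = A @ x_true                   # consistent data, so the residual R is zero
g = C @ x_true

def constrained_lstsq(A, C, f, g):
    # Null-space method (a stand-in for the orthogonal-reduction
    # Algorithm 1.1 of the text): x = x_p + Z y with C x_p = g, C Z = 0.
    x_p, *_ = np.linalg.lstsq(C, g, rcond=None)   # particular solution
    _, _, Vt = np.linalg.svd(C)
    Z = Vt[C.shape[0]:].T                         # basis for null(C)
    y, *_ = np.linalg.lstsq(A @ Z, f - A @ x_p, rcond=None)
    return x_p + Z @ y

x64 = constrained_lstsq(A, C, f, g)
x32 = constrained_lstsq(A.astype(np.float32), C.astype(np.float32),
                        f.astype(np.float32), g.astype(np.float32))

err64 = np.linalg.norm(x64 - x_true) / np.linalg.norm(x_true)
err32 = np.linalg.norm(x32 - x_true) / np.linalg.norm(x_true)
print(err64, err32)
```

With t = 24 (single precision) instead of t = 53, the relative error grows by roughly the ratio of the two machine accuracies, in the spirit of the 2^{-t} factor in the bounds above; the condition numbers κ(C) and κ(B_2) supply the remaining factors.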
Finally, from (9.44) it follows that ||Â|| = O(||A||), hence

(9.62)  ||Δ_2|| = O(2^{-t} κ(C) κ(B_2) ||x̂|| ||A|| / ||B_2||),

where we have used the fact that ||W̄|| = ||B_2|| and (6.4). We are now in a position to bound ||δ_1^*||. From (9.34) we have

(9.63)  ||δ_1^*|| = O(2^{-t} ||ℓ̄_1||),

and substitution of (9.33) into (9.46) yields, after rearrangement,

(9.64)  ||ℓ̄_1|| ≤ ||x̂|| + ||Δ_1|| + ||Δ_2||.

Therefore, a bound for ||δ_1^*|| follows from (9.63), (9.64), (9.49) and (9.62):

(9.65)  ||δ_1^*|| = O(2^{-t} κ(C) ||x̂||) + O(2^{-2t} κ(C) κ(B_2) ||x̂|| ||A|| / ||B_2||).

Combining (9.47), (9.49), (9.62) and (9.65), and dropping the higher order terms in 2^{-t}, we obtain

(9.66)  ||x̄ - x̂|| = O(2^{-t} κ(C) ||x̂||) + O(2^{-t} κ(C) κ(B_2) ||x̂|| ||A|| / ||B_2||).

From (9.45) we have a bound for ||x̂ - x|| / ||x||, which can be tested to see whether our assumption that ||x̂ - x|| / ||x|| < Δ is reasonable. If it is, then from (9.66) we have

(9.67)  ||x̄ - x|| = O(2^{-t} κ(C) ||x||) + O(2^{-t} κ(C) κ(B_2) ||x|| ||A|| / ||B_2||).

From (9.45) and (9.67) follows our final result

(9.68)  ||x̄ - x|| / ||x|| = O(2^{-t} κ(C)) + O(2^{-t} κ(B_2) ||A|| / ||B_2||)
            + O(2^{-t} κ(C) κ(B_2) ||A|| / ||B_2||)
            + O(2^{-t} κ(C) κ²(B_2) (||R|| / ||f||) (||A|| / ||B_2||)²).

(9.68) shows the errors introduced by solving (9.1) using floating point arithmetic. Here we have suppressed all quantities which are constant (depending only on Δ, r, n, and m) in order to illustrate the dependence on the machine accuracy, 2^{-t}, and on the condition of the matrices C and B_2. As hoped for, the influence of the condition of the constraint equations C appears only as a linear factor in (9.68). However, the term of major influence is the last one, in which the condition of B_2 is squared. The effect of this term is minimized if the quantity κ(B_2) ||R|| / ||f|| is not too large, i.e., if the residual R is small compared to f.

CHAPTER 2

BLENDING FUNCTION THEORY

In Section 1, an introduction to blending function theory as developed by Gordon and Hall [12, 13, 14, 15, 16, 17] is given. The presentation generalizes their results to the more general setting of interpolation spaces. Most of the proofs of Section 1 follow directly from theirs.
However, the more general setting of Section 1 will enable the application of blending function techniques in the following chapter to be accomplished with greater ease. In Section 2 the dimension of discretized blending function spaces will be shown and several bases will be explicitly developed in terms of the cardinal bases for the corresponding interpolation spaces. In Sections 3 and 4, natural cubic spline blending will be developed with error bounds given for "approximate" natural cubic spline blending. Finally, for fe C [a,b] and ge C( Ea, b]x[c, d]) we define I If! I -- sup |fl and I lgll -- sup (saw). ag x‘b ai}i_l , it is understood that these two quantities are always associated with V(1rx, M, m). We are now in a position to define the projection operator P \ x of a bivariate function. Given the interpolation vector space V(1rx , M, m), the corresponding interpolation function 0., basis (In. 0) {i:IM1 i-1 and fa C , we define Px[f] pointwise to be M . (1.12a) (Px[f])(x,y)‘= z f(0(1):0) i=1 (xi. v) 45x) . where it is clear that 71 0) (1.1210) (D‘a‘i’m Px[f]) (xi. 9) = 159”" (xi. 9) for 1g i < M. Examination of (1. 12a) and (1. 12b) shows that Px is indeed a projection operator on C(m’ 0). We define the univariate interpolation vector space V(1ry, N, n) _C_ C(n) c, d] in the variable y with interpolation func- tion 8:1N—+ JN and cardinal basis {413:1 in an identical manner as above. The corresponding projection operator nyf] for (0.11) f6 C is defined by N . >3 f(0.130)) (1.13) (P [3]) (my) = Y J=1 where for 1< j g N (D(o.(3(j)> (0.50)) (1014) . Py[f]) (X, Yj) : f (X, Yj) ° In the usual manner, we define the sum or difference of two operators to be the pointwise sum or difference, and the product as composition. Direct calculation shows that the following state- . (1111.111) ments are valid on C where m1) m and n1)» n , and Px and Py are defined as above. (1. 15a) Px PY = PY Px , (0.1) (1.15b) Dw'l) Px = Px D for 0<£< n1, and (k,0) for nggm . 
(1.15c)  D^{(k,0)} P_y = P_y D^{(k,0)}  for 0 ≤ k ≤ m_1.

We define the projection operators

(1.16a)  R_x = I - P_x,
(1.16b)  R_y = I - P_y,

where I is the identity operator. The following relations are a direct consequence of (1.15a) to (1.16b). On C^{(m_1, n_1)}, where m_1 ≥ m and n_1 ≥ n,

(1.17a)  R_x R_y = R_y R_x,
(1.17b)  D^{(0,ℓ)} R_x = R_x D^{(0,ℓ)}  for 0 ≤ ℓ ≤ n_1, and
(1.17c)  D^{(k,0)} R_y = R_y D^{(k,0)}  for 0 ≤ k ≤ m_1.

The first observation we make is that, given f ∈ C^{(m,n)}, P_x P_y[f] is the tensor product interpolant to f, where

(1.18)  (P_x P_y[f])(x, y) = Σ_{i=1}^{M} Σ_{j=1}^{N} f^{(α(i), β(j))}(x_i, y_j) φ_i(x) ψ_j(y);

see Gordon and Hall [14]. It follows from (1.18) that for 1 ≤ i ≤ M and 1 ≤ j ≤ N

(1.19)  (D^{(α(i), β(j))} P_x P_y[f])(x_i, y_j) = f^{(α(i), β(j))}(x_i, y_j).

The error of the tensor product interpolant to f is given by

(1.20)  (I - P_x P_y)[f] = R_x[f] + R_y[f] - R_x R_y[f].

Gordon and Hall [17] (see Theorem 1.2 below) have shown that there is an interpolation scheme using the projection operators P_x and P_y which eliminates the terms R_x[f] and R_y[f] in (1.20).

Definition: The operator on C^{(m,n)} defined by

(1.21)  P_x ⊕ P_y = P_x + P_y - P_x P_y

is the blending function operator, and P_x ⊕ P_y[f] is the blending function interpolant to the function f ∈ C^{(m,n)}.

Theorem 1.2 (Gordon and Hall [17]). If f ∈ C^{(m,n)}, then

(1.22)  (I - P_x ⊕ P_y)[f] = R_x R_y[f].

Proof: I - (P_x + P_y - P_x P_y) = (I - P_x) - (I - P_x) P_y = (I - P_x)(I - P_y) = R_x R_y.

The set of functional values and derivatives on which P_x ⊕ P_y interpolates f is given in the following theorem.

Theorem 1.3 (Gordon and Hall [17]). If f ∈ C^{(m,n)}, then for 1 ≤ i ≤ M and y ∈ [c, d]

(1.23a)  (D^{(α(i), 0)} P_x ⊕ P_y[f])(x_i, y) = f^{(α(i), 0)}(x_i, y);

also, for 1 ≤ j ≤ N and x ∈ [a, b]

(1.23b)  (D^{(0, β(j))} P_x ⊕ P_y[f])(x, y_j) = f^{(0, β(j))}(x, y_j).

Proof: By direct calculation.

What we have accomplished is the approximation of a bivariate function by a sum of univariate functions. Therefore, it should be clear that the image of C^{(m,n)} under the operator P_x ⊕ P_y is not a finite dimensional space. In comparison, from (1.
18), it is clear that the image of C(m’ n) under the tensor product operator Px P is a finite dimensional space. Therefore, to implement the blending function method, it is usually necessary to make second level approximations to f(a(i)' 0) (xi, °) and f(o’ BU» (. ,yj) which have an accuracy compatible with Px® PyEf] . This is the sacrifice which must be made in order to achieve this gain in accuracy. Next, we want to show how much more accuracy is gained by blend- ing function techniques, to see if it will justify the extra work of irnplementation. Toward this end, we note that the accuracy of blending function methods depends upon the interpolation accuracy of the two univariate spaces V(1rx, M, m) and V(1ry, N, 11). There- fore, we introduce the following general notation for an univariate e r ror bOlmd. 75 Definition 1.4: Let V(1rx, M, ml) be an interpolation vector space, with interpolation function 0 , and cardinal basis {69:11 . We say V(11-x , M, m 1) has an error bound for the kth derivativeif and only if there exists a function gx (k) = gx (hx’ m1, m2 ,k) such that if x e [a, b] is any point where ¢i(k )(x) exists for l ‘ ig M, (m2) and f e C [a, b] where m2 )max (ml, k} then k (m l (1.24) “((17:71) - 2 {Minn 1M .( )(§)|< <8x(k)|l f 2 ll » i=1 where hx is a parameter of the mesh fix. It is clear, if V(1rx, M, ml) has an error bound for the kth derivative, (m2: n) g :0 , ye [c, d] , Px is the corresponding projection opera- tor and x is any point satisfying Definition 1. 4, then (1.25a) (D(k' 0) 19x [0) (9:, y) exists, and Ba. 0) (1.25b) ( Px[£])(9t,-) t C(n)([c,dJ) . Therefore, it follows from (1.15b) that for o < 1-4 n (1. 26a) (D‘k' “ Px[f1)(§.y) = (D‘k'm lefm' "3) (9w). correspondingly (1. 26b) (13‘1“) Rx [ill (£9) = 03”" °’ Rxffw' “1) (2‘49) and finally (mzll) R x(3])(x.v)|< 8 x00 sup If (t,y)|. 
«t4: (1.266) lurk") 76 For the interpolation space V(1ry, N, n1), interpolation N function [5.IN Jn and cardmal baSls {tpj}j=1, we use an t analogous definition for an error bound for the l h derivative, and we have conclusions similar to those given by (1. 25a) to (1. 26c). We are now in a position to prove the following theorem, which is a generalization of the error analysis given by Gordon and Hall [17]. Theorem 1.4. Let V(1rx, M, m1) be an interpolation vector space which has an error bound for the kth derivative for 0 ( k ( m3, where m is an integer such that 0g m Also, let 3 gin‘ 3 2' V(1ry, N, n1) be an interpolation vector space which has an error bound for the 1th derivative for 0 \< l \< n , where n3 is an integer (m2,n2 such that O-JB Cardinal bases: {59:11 2| {411.}J. 1 Projection operators: P; and N2” 11 H I x P and R = I - P- . Y Y. Y . From this define the interpolation spaces vfix, :4, m) S C(m)([a, b]) and Vh-ry, N, n) §C(n)([c,d]).r 81 * * For f e C(m ,n ), m* = max{m,'r—n‘} and 11* = max {n,?l'} define the discrete blending approximation to PXG) Py [f] as (1. 39) PXG) Py[f] = Px Py [f] + 13ny [f] - Px Py[f] , where, for example M— 2: M2 (1. 40') 55,, Py[f](x. v) = r‘"“" 5"”(351. vi) 310:) (am . H II y—n H. II 0—0 In general, the discrete blending approximation Px® Py [f] does not interpolate values of f and its derivatives. However, with the following restriction on the interpolation spaces, we will prove that Px® Py [f] does indeed interpolate. Definition 1.5: Let V(1rx, M, m) and Vfir'x, M, ‘55) be interpola- tion vector spaces with interpolation functions a and '5 respectively, then V(1rx, M, m) is subordinate to Wix, M, ii) if, and only if mg IT: and for each i where l ( i< M there exists an i- such that l<-i-< M, xi =E-i- and a(i)='c'r—(-i—) . We have the following generalization of a theorem due to Gordon and Hall [17] . Theorem 1. 5. 
Given the interpolation spaces V(1rx, M, m), VG”: , M, m), V(1r , N, n) and VHF, N, n) where V(1r , M, m) is x y y x subordinate to 7(Fx, M, m) and V(1r , N, n) is subordinate to (m. VfiFy, N, n), if is C n) then for 1< ig M, l1: — >1: — where m2 = max {m2, m2} and 112 = max {n2, n 2} then '— k, I A 9 (1.44) (m - Pxo pyml)‘ > x.y))< g (k)llf(m2 "II _ k,— + g(1)||f( “2’” + g (k) g (1)|If(m2’n2)|l Y x Y _ (I?) , - (m :— +gx(k) gy(1)l|f 2"Z’II +gx§ig;:iq:irn ' We see that (2.4) is true, because the left hand side is clearly con- tained in the right hand side. Also, if f is an element of the right hand side of (2.4), then N M . . (2.6) f: 2 >3 f(“(1)'p(m(x.,y.)¢i} q). j=l i=1 1 J J N '12 ~. . _ jzl 1:] J J We will have proved our result if the bracketed expressions in (2. 6) are contained in V(1rx, M, m) {NV-(ii, M, r71) for lg jg N. Using the cardinality conditions, we have for each j satisfying 1g jg N (0.5m) _ (a(1).a())) _ (2'7) f ('9Yj) " if]. f (xi,yj) ¢i€V(1Tx,M,m) "M _. . = 2 f(a(1)’B(J))(§i,yj)$ieV(?rx,M,'rh'), i=1 which proves (2. 4). We note that (2. 5) follows in a similar manner. 89 We now show that the following statement is valid for mg Y5- or ngn (2.8) (76;. M. a) ® my. N. n))fl. 4.. + 2: §_ b,. <9. i). ieIM 13 z c.. z >3 ¢i(“‘s)’(xs)(.‘5‘t”'(yt).¢;““”(§3)($095.) ieIM jeJN ‘3 s=lt=1 J ‘ ‘ .J = CM, 1J where we have used the cardinal interpolation conditions and the correspondence given above to obtain our result. With all Cij = 0, it is now a simple matter, using the appropri- ate cardinality conditions, to prove that all aij = O for is I‘M, l,I\—/I-N + M-N - M-N (2. 23b) Dim (DBF)< MN + MN + M- N - N - Dim (Wig-1‘71, m)f\V(nx, M, m)) — M - Dim (V(?Y,N,n) fl V(1ry,N,n)) . Proof: The upper bound follows from Theorem 2.1 and the lower bound from Lemma 2.1. We would like to know the exact dimension of DBF, and have a basis for the space. Examination of Theorem 2. 2 shows the assum- ption needed to obtain our result. Theorem 2. 3. 
Given the interpolation spaces V(π_x, M, m), V(π̄_x, M̄, m̄), V(π_y, N, n) and V(π̄_y, N̄, n̄) such that V(π_x, M, m) is subordinate to V(π̄_x, M̄, m̄), V(π_y, N, n) is subordinate to V(π̄_y, N̄, n̄), V(π_x, M, m) ⊆ V(π̄_x, M̄, m̄) and V(π_y, N, n) ⊆ V(π̄_y, N̄, n̄), with

S_1 = {φ̄_i ψ_j : i ∈ Ī_M̄, 1 ≤ j ≤ N},  S_2 = {φ_i ψ̄_j : j ∈ J̄_N̄, …}

…

(P_x[f])(x, y) = Σ_{i=1}^{M} f(x_i, y) A_i(x)  and

(3.7)  (P_y[f])(x, y) = Σ_{j=1}^{N} f(x, y_j) B_j(y)

for each f ∈ C([a, b] × [c, d]). The natural cubic spline blended interpolant to f is defined to be

(3.8)  NB[f] = P_x[f] + P_y[f] - P_x P_y[f].

Define h = max_{1 ≤ j ≤ N-1} Δy_j, where Δy_j = y_{j+1} - y_j. We have the following theorem.

Theorem 3.2: Let NB be the natural cubic spline blended interpolant to f ∈ C^{(m,n)}([a, b] × [c, d]), and suppose x ∈ [x_i, x_{i+1}] and y ∈ [y_j, y_{j+1}] with 1 ≤ i ≤ M-1 and 1 ≤ j ≤ N-1. … as N → ∞, where K is chosen with consideration of the bound on the derivatives of f.

Remark 3.5: For purposes of illustration, we consider the special case k = ℓ = 0, a = c = 0, b = d = 1, M = N and m = n = 4. Then both ε and δ_0 carry a factor of h^4, and because of the exponential decay of the terms A_i and B_j as we move away from the boundary, we have the following situation for our error, illustrated in Figure 1.

[Figure 1: the unit square with the order of the error marked by region — O(h^4) near the boundary, O(h^6) in an intermediate band, and O(h^8) in the interior.]

Section 4. Exponential Decay of Natural Cubic Cardinal Splines

Given a mesh π_x: a = x_1 < x_2 < ··· < x_M = b of M knots, the natural cardinal splines A_i(x) ∈ S_2(π_x) are uniquely determined by the M + 2 conditions given in (3.1a) to (3.1c). We will now prove the exponential decay of the natural cubic cardinal splines through a series of lemmas and theorems. Much of what follows parallels the results of Birkhoff and De Boor [2] for cardinal splines with first-derivative end conditions, and we will quote theorems from their paper which are also valid for natural splines. Finally, we will use the abbreviation M.V.T. for the Mean Value Theorem and I.V.T. for the Intermediate Value Theorem.

Lemma 4.1 (Birkhoff and De Boor [2]).
If p(x) is a cubic poly- nomial which vanishes at 0 and h 7! 0 then (4.1a) mm = -2p'<0) - h p"(0)/2 (4.1b) p"(h)/2 = -3 p'(0)/h - p"(0). Corollary 4.1. For 1 g i g M, the natural cubic cardinal splines satisfy (4. 2a) A'i(xj+l) = -2A'i(xj)—ij A'i'(xj)/Z (4. 2b) A; (xj+1)/2 = -3A'i(xj)/ij - A'i' (xj), Where l< j 0 (4.4) sgn(x) 2' 0 if x = 0 -1 if x<0 Lemma 4. 2. (Birkhoff and De Boor [2 J). Let s(x) be any cubic spline function with knots at the xj, which satisfy for some i, 8(xi_1)= 8(Xi+1)= 0. 8(xi) > 0. 8'(xi_1) 8"(xi_l)>0. 8'03“) ~s”(x. 1+1)$0 . Then s‘(x )20, s”(xi_ )90, s'(x )so, i-l 1 )>,O, s"(xi)<0 and s(x))O on [xi- i+l s"(x. 1+1 l'xi+l]' Lemma 4. 3. For 2 S i g M-l , Ai(x) satisfies Lemma 4. 2 on the interval [xi-l’xiH] . Proof: From Corollary 4.1 and the fact that A'i'(x1) = A'i'(xM) = 0 . Lemma 4. 4. For 1 g is M the natural cubic cardinal splines satisfy (4.5a) sgn (Ayle) = sgn (A'i'(xj)) 7‘ 0 111 (4-5b) sgn (A'i(xj)) = -8gn (A;(xj_1)) 7‘ 0. where zgjgi-l, and for i+lgj (M-1 (4. 68) sgn (Ai(xj)) = -Sgn (A'i'(Xj)) 5‘ 0 (4. 6b) sgn (Angn = -ssn (Ayxjfln at 0 Proof: Case 1: 231$ M-l . We will first prove that A'i(x1) and Ai(xN) are both non-zero. If this is not the case, say A'i(x1) = 0, then inductively Ai(x) E 0 _ I on [xl,xi_1]. Therefore, on [xi-l’xiJ’ A(xi-1) — A (xi-1) A"(xi_l) = O, Ai(xi) :: l and from Lemmas 4. 2 and 4. 3 we also have Ai”(xi) < 0. Several applications of the M. V. T. gives F, :(xi_ , xi) 1 such that A'i'( g) > 0 and the I. V. T. gives another distinct zero for A'i' . This implies that Ai is linear on [xi-l'xi] because A'i' 5' 0 on [xi-l’xi]° But this is a contradiction to the fact that Ai must satisfy the conditions Ai(xi-l) = Ai(xi-l) = 0 and Ai(xi) = 1. An identical argument shows that Ai(xM) f O. The result now follows inductively from Corollary 4. 1. Case 2: i=1, M. For i: M, if A' (x M l)= 0 then AM(x)_=.0 on [xv xM_l]. 
W H ' e then know that AM has zeros at xM-l and xM, showing that A cannot satisfy all of the required conditions, therefore A'M(x M l) i 0. An identical argument shows that A'1(xM) 3‘ 0 . 112 The result now follows inductively from Corollary 4. 1. Lemma 4.5. If lgjgi-Z and xe[xj,x. ], then A;.(xj+ 1)Ai(x) 1+1 <0 andif i+1gng l for xe[xj, xj +1,] then Az(xji;)A(x) Proof: From Lemma 4.4, sgn (Ai( xj )) = ~sgn( (Ai(xj+ l)) -7’ 0. It is now clear that if the conclusion of the lemma is not true, then Ai will have four distinct zeros in [xJ,,Jx+1,] which is false. The other case is proved in a similar way. ' I I Lemma 4. 6. For Is 1 g M we have Ai(xi-l) > O and Ai(xi+l) < O. Proof: Case 1: 2 g i g M_-l . The result follows from Lemmas 4. 2 through 4. 4. Case 2: i = 1, M. For i: M, A l, A”(x MUM) )<0,we 0. If A'M(x M(XM) = M-l have from Lemma 4. 4 that Ali/f(xM l) < 0. Two applications of the M. V. T. and one of the I. V. T. yields another zero for A'M which cannot happen if A is to satisfy the required conditions at x M M-1 and xM. The case where i = 1 follows in the same manner. 113 Lemma 4.7. On [x1, x2] , max |A1(x)| = 1, and on [xM-I’XM] max IAM(x)| = 1. Proof: On [xM l’xM]’ direct calculation yields a real number a such that 0 > a > -1/(2Ax13v[_1) and 3 3 (4.7) AM(x) = a(x-xM) + (l—a(AxM_1) )(x-xMVAxM_l + l . Examination of (4. 7) shows that AM has no interior relative maxi- mums or minimums on [x xM]. The bound for A follows in M-1' 1 the s ame way. We will now give a proof similar to that of Birkhoff and De Boor [ 2 J for natural cardinal cubic splines. Lemma 4.8. For lgjgi-Z, if xjgxgx. J+1' then IAi(x)| g , . . . _ ijlAi(xj+1)l and if 1+1$J,O on [xj, xj+l]’ and from Lemma 4. 4 An I n . ° b ° - . i(xj+1) < 0, Ai(xj) > 0, Ai(xj) 3 0 Our proof is y contradiction Let £1 be the point where Ai obtains its absolute maximum on 1 _ _ 1 [3:3, xj+1], then Ai(§1)— 0. Assume Ai(§1)> Ai(xj+l)ij >0. 114 then because Ai(xj+1) = O, we have from the M. V. T. a £2 . 
o ' E = - E satisfying £1 < 52 < xj+1 such that Ai(° 2) Ai(_1)/Axj < Ai(xj+l) < O . Applying the M. V. T. again we have the existence of . . H g satisfying £2 < 63 (xj+ such that A1033) > 0. Because 3 l H . . . Ai (xj+1)< 0, the I. V. T. gives a £4 satisfying g3 < §4< xj+l such that A'{(§ 4) 0. Because Ai(xj) > 0, A'i(§1)= 0 and A'i'(xj) >,0 (A'i'(xl) : O), we have a second zero of A'i' , which implies that A1 is linear on [xj,x. J+1]. This contradicts the fact that Ai ' _ ._ i must satisfy Ai(xj) .. Ai(xj+l) _ 0 and Ai(xj) > 0. The proof of the other case is identical. Paralleling the results of Birkhoff and De Boor [2 ], we have the following two lemmas for natural cubic cardinal splines. Lemma 4.9: For 1gj<~ i-Z [A'i(xj)|g1/2 lA'i(X )|, j+1 and for i+1 gjg M-l |A'.( 1xj+l)|< 1/2 |A;(xj)| - Proof: From Corollary 4. l. 115 Lemma 4.10. For 1‘ jg i-Z, on [xj, xj+l] we have 2j-i+2 IAi(x)l s IA;(xi_1)|ij . and for i+2gj$ M, on [Xj 1, xj] we have i+2-' (A)... s 2 ”Aux... >( As“ Proof: Follows inductively from Lemmas 4. 8 and 4. 9. We have now shown through a sequence of Lemmas that the natural cubic cardinal splines behave in the same manner as the splines which have zero derivatives as their boundary conditions. Therefore, the proof of the following Lemma is now identical to that given by Birkhoff and De Boor [2 ], and we will only state their conclusion. Lemma 4.11. (Birkhoff and De Boor[ 2] ). For 2 g i g M-l on [xi- xi+1] we have OSAi(x)< L and lAi(xi-1)| $ L/Axi-l 9 1’ 2 I _ IA (xi+1)) g L/Axi, where L _ 3 MTr (M1T +1) /(3+4MTr ), the mesh X X x ratio is M = max Ax./minAx. . 17x 1 1 . - - 3 I We W111 now show Similar bounds for |A1(x2)| and IAN(xM_1)I by constructing a majorant, Ui(x), for the end splines. Consider first the spline A then define the unique cubic spline U on M' M ' ' — _. II _. [xM-Z’ KM] satisfying UM(xM-2) — U(XM_1) — UM(xM_2) — 116 H _ __ . . . UM(xM) - 0 and UM(xM) — 1. 
Define the cubic spline T on _ _ ' l [KM-2’ xM] by T(x) — (UM AM)(x), then we W111 prove T (xM-l) )0. If M-Z =1, then T"(x )2 0 and we are done. If M-Z >1, M-Z then T"(xM_2)7l 0, and we will show that T'(x )> 0. To do this, M-l we use the fact that T”(x 2) > 0, which follows .. _ n M-Z) ‘ O AM("M— from Lemmas 4. 4 and 4. 6, and that T satisfies T(xM) = T"(xM) = T(xM_1) = T(x ) = 0. Also, T satisfies the conditions of M-Z Lemma 4.1 and therefore (4. 3a) and (4. 3b), for xM_2, de, xM . II _ II I Because T (xM) — 0 and T (xM_2) > 0, we have that T (xM) f 0. From (4. 3a) and (4. 3b) we have that sgn(T'(xj_l)) = -sgn (T'(xj)) 3f 0 for j = M, M-1 and sgn(T"(xj)) = -sgn(T'(xj))f 0 for j 2 M-2, - n - : ._. M 1. Therefore, because T (KM-2) > 0 it follows that T (xM-l) - I ' (xM-l) AM(xM_1) > 0. Using Lemma 4. 6, and our above ' I I results, it follows that UM(xM_1) > AM(xM_1) > 0. I UM UM(x) is given explic1t1y by pl(x) on [xM-Z’ xM_1] and p2(x) on [xM_1, xM], where p1(x) = b(x-xM_1 )(x-xM _2)(x-xM_1+ZAxM_2) 2 p2(x) = (x-xM_l)(-beM_Z[(x-xM) -AxM_l(mm/Arm1 +1 /AxM_1) where b = I/EZAxM-ZAxM-1(AxM-1 +AxM_2)] and U'M(xM_l) :AxM-Z/(AxM-1[AxM-l +AxM-2]) . 117 Therefore, I (4. 8) o < AM(xM_l)g wa/(AxM_1|-_l +Mflx]) . A bound for A'l(x2) follows in an identical way. The above yields the following theorem. Theorem 4.1. If lg i g M, lg j gi-Z and xjg xngH, then 2+j-i |Ai(x)lg 2 LMTr X If 1+2SJ| +éY 3‘3 lij)! 1:1 3:] M N Z A E B +l/2(£x+£y)( i:ll i(x)l)( j=l| j(Y)|) g 6x K(Mnx) + 6y K(Mfiy) + l/2(E,x +£Y) - K(M )K(M ) ‘IT 11’ x Y The proof is completed by applying the triangle inequality, and carefully observing the bound for each term in each interval given by 00 Theorem 4.1 and using the fact that Z} 2 = 2. i=0 Remark 4. I: It should be noted that the result of Theorem 3. 2 is independent of the mesh ratios M" and MW , while the exponential x y . decay of the basis functions is dependent on both M and Mn 11' x Y 120 Remark 4. 
2: To see the size of K(M ), consider the special case 1r . X of equal spacing, where M = 1, then K(l) = 72/7 . TI' X 8 Remark 4. 3: If m = n = 4 and 6: 0(h ), then the error satisfies Figure 1 from Remark 3. 5 of Theorem 3. 2. 121 CHAPTER 3 DISCRETE LEAST SQUARES This chapter is devoted to the development of discrete least squares algorithms on unstructured data sets, and to showing the accuracy that can be expected depending upon the distribution of the data points and the smoothness of the function f from which the data Originates. Sections 1 and 2 develop preliminary estimates for use in the following section. In Section 3, an example is given to show the necessity of having a sufficient number of data points reasonably distributed to guarantee that the discrete least squares fit will be close the function f. The remaining portion of this section gives error estimates for several univariate discrete least squares algor- ithms. Sections 4 and 5 extend the error estimates of Section 3 to bivariate functions, and algorithms are developed using dis cretized blending function spaces. Finally, in Section 6, general domains with curved boundarys are considered, and this case is reduced to the methods of Section 5. 122 Section 1. Uniform Error Estimates We will develop here an error analysis in the uniform norm for univariate cubic splines in terms of interpolation errors at the knots. Let s e 52(1rx) be a cubic spline on the mesh 17x:a = x1 < x 2< ° /3(M-1) data points and the uniform mesh Trx:a = x1 < x < 2 . . . < xM z b, where h : (b-a)/(M-l). For each j where 2 < j g M-l we assume that there exists six fixed data points which are . -i 3 i 3 designated by {xj}i-1U {Xj}i-1g X such that (2.1a) x. s x; <3—i.z<§.l3 (x.-x1.)}. 1O, 2sJ 0. 2gng lgig If X> 0, then 131 (2.6) IIsII< 7(l—a)(1 2a)[(3 -6a)g /" a +hnI/(4a’) +hn/4. Remark; From (2. 5a) and (2. 5b) it follows that 1/3 > 9 > 0 and l>o>0. Proof From [1 , p. 
10], s e SITrx) is a cubic spline if and only if for 2Sj {M-1 2 u H n _ _ (2.6a) (h /6)Jsj-l + 4sj + Sj+l) — sj-l 2sj + sj+l , 2 n 11 ._. _ .. ' (2.6b) (h /6)(281+ s2 ) _ s2 s1 hs1 , and (2 6c) (ha/6X3" + 28" ) = hs' — s + s ' M-l M M M M-l ' Because se C(2)[a, b] is a cubic polynomial on [xj ., xj +1] for l g j g M-l we have s}'+1 = 63% s,"'(xj) , where s'"(xj) is evaluated from the right. Also, s'"(x) = constant = 6 Aaj on [xj, x J, (see j+1 [24, p. 249]. Therefore, 3 .7 1.1 = 1.1 ., f ' _ , (2 ) SJ+1 sJ+6hAJ or 1ngM1 and correspondingly (2.8) s'. 1: s'j.' 6hA.3, for 2gng. Because 8 is cubic on [xj, xj+1] for léng-l, we represent 8 on that interval as 0 0 l 0 1 2 (2. 9) sj — Aj + (x-xj) Aj + (x-xJ.)(x-xj) Aj 132 o 1 2 3 + (x-ijx-xjxx-xj ) A,- 1 (see [24, p. 248]). Differentiating (2. 9) twice and evaluating at xj we obtain (2.10) s'.'= 2A.z+ 2(2x. - x.1 - s.2) A? . J J J J J J Correspondingly, on [xj_1, xj] for 2gjg M we have (2.11) s'.'=2A.2+2(2x.-x.1-§.2)A.3. J J J J J J Substituting (2. 7), (2. 8), (2. 10) and (2.11) into (2. 6a) through (2.6c) it follows that for 2 gj $ M—l (2.12a) h2[(2xj -§;-§§-h)A?+A§+A§ sj“1 - 2sj + st , 2+h)Ai+ A2] =sZ-sl -hs', l + (2x. -x.1 - xiz+ h) A3] = J J J J (2.12b) hZI-_(2xl -x: -x and 2 -1 -2 "3 ’2 _ , (2.12c) h [(Zx -xM -xM -h)AM+AMJ -hsM - BM+8M-1° Substituting (2. 2a) and (2. 2b) for our divided differences into (2. 12a) through (2. 12¢), we obtain the following after regrouping (2. 13a) h2[(2xj-§j1§§'- h)E(3,j,0) +7.:(2, j, o) + (2j0)+(2x -x1-x2+h) (3j0)] s ” " J J i *‘ " i + 2sj - sj _1 - sj+1 133 3 . -h2[(2xj Ejl -23.?“ - h) z s(§;) fi(3,j,i) i=1 i 2 i + 23 s(x.)fi(2.j.i)+ s(x.) («2.1. 1) i=1 J i=1 J l 2 3 i + (2x. -x. -x. +h) z s(x.) u (3,j,i)] , J J J i=1 J 2 l 2 (2.13b) h [(2x1-x1-x1+h)).1(3,1,0)+p.(2,l,0)]81+ s1 -s2 3 . = -h2[(2x -xJ -x2+h) 2 s(x1)u(3.1.i) 1 1 1 i-l 1 2 . + Z s(x1)p.(2 l i)] -hs' 1 r n 1 ! i=1 and 2 .. ._.2 _ _ (2.13c) h [(2xM - xiv, - xM - h) u (3,M, 0) + u (2, M, 0)] sM + 9M " J’M-l 3 . 
2 .1 ._2 -1 .. - -h [(2sM - xM - xM -h) 17:11 s(xM) ,1 (3, M,0) 2 i + i_1 s(xM) pt (2, M, 0)] + hs'M Using (2. 3a) and (2. 3b), we simplify (2. 13a) through (2. 13b) to obtain (2.148) -s. +g.+2+€.s.-s. =—.+ ., ') J-1 (J J)J J+1 6J 6J - 2'. - ' (2.14b) (1+él)sl s2 51 hsl, and (2-14c) (6M+ 1) 8M - sM-l = 5Mths' , 134 where for l gj (M-l . 3 . (2.15a) E. =hz[h +, 2 (X.-x1.)_I u(3.j.0)>o , J i=1 J J 2 3 3 1 i (2.151)) 5. = 'h 2 [h 'I' Z (x.‘X.)]8(x-) I-L (3,5,1) 1 J 1:]. 1:1 J J J . 1 ' and for 21(ng J1 _ 2 3 . (2. 15c) 6. = h [-h + 2 (x.-xi)_I 1: (3,j, 0) >0 , J i=1 J J 2 3 1 i (2.15d) 6. = -h 2: [-h + 2: (x..‘x.)] s(i.)fi(3.j.i) - 3 i=1 1:1 3 J J Hi In (2. 15a) and (2. 15c) we have used (2.4) and the fact that I) 0, (1(3, j, 0) < o and ;(3, j, 0) > o to show that they are positive. In matrix form, equations (2.1'4a) through (2. 14¢) become I-- 1 '. q t: - 1+ 1 1 s1 -1 62+2+1€2 -1 ea (2.16) . 135 M-1+ 5M-1 0" I 5M+ hsM or AS = E, where A, S and E correspond to the quantities in (2.16). In order to estimate the norm of the vector S we define the diagonal matrix D (2.17) D: 136 From (2. 15a) and (2.15c) we see that all of the diagonal entries of the matrix D are positive, which implies that D.1 exists. Using this fact, and premultiplying A by D”1 we obtain (2.18) D-1A=I+B, where the tridiagonal matrix B is given by I.— 0 1/(1+€1) l/(€2+ 2+ 62) o 1/ (62+2+ 82) (2.19) B=(-l). l/(EM_1+2+&M_1) o 1/(-E-J\4-1+2+8M-1) 1/(€M+1) 0 If IIBII°°<1, then, from(3.l6) of Chapter 1, we have that (DJ-1A)“l exists and -l -1 ‘ -1 (2.20) llSllw,h31r|p(3,j,0)| 3 l 2 3 >h X x.-x.x.-x,(x.-x / /|(J J)(J J)J j)! 2 h3X/[h-(1-201)-h(1-o)- h] >/ ‘6/[(1-oz)(l -2o)] . Correspondingly, from (2. 15c), (2. 4) and (2. 5a) we have for 2,X/ [(1 —a)(l-2o)] . Combining (2. 21), (2. 22) and (2. 23) yields (2.24) IIBIIoog(l-a)(l-2a()/[(1-a)(l-2ar)+X] <1 In order to obtain a bound on I ID-IEI I0° we use (2. 16) and (2. 
17) which gives 138 (61- hs'l)/ (1 + 61) (52+ 52)/(62+2+£2) 1 (2.25) D‘ E = (5M_1+ 5M_1) / (EM-1+ 2+ Md) (6M+ hs'M)/ (£M+ 1) h - Which implies using (2. 22) and (2. 23) that -1 A (2.26) |(D Ellwg [(1-a)(1-24)/((1-a)(1-2a)m] ”Ell... . where )7 _ . 7 61 hsl (32+ (52)/2 A (2.27) E e h (5M_1+ 5M_1)/2 M M _I /\ _ An upper bound on I IEI Ico is obtained by bounding 5j and 5j from above. Using (2. 5a) and carefully observing internal cancellations we have for 1gig3 139 3 (2.28) |h+ 2 (x. -x£.)I< h[l —i-s‘] 1:1 J J [#1 and 3 1 (2.29) |-h+ z (x.-§.)I3 tothe function fe c”) [1,M] £(4(x-5/4))5 if is x g 5/4 (3-3) f(X) = o if 5/4 < x g M Direct calculation using (3.3) yields | If] I = 6 and | |f(4)| | = 30720 C. If sfe 82(1rx) is the cubic interpolation spline which inter- polates f on the mesh "x' then from Theorem 1.1 of Chapter 2 (3.4) ||£-sf||g4ooé. We will explicitly construct the discrete least squares spline approximation 3 to f on the following M+Z data points LS (3.5) X = {1 + i/4}:0 U {j +83%: _C_ [1,M] . where f(1)= -5, f(l +i/4)=o for 1o, (3.7b) s'Ls(2)> o , and (3.7c) SLS(2)>O . Because SLSE C(2)[a,b] and sLS(j+€) .= f(j+€) = 0, it follows that on the interval [j, j+l), s has the unique representation LS . . z . '3 . 3 (3.8) SLS(X) = —|> lsLSuH/é . where we have used our assumption that 0 (fig 1/2 for the con- servative lower bound in (3. 9). Combining (3. 7a), (3. 7b), (3. 7c) and (3. 9) we conclude that for 3 g j g M (3.10) IsLS<1/£)ZJ'5 . (3.11) ”15'5le l > (1/£)2M-5 , and 143 ZM-S (3.12) ”3(3le l > (1/8) . where we have used the fact that f(M) = sf(M) = 0. Therefore, by decreasing 5,, we can cause f and its first four derivatives to be as small in norm as desired. However, the norm of the error in (3. 11) can be made as large as desired, even though the residual vector R is zero. 
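The phenomenon behind this example can be reproduced in a few lines. The following is a hedged sketch, not the thesis construction (3.3)-(3.5): it uses a simple polynomial least squares fit rather than a cubic spline, and a spike function of its own (all numbers here are illustrative). Every data point falls where f vanishes, so the discrete least squares fit has essentially zero residual yet misses the spike entirely.

```python
import numpy as np

def f(x):
    # A spike supported on [1, 1.25] with f(1) = 1, zero elsewhere.
    return np.where(x <= 1.25, (4.0 * (1.25 - x))**5, 0.0)

# All data points lie in [1.5, 5], where f is identically zero.
X = np.linspace(1.5, 5.0, 30)
coeffs = np.polyfit(X, f(X), 3)          # discrete least squares, cubic fit

grid = np.linspace(1.0, 5.0, 401)
resid = np.max(np.abs(np.polyval(coeffs, X) - f(X)))   # residual at data
err = np.max(np.abs(np.polyval(coeffs, grid) - f(grid)))  # uniform error
print(resid, err)
```

The fit is (numerically) the zero function, so the residual vector vanishes while the uniform error is about 1, the height of the spike — zero residual at the data points says nothing about closeness to f where no data lies, which is exactly the restriction on the distribution of data points motivated above.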
From this example, it is clear that an acceptable upper bound on I If-s can be obtained only if we place restrictions on the LSll number and distribution of the data points with respect to the mesh is close to f, s need not "x' Finally, observe that even if 8 LS f be close to either sf or f. Remark 3. l: The observation should be made that the mesh, Size h = l was chosen only for convenience, and its size is not crucial to the above example . The above example motivates the hypothesis of the following theorem. Theorem 3. 1. Let sLSe 82(1rx) be a discrete least squares approxi- . (m) . " mation to fe C [a,b], -1 g ms 4, on the set X of M unstructured data points, where "x is a uniform mesh. Assume for each 5, 'where 2 < j < M-l, that there exists six data points which we desig- nate by {31;}:1 U {x;} 3 C X satisfying (2.1a), and also i=1“ 144 i 3 {x1}i=1 numbers If, a and 3 be defined as in (2.4), (2. 5a) and (2. 5b), g X and {iiflil g X satisfying (2. lb). Let the real respectively. If X>O, If'(xk) - (sk)I1/32. This gives the following bound from (3. 13) (m) m . (3.23) IIf—sLSI ls (Em, o127 ||f ||hx +126 g + (85/4) bx. 11 . Examination of Theorem 3. 1 shows the need of insuring the smallness of 11 . We want to approximate f'(xk) for k = l, M in some manner which is compatible with the hm. One possibility is as follows. If each of the intervals [xl,x2] and [xM-l’ xM] con- tains at least m data points, respectively (the hypotheses of (Theorem 3. l guarantees at least 3), then using Lagrange interpolation polynomials of degree m-l, we can approximate f'(:fi<) for 147 k = 1, M with the derivative of these polynomials at the end points of our interval x1 and xM (see Remark 1. 1). If a constrained least squares algorithm is used to insure that the least squares spline SLS has these values for its end derivatives, then inequality (1. 24) gives a bound for M. Therefore, the following Corollary is an immediate consequence of Theorem 3. l and the above construction. 
Corollary 3.1. If the hypotheses of Theorem 3.1 are satisfied, and there exist m data points in each of the intervals [x_1, x_2] and [x_{M−1}, x_M] on which the Lagrange interpolation polynomials of degree m−1 are constructed to approximate f'(x_k), k = 1, M, and s'_LS(x_k) is equal to those approximations, then

(3.24)  ||f − s_LS|| ≤ [ε_{m,0}(21(1−ᾱ)(1−2ᾱ)²/(4β̄λ̄) + 1) + (7(1−ᾱ)(1−2ᾱ)/(4λ̄) + 1/4)/(m−1)!] ||f^(m)|| h_x^m + 21(1−ᾱ)(1−2ᾱ)² ξ/(4β̄λ̄) .

It would also be desirable to have an a priori bound on ||f − s_LS||, knowing only that f ∈ C^(m)[a,b] and the distribution of the data points X. Toward this end we prove the following Corollary.

Corollary 3.2. If the hypotheses of both Theorem 3.1 and Corollary 3.1 are satisfied, then

(3.25)  ||f − s_LS|| ≤ [ε_{m,0}(21(1−ᾱ)(1−2ᾱ)²(1 + M̂^{1/2})/(4β̄λ̄) + 1) + (7(1−ᾱ)(1−2ᾱ)/(4λ̄) + 1/4)/(m−1)!] ||f^(m)|| h_x^m ,

where ε_{m,0} is given in Table 1 of Chapter 2.

Proof: Examination of (3.24) shows that all that remains is to bound ξ. This will be accomplished by obtaining a bound for ||R|| and using (3.20). The cubic interpolation spline s_f is a candidate for the discrete least squares approximation to f, hence the norm of its residual vector must be greater than or equal to ||R||, i.e.,

(3.26)  ||R|| ≤ (Σ_{i=1}^{M̂} (f(x̂_i) − s_f(x̂_i))²)^{1/2} ≤ (Σ_{i=1}^{M̂} ||f − s_f||²)^{1/2} ≤ ε_{m,0} ||f^(m)|| h_x^m M̂^{1/2} ,

where an application of Theorem 1.1 of Chapter 2 has been made. Combining (3.24), (3.20) and (3.26) completes the proof.

Because M̂^{1/2} ≥ (3(M−1))^{1/2} ≥ h_x^{-1/2}, the power of h_x is no longer m, as (3.26) might indicate, but rather no larger than m − 1/2, depending upon the number of data points. In practice, of course, we would expect (3.26) to be a rather crude estimate, and would hope that ξ is much closer to ε_{m,0} ||f^(m)|| h_x^m instead of the bound given in (3.26).

We now give a more stringent condition on the data points, X, which will insure that the discrete least squares spline s_LS will be uniformly close to f.
In preparation for the following theorem, the real non-negative number ||f||_X of a function f is defined to be

(3.27)  ||f||_X = max_{1≤i≤M̂} |f(x̂_i)| .

Because M̂^{1/2} ≥ ((M−1)/h)^{1/2}, the best a priori bound obtainable using this method cannot have the exponent of h exceed m − 1/2.

Remark 3.7: The observation should be made that a uniform mesh is required for an application of Theorem 3.1. However, the conclusion of Theorem 3.2 is independent of the mesh spacing.

This section is concluded by recording a few observations on the distribution of the data points required by each of the above theorems. The hypotheses of Theorem 3.1 are satisfied if there exist at least three data points in each interval [x_j, x_{j+1}] such that the spacing satisfies (2.1a) and (2.1b) and in (2.4) we have λ̄ > 0. The observation should be made that this is a reasonably weak condition to be placed on X. For example, in each interval, all three of the data points could lie in just half the interval, say [x_j, x_j + h_x/2], and λ̄ > 0 could still be realized. The data points x_{j1} = h_x/3 + x_j, x_{j2} = 5h_x/12 + x_j and x_{j3} = h_x/2 + x_j, all of which lie in the half interval, are surely acceptable, and give the value λ̄ = 1/4 > 0. If X contains an excess of data points beyond that required to fulfill the requirements (2.1a) and (2.1b), then a judicious selection of the three data points required for each interval can maximize the quantities λ̄, ᾱ and β̄ and therefore minimize the upper bound given in (3.13) for ||f − s_LS||. For ρ(X) < 1/9 in Theorem 3.2, the number of data points in each interval [x_j, x_{j+1}] must exceed 9, and they must be distributed in a reasonably uniform manner throughout each interval. Usually, for unstructured data, this will require the number of data points in each interval to be far in excess of 9. In Theorem 3.2, the number of data points in each interval must exceed 3π/(2√2) ≈ 3.3 to insure that ρ̂(X) < √2/3. However, a uniform distribution of the data points is not what is required. Examination of (3.
33) discloses that more data points are required near the end points of the interval rather than near the center. 155 Section 4. Bivariate Least Squares With Data on Mesh Lines ° :2 so. = 0 = '00 < Let 11x.a X1 (X2< (XM b and try.c y1.‘< where K1: 2, K2 = 1/8, bx: max (xi+1-x,) and lgigM-l h = max (V. 'Y-) - V 10 , l0. 1<5 0 and 3” > 0 are defined in a manner analogous to (4. 8), (4.9) and (4. 10). For Algorithm 4.1, the following error estimate is valid. Theorem 4. 1. Let 3, defined in (4. 7), be constructed by Algorithm ( 4.1. If f e cm'n)([a,b]x[c,d]), xx>o and xy> 0, then :1: * (4.11) ||i—sllgxmi Kn*IIf Axx + (3/2)I:€m O(21(l-arx)(l-2c1rx)2/(4ar t ) + 1) +(7(1-ax)(1-2ax)/(41(x) + l/4)/(m-1)!]IIf(m’0)I Ih:r1 +(63/2)(1-ax)(1-2ax)2 max ||R7‘lloo/(43x3x) 13 (£(- y.)-sx .).); ._ ’ J LS.J J J-l M N —23 2 Mann: yl-sy (y)+f(x y)-s" (x))<(> 4: i=1 j=1 i' j Ls,i j i’ j LS,j i i j Taking absolute values, observing that both (pi and qij are non- M N negative, )3 d), = l and E 41, = 1, then (4. 13) reduces to i=1 1 j=1 3 (4.14) |(Px® P [fJ-s)(x.v)|< (3/2) max ||f6MN-(M+N), i=1 j=l for sufficient data points to be available to apply Theorem 4. 1. Next, because only univariate least squares is being performed, the size (total number of entries) of the largest matrix that must be stored in the computer at one time is given by A ~/\ (4.17) T2 = max {(M+2) max M,, (N+ 2) max N. } . 1gjgN J l".~ * (m"‘,n>€) m n (4.22) ||£—s|| x m,0 x 1$j1: * where m = min {m,Z}, n = min {n,2}, (Em and (in are ,0 ,0 >1: * given in Table 1 of Chapter 2, K1 = 2, K2 = 1/8, h = max (x. -x.) and h‘ = max (y. -y.)”. 1gigM-l 1“ 1 Y 1gjgN-l 3“ J 166 Proof: The proof of Theorem 4. 2 is essentially identical to the proof of Theorem 4.1. We merely note that combining (4.12), (4. 14) and the bounds of Theorem 3. 2 yields our result. Algorithms 4. l and 4. 2 are hybrid algorithms, which combine piecewise cubic and piecewise linear splines. 
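The blending structure of Algorithms 4.1 and 4.2 can be sketched as a Boolean sum P_x[f] + P_y[f] − P_x P_y[f]. In the sketch below — an illustrative simplification, not the algorithm itself — exact line data f(x_i, ·) and f(·, y_j) stand in for the univariate discrete least squares splines, and piecewise-linear cardinal functions do the blending; the resulting surface reproduces f on every mesh line.

```python
import math

def hat(knots, i, t):
    """Piecewise-linear cardinal basis: 1 at knots[i], 0 at the other knots."""
    k = knots
    if i > 0 and k[i-1] <= t <= k[i]:
        return (t - k[i-1]) / (k[i] - k[i-1])
    if i < len(k) - 1 and k[i] <= t <= k[i+1]:
        return (k[i+1] - t) / (k[i+1] - k[i])
    return 0.0

def blend(f, xs, ys, x, y):
    """Boolean-sum blend P_x f + P_y f - P_x P_y f of data on mesh lines."""
    Px = sum(hat(xs, i, x) * f(xs[i], y) for i in range(len(xs)))
    Py = sum(hat(ys, j, y) * f(x, ys[j]) for j in range(len(ys)))
    PxPy = sum(hat(xs, i, x) * hat(ys, j, y) * f(xs[i], ys[j])
               for i in range(len(xs)) for j in range(len(ys)))
    return Px + Py - PxPy

f = lambda x, y: math.sin(x) * math.exp(y)    # assumed test function
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [0.0, 0.5, 1.0]

# The blend interpolates f along every mesh line x = x_i and y = y_j.
print(abs(blend(f, xs, ys, 0.25, 0.37) - f(0.25, 0.37)) < 1e-9)   # True
```

In Algorithms 4.1 and 4.2 the line data are themselves discrete least squares cubic splines, and Algorithm 4.3 below replaces the linear blending by natural cubic spline blending.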
If it is desired that cubic splines be used throughout, then the following algorithm, out- lined below, may be useful. We blend our discrete least squares cubic splines with natural . . . (m,n) cubic splines (see Section 3 of Chapter 2). If fe C ([a,be[c, d3), 2 < m,n g 4, then this method has a potential accuracy of order m + n in the "interior of the region”, (see Sections 3 and 4 of Chapter 2). Examination of Theorem 4. 2 of Chapter 2 indicates that preservation of this accuracy requires the discrete least squares cubic spline approximations to be of order m + n, when their residual vectors are small. To accomplish this, on each mesh line of 1r @w , we replace the meshes 'n' and Tr by uniform univari- x y X Y ate meshes 17x33 = x1< x2< '° ' < XM = b and Try:C = yl< y2< -- - < 37E = d, respectively. From Theorem 1.1 of Chapter 2, the interpo- lation accuracy of the univariate cubic spline spaces 52(5):) and 2 _ . -m -n . - - S (Try) is 0(hx) and 0(hy) respectively, where hxz (b-a)/(M-l) and h; = (d-c)/(N-l) . Therefore, for the preservation of our 167 accuracy, it is necessary to refine the meshes '1')" and 'T'r' x sufficiently to have ‘11:? g h? h: and hn g hm hn Y Y Y Remark: For the special case where m = n = 4 and hx = hy = h, this reduces to hxg h2 and hyg h2 . The above observations lead to the following algorithm. <°°° ij)° §<:> S_te_p__2: Blend the above univariate discrete least squares cubic splines with natural cubic splines (see Section 3 of Chapter 2) to obtain the following bivariate approximation to f, where {Ai}i\:l QSZUrX) and {Bj}:1§_82(1ry) are natural cardinal basis functions, N X LS,1 i :1 sLS,j Bj M (4.24) S = 2 i=1 M N .. 12:1 j:Z)1(1/2)(Sy LS,1 .(y.)+SL S,j(xi)) Ai Bj. 169 The following assumptions are made on the distribution of the data points so that an error analysis can be performed. 
For each j, 1‘ jg N, assume that there exists m distinct data points of X, J in each interval [x, x2] and [KM-1' xfi] , and for 1 < i g M, assume that there exists fixed data points in X5 satisfying (2. la) and also fixed data points in Xj satisfying (2. lb) with respect to the mesh 3;. For these fixed data points in Xj, define the real num- bers 73‘, 53‘ and 3‘: with respect to the mesh Fx by (2.4), (2. 5a) and (2.5b) respectively. Define the real numbers 7x, Ex and fix to be the minimum of Y? , 33‘ and 33‘ , respectively, for ' 1 ( j(N. Correspondingly, for each 'i, l ( i ( M, identical assumptions on the data sets Yj, with respect to the mesh 7r}. are made. .The real numbers 7y, KY and 3}! are defined in an analogous manner. Algorithm 4. 3, along with the assumptions on the distribution of the data points, gives the following theorem. Theorem 4. 3. Let s, defined in (4. 24), be constructed by (mm) Algorithm 4.3. If feC ([a,b]x[c,d]). 7x>0 and 7y>0, then for x€[xi'xi+l] and Y€[ijYj+l] (4- 25) |f(X. y) - s(x.v)| < I |f(m'n)l Him, 0 11::1 m-l n n-l " + Km(hx AxiAi/tinén’o hy+Kn(hy Avlej/(n 170 (m, 2) m m-l "" + (1/8)| If ||{ém’ 0 hx + K (hx Axini/4}(hyayj)Aj m +(1/8)llf(2’n)|l{é h: +Knn(h 'leij/4}WhAx)/_\ n,0 + (1/64)| [f(z’ 2" I{(thxi)(hyAYj)}Ai—A—j +K(M1T )(1+K(M1rxam)/Z){[O(121( -a x)(1 -2a “)2/(4013‘ 7x )+ 1) v + (7(1—ax)(l -23X)/(4'?‘) + 1/4)/(m-1)1] | |£(m' 0)| | 71:1 + 2.1(1-axx1-za")2 max Ile‘llw/MZQ'IT‘H 1<5:< = min {n, 2} . However, Theorem 4. 3 requires continuity of C(m’ O) n C(O’ n) n C(m,n) = C(m’ n), hence, using cubic splines throughout the algorithm makes more efficient use of the available continuity. 172 Section 5. Bivariate Least Squares with Unstructured Data A A Let X = {(xi, 91)}111'1 §[a,b]x[c, d] be a set of M unstruc- tured data points. We will describe here several bivariate finite dimensional vector spaces from which discrete least squares fits can be calculated on the data set X. 
The first two spaces, bicubic splines and cubically blended cubic splines are known (see [29], p. 49 and [13], respectively). The Hermite blended piecewise poly- nomials are new and have some interesting properties. For this chapter, define the real number I I I IX of a bivariate function f 6 C(0’ 0)( [a, b]x [c, d]) to be A (5.1) IIfII .= max |£(£,,y.)| . X 1gi<1¢1 1 1 Section 5. 1. Bicubic Splines t :: ‘°° = :: so. Le Trxa x1) >|3—o|+|9-8|. Define the bivariate trigonometric polynomial of degree n to be R(a, 6) = Q(((b-a)cos a+a+b)/2, ((d-c)cos 9+c+d)/2) . Because IICII: |R(5:—5)|= HRHL-n,0]x[-1r,o]:||R||[-oo,oo]x[-oo,oo]the11 R.(1 o) R,(o 1). (5-8) (3.9)= 0.9)=0 and we have (5.9) 116,8) =f R(l' O)(a, 8)do + R(0’1)(a, 8)de+R(&‘, 3) , C A — where C isa straight line from (3,9) to (3,9). If C is param- eterized by te [0,1], then 1 R” 0) (5.10) R(a,0) =] [R (t(a-a)+a,t(6 -9)+0) -R o (t-l) (t-l) R<0.1) (o. 1) ”(E-SHE}, t( 613 )+3)-R (t-l) (3’6) (t-l):I dt A + 11(6), 8) . 175 By the Mean Value Theorem for each te [0, 1), there exists a te (t, 1), such that (5.11) (R(l’o) z a”, °)(‘t’(a‘—6‘)+3, "( A! ~—— A A _ A + R(l’ ”(dB-3H3, t(9-9)+9) (9-9) , with a similar expression for the second term in brackets on the right hand side of (5. 10). Taking absolute values and substituting (5.11) into (5. 10) we have, after two applications of Bernstein's inequality (see [6 , p. 91]) . (5.12) IIQII = |R(67.-5)| 1 (la-{r‘|+|8-?5|)z f |t—1|dt+|R(a‘,3)| o 2 .. - A 2 < IlQlln (la-3l+|6--9|> /2+ HQIIX A — 2 < llQll ||f-s|l<{&m,ollf llhx+£mollf 11h, (m,n) (m n) +ém0 £n0||t llhin hy n+5m,0£n,0||£ ||hm h: (m,n) m-n 3 ._ .. 2 +£m,OE,n,O||f ||hx hy}(1+l/(1(313(X1rx,fiy))/Z)) A __ 2 +l|R|lw/<1-< 3B(X1r.wry)) /2). where E, and C, are given in Table 1 of Chapter 2, h , m, 0 n, 0 x h , h and h are the mesh sizes of 1r , ;, 1t and :17, respec- x y x x y y tively. Proof: Identical to Theorem 5.1, hence omitted. Section 5. 3. Hermite Blended Piecewise Polynomials. 
Let wxza=x1=x2~ 2 11‘ 1”(2)1110 1:1 1 1 k+2 - k = n (x-x k+l) g( +2)S(x))/(k+2)! , where E(x)e [xsk+1, x(s+l)k+2]’ see [33, pp. 1-5]. Hence (M-l)k+2 —. (a/(1 _ -- (5.28) llg- 2 g ”(x11 ¢i|| i=1 k+2 — k+2 g n Ix-xsk+i| Hg‘ )H/(k+2)! i=1 (k+2 k+2 < llg )| |hx /(k+2)! Remark: Note that max (1? - 3? ) ogng-z (s+l)k+2 sk+1 : max (x - x )= h , the mesh size of 1r O2) , —— k (5.29) (If-Px (9 Py[f] | | g (1/(k+2) z)| [f(kJ'Z' 0)l |hx+2 (4, 4)| lh4h 4 +(1/384)z||f +(1/(1+2)1)| (f(0’£+2)| Ih:+2 +(1/(384-(k+2)!))||f( (k+2’4)llhk+2h:+(1/(384(1+Z)')) . ||{(4,1+2) ||h4h 1+2 by The limiting accuracy of the above interpolation scheme is h: h4 , . . . 1&2 thus we choose I and k suff1c1ent1y large to insure that hx , h;+ +2 < h4 xh3° For example, if' h = max (hx’ hy) then this would iInply that in terms of h, the choice of I = k = 6 would suffice. Remark: If h = max (hx, hy)’ and we assume sufficient continuity of f, then both of the discretized blending function spaces, cubically blended cubic splines = DBF and cubic Hermite blended piecewise 11 polynomials = DBFZ’ give eighth order approximations to f. For large M and N, examination of (5. 20) and (5. 26) shows that Dim (DBFl) is much larger than Dim (DBFZ). Thus, there will be a corresponding savings of computer memory needed to store the least squares matrix using DBFZ. However, in order to obtain this savings, the continuity required to implement DBF must be larger 2 than DBF1 . 186 A A Let X = {(121, ;>i)}i\:11 _C_ [a, b]x[:c,d] be a set of M unstruc- k2! (+, +2)([a tured data points. Let fe C , b] x[c, d]), then a discrete least squares solution 3 e DBF is a function which minimizes the 2 A Euclidean norm of the residual vector Re [RM defined in (5. 2). Theorem 5. 3. Let s e DBF2 be a discrete least squares solution on k+2, 1+2)([ the data set X to fe C( a, bJXEC, d]). 
If 30% 17x, Try) < l/tz, then (5'30) llf-SII<{(l/(k+2)z)|[f(k+2’°)l|h:+2 + (1/(1+2)!)l|f(0,1+2)l '11:” 2 (4,4) 4 4 +(1/384) ||£ || 11th h Y }{1-1/(1- Em, nx,ny))}+ “Rum/(1-36m, «x, Try». A If fi(X9 17x: 17y)< fi/t, then (5.31) ||f-s||< { (1/(k+2),)| lf(k+2, 0)ll h)1:+2 + (1/(1+2) y)I Ifm’ 1+2)l lhffz 2 4,4 4 4 +(1/384) ||£( )Ilhxhy + (1/(384' (k+2) !))| lf(k+2’ 4)Hh:+2 4 h V + (1/(384- (1+2) 3))l “(4: “2)l “1:11;” A }{1-1/<1- 2, then DC(l, l, l, O) = 2 and DC(1,1,1,1) = 4, etc. Define 3 2 ij -ij ij 2 3 ~ij ij ~15 IV I" (5.43) 31].: z: 2: ast est 4’t + z 2 lost cps. it 8:2 13:]. 3:1 (2:2 2 2 xi ij i' + Z 2 C431: ¢s q’tJ ’ s=l t=l to be that element in DBFij which has as its coefficients the average Hale”, Hb's" and "c's". Correspondingly, define s’éDBF3 bY 194 M-1 3 N ij _ (5.44) 14’: 2 >3 2 “a1 4». .)., i=1 3:2 5:1 ‘11 3(1'1)+8 1 M N—l 3 ~ij _ M N ij +222b¢>.¢. +226¢.¢., i=1 j=1 t=2 11 1 3‘34)” i=1j=1 1’1 1 J , ~i,N _ ~1,N-1 ~M,j _ ~M-1,j where we define as,l — as,2 , b1,t — b2,t , A1,N _ ~i,N-l ~M,j _ ~M-l,j ,N _ ~M-l,N-1 c1,1 ‘ C1,2 ’ C1,1 C2,1 and C1,1 “ C2,2 ' Let the real number £>O be such that (5.45) max ||f—s..l I 1/ 0, the third from the fact that E (1)13 E l, and k k k-l k the last from the triangle inequality and (5. 48). From(5.39), (5.43), (5.45), (5.49), (5.47) and (x,y)e[xi, (5.52) |(f-3'ij)(X.y)l g |(f-sij)(X.Y)| + Ins,j - ngxxmn 3 2 .. .. g E + z 2: 2£(¢:J(X)l 42:1(v) 8:2 t=1 2 3 _i. 1' 2 2 U. i. + z z 28l¢t1(y)l¢SJ(X)+ 2 Ema/2”: (natty) s=l t=2 s=l t=l B 3 .. _.. s (5/2)€+2€ >3 (lfithH + ILIJLJWH) 5:2 3 - . g(5/2)E,+2£{ max [23 Ht . ll] 12 Hi. ll]}. 1/ 0 , E 4) a 1, 8:1 20» 41 2 .. .. .. ij ‘1] - -1_] - Z : l : : s-1 4’5 "' ’ ¢s ¢3(i-l)+s on [xi’ xi+l] and 11’s L1J3(j-l)+s We now show that sij = s on [xi, xi+1]x[yj, yj+l]° This is accomplished by making the following observations. For 1 g s, t g 2, __ -—ij_ _ , . -iJ' g _ _ : . . , — . ,9 1g 1 M 1. 
where the distinct points A = (a_1, a_2), B = (b_1, b_2), C = (c_1, c_2) and D = (d_1, d_2) are ordered as in Figure 2 to form a quadrilateral. We desire a map U : I² → Ω, where I² = [0,1]×[0,1], whose range is exactly Ω and which is univalent.

Let F ∈ C^(1,1)(Ω*), where Ω* is some open region containing the curve F(x,y) = 0 from A to B, where

(6.1)  (F^(1,0)(x,y))² + (F^(0,1)(x,y))² > 0

for all points (x,y) on the curve F(x,y) = 0 from A to B. For each s ∈ [0,1], construct the straight line L(r;s) that contains the two points (1−s)A + sB and (1−s)D + sC (which are on the straight lines AB and DC, respectively),

(6.2)  L(r;s) = r{(1−s)A + sB − ((1−s)D + sC)} + (1−s)D + sC ,

where r ∈ ℝ and s is a parameter. The x and y components of L(r;s) are expressed as L^x(r;s) and L^y(r;s), respectively. The line L(r;s) can also be expressed in the form

(6.3)  y = [(1−s)(a_2−d_2) + s(b_2−c_2)] / [(1−s)(a_1−d_1) + s(b_1−c_1)] · (x − ((1−s)a_1 + sb_1)) + (1−s)a_2 + sb_2

or

(6.4)  y = η(s) x + ξ(s) .

Remark 6.1: If for some s ∈ [0,1] the denominator of η(s) is zero, i.e., (1−s)(a_1−d_1) + s(b_1−c_1) = 0, or if |η(s)| ≫ 1, then express the line L(r;s) as x = η̃(s) y + ξ̃(s), where η̃(s) = 1/η(s), and, in the following discussion, interchange the roles of the x and y variables. Without loss of generality, throughout this section we will assume that η(s) is finite for the values of s considered.

The following assumptions about the lines L(r;s) (y = η(s)x + ξ(s)) and the curve F(x,y) = 0 are made. For each s ∈ [0,1], the straight line L(r;s) intersects the curve F(x,y) = 0 once and only once. Also, each point of F(x,y) = 0 from A to B lies on a line L(r;s) for some s ∈ [0,1] (these conditions are usually not too restrictive, since a domain often can be subdivided until they hold). The line L(r;s) (y = η(s)x + ξ(s)) does not intersect the curve F(x,y) = 0 tangentially.
Also, for s ∈ [0,1], assume that for distinct values of s the lines L(r;s) (y = η(s)x + ξ(s)) do not have a point of intersection in the region Ω, and that the curve F(x,y) = 0 does not intersect the interior of the line DC. Finally, for s ∈ [0,1], if P is the point of intersection of the straight line L(r;s) and the curve F(x,y) = 0, then

{ t(P − ((1−s)D + sC)) + (1−s)D + sC : 0 ≤ t ≤ 1 } ⊆ Ω .

Remark 6.2: This final assumption can be proved from the others, but its proof leads us away from the desired results of this section.

Parameterization of F(x,y) = 0. We wish to locate the point designated by (x(s), y(s)) = F(s) ∈ Ω* where the straight line L(r;s) (y = η(s)x + ξ(s)) intersects the curve F(x,y) = 0. This can be accomplished by either one of the following two procedures, which we develop simultaneously. For a fixed s ∈ [0,1], let

(6.5a)  z(r) = F(L(r;s))

and

(6.5b)  w(x) = F(x, η(s)x + ξ(s)) .

The root r̂ where z(r̂) = 0, or x(s) where w(x(s)) = 0, is the desired solution. One of the many root finding techniques for univariate functions can be employed to locate the root in (6.5a) and (6.5b); for example, Newton's method or the method of false position (see [24, pp. 97-100]). If Newton's method is chosen, then the derivatives z'(r) and w'(x) are calculated as follows. If L(r;s) and (x, η(s)x + ξ(s)) ∈ Ω*, then because F ∈ C^(1,1)(Ω*), we have

(6.6a)  z'(r) = F^(1,0)(L(r;s))(L^x(r;s))' + F^(0,1)(L(r;s))(L^y(r;s))'
             = F^(1,0)(L(r;s)){(1−s)(a_1−d_1) + s(b_1−c_1)} + F^(0,1)(L(r;s)){(1−s)(a_2−d_2) + s(b_2−c_2)}

and

(6.6b)  w'(x) = F^(1,0)(x, η(s)x + ξ(s)) + F^(0,1)(x, η(s)x + ξ(s)) η(s) .

Let r̂ and x(s) correspond to the roots of (6.5a) and (6.5b), respectively. It will be shown that both z'(r̂) and w'(x(s)) are non-zero. If z'(r̂) = 0, then without loss of generality, from (6.1), we assume that F^(0,1)(L(r̂;s)) ≠ 0. Thus from (6.6a)

(6.7)  (L^x(r̂;s))' {−F^(1,0)(L(r̂;s)) / F^(0,1)(L(r̂;s))} = (L^y(r̂;s))' .

If (L^x(r̂;s))' = 0, then (L^y(r̂;s))' = 0. Using (6.2), this implies that the straight lines AB and DC intersect, which cannot happen, implying that (L^x(r̂;s))' ≠ 0. The Implicit Function Theorem (see [10, p. 256]) implies that the slope of the curve F(x,y) = 0 at L(r̂;s) is given by the term in brackets of (6.7), which would equal (L^y(r̂;s))'/(L^x(r̂;s))' = η(s), which is the slope of the straight line L(r̂;s). Our nontangential intersection assumption guarantees that this cannot happen, implying that z'(r̂) ≠ 0.
If (L (r; s))' = 0, then (L (r;s))' = 0. Usmg (6. 2) this implies that the straight lines AB and DC intersect, which cannot happen, implying that (Lx(’r; s)) at 0. The Implicit Function Theorem (see [10, p. 256]) implies that the slope of the curve F(x,y) = 0 at _) L(?; s) is given by the term in brackets of (6. 7) which would equal y A x A . . . (L (r; s))'/(L (r; s))' : 11(3) which is the slope of the straight line A (L(r; 3). Our nontangential intersection assumption guarantees that this cannot happen, implying that z'(?) 7‘ 0. 204 The argument that w'(x(s)) :f 0, is similar and is omitted. For sufficiently close initial estimates r1 and x , for 1 1 = l, 2, (6' 8‘1) 1.1+1 = 11 ' z(14)/”(111) and (6'81”) X1+1 = 1‘1 ' w(x11/W'(xi) J A where the ri converge to r and the x1 converge to x(s). (2.2) Remark 6.3: If F 4C (9*) , then z(r) and w(x) are twice differentiable in a neighborhood of {5 and x(s), respectively. Under these conditions, the convergence of Newton's method is quadratic (see [24, p. 98]). A Remark 6. 4: Finding an initial estimate for r is often easier than for x(s), because r = 1 corresponds to a point on the line KB and the curve F(x,y) = O is usually "somewhat near" ATB. Having calculated ? and x(s), then *A (6.9a) L(r;s) 01‘ (6.913) (K(S). 11(8) X(S) + 5(8)) = (X(8). Y(S)) + is the point of intersectionof F(x, y) = 0 and L(r; s) (y = T)(S) x + E, (8)). This completes the parameterization of the curve F(x, y) = 0. 205 Remark 6. 5: If the curve is given as y = f(x) or x = g(y), then the extension of the above to this case is obvious. ._.) Calculation of U(s, t) The straight lines DA, C_B and DC are parameterized linearly as follows for s e [0, l] and t6 [0, l] —5 9 -s -5. -¥ (6.10a) t(A-D)+D:(l -t)D+tA (6.10b) t(B-C)+€:(l -t)€+ t3 and —9 a —) —> —> (6.10c) s(C-D)+D=(1-s)D+sC. .9 For notational convenience we define the vector F(s) = ._. 
(x(s), y(s)), where there is no confusion between the vector F(s) and the curve F(x,y) = 0, of which (x(s), y(s)) is a point. Linearly blend the four curves DA, CB, DC and F(x,y) = 0 (see [16] for the procedure) to obtain the vector valued map U : I² → ℝ²,

(6.11)  U(s,t) = (1−s){(1−t)D + tA} + s{(1−t)C + tB} + (1−t){(1−s)D + sC} + tF(s)
               − (1−s)(1−t)D − (1−s)tA − s(1−t)C − stB
             = (1−t){(1−s)D + sC} + tF(s) .

Remark 6.6: Observe that U(s,0) = (1−s)D + sC, U(s,1) = F(s), U(0,t) = (1−t)D + tF(0) = (1−t)D + tA and U(1,t) = (1−t)C + tF(1) = (1−t)C + tB. Thus the mapping U carries the boundary of I² onto the boundary of Ω.

U is univalent. Let (s_1, t_1), (s_2, t_2) ∈ I² be two points such that U(s_1, t_1) = U(s_2, t_2). Observe that the points F(s_1) and (1−s_1)D + s_1C are on the line L(r;s_1) by construction. From this and (6.11) it is clear that U(s_1, t_1) is on the same line L(r;s_1), because

(6.12)  U(s_1, t) = t{F(s_1) − [(1−s_1)D + s_1C]} + (1−s_1)D + s_1C

is a straight line passing through these two points. Making the corresponding observation that U(s_2, t_2) is on the line L(r;s_2), we have s_1 = s_2 from the assumption that the lines L(r;s) have no points of intersection in Ω for distinct values of s and for t ∈ [0,1]. From (6.12) and our assumption that F(s) does not intersect DC, it is clear that t_1 = t_2, because

(6.13)  U(s_1, t_2) − U(s_1, t_1) = (t_2 − t_1)(F(s_1) − [(1−s_1)D + s_1C]) ;

hence, U is a univalent mapping.

Remark 6.7: For linear blending, the map U will not always be univalent. However, with the special parameterization given, this problem is circumvented.

Continuity of U. If F ∈ C^(1,1)(Ω*), then we will prove that U ∈ C^(1,1)(I²). Examination of (6.11) shows that the continuity of U is limited only by the continuity of F(s) = (x(s), y(s)), hence it will be shown that x(s), y(s) ∈ C^(1)[0,1].
For a fixed ŝ ∈ [0,1], the denominator of η(s) is assumed non-zero, i.e., |(1−ŝ)(a_1−d_1) + ŝ(b_1−c_1)| = ε̂ > 0. Hence, for s ∈ (ŝ−δ, ŝ+δ) ∩ [0,1], it follows that |(1−s)(a_1−d_1) + s(b_1−c_1)| ≥ ε̂/2 > 0, where δ = ε̂/2 if |−(a_1−d_1) + (b_1−c_1)| < 1, or δ = ε̂/(2|−(a_1−d_1) + (b_1−c_1)|) otherwise. Direct calculation shows that both η(s) and ξ(s) are continuously differentiable in the interval (ŝ−δ, ŝ+δ) ∩ [0,1]. Also observe that F(x, η(s)x + ξ(s)) is continuously differentiable as a bivariate function of x and s, for s ∈ (ŝ−δ, ŝ+δ) ∩ [0,1] and x such that (x, η(s)x + ξ(s)) ∈ Ω*, because F ∈ C^(1,1)(Ω*). The Implicit Function Theorem (see [10, p. 257]) implies the existence of x(s), which is a unique continuously differentiable function in some neighborhood N of ŝ, if

F^(1,0)(x(ŝ), η(ŝ)x(ŝ) + ξ(ŝ)) + η(ŝ) F^(0,1)(x(ŝ), η(ŝ)x(ŝ) + ξ(ŝ)) ≠ 0

(note that our assumptions guarantee that x(s) and y(s) are both univalent, and we only need prove continuity). It has already been shown that this cannot be zero (see the discussion following (6.6b)). Thus

(6.14)  x'(s) = −(η'(s)x(s) + ξ'(s)) F^(0,1) / (F^(1,0) + η(s) F^(0,1)) ,

where F^(0,1) and F^(1,0) are evaluated at the point (x(s), η(s)x(s) + ξ(s)) for s ∈ N. From (6.4), the first derivative of y(s) also exists for s ∈ N.

Remark 6.8: If higher continuity of F is assumed, then higher derivatives of x(s) and y(s) follow by repeated differentiation of (6.14) and (6.4) (observe that the denominator of (6.14) cannot be zero).

U([0,1]×[0,1]) = Ω. Because U is continuous, univalent, and maps the boundary of I² onto the boundary of Ω (see Remark 6.6), we have from Theorem 13.1 of [27, p. 121] that U([0,1]×[0,1]) = Ω, i.e., the range of U is precisely Ω.

Remark 6.9: Linear blending will not always yield a map whose range is Ω. It is possible for the mapping to "spill over" the boundary of Ω into the complement of Ω (see [16]).
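The two steps of the construction — Newton's iteration (6.8a) to parameterize the curve, followed by the linear blending (6.11) — can be sketched numerically. The corner points and the curve F below are an assumed example, not taken from the text.

```python
import math

# Assumed example geometry: D=(0,0), C=(1,0), A=(0,1), B=(1,1), with the
# curved side F(x,y) = y - 1 - 0.2*sin(pi*x) = 0 running from A to B.
A, B, C, D = (0.0, 1.0), (1.0, 1.0), (1.0, 0.0), (0.0, 0.0)

def F(x, y):  return y - 1.0 - 0.2 * math.sin(math.pi * x)
def Fx(x, y): return -0.2 * math.pi * math.cos(math.pi * x)  # F^(1,0)
def Fy(x, y): return 1.0                                     # F^(0,1)

def L(r, s):
    """The straight line of (6.2) through (1-s)D+sC and (1-s)A+sB."""
    bx, by = (1 - s) * D[0] + s * C[0], (1 - s) * D[1] + s * C[1]
    tx, ty = (1 - s) * A[0] + s * B[0], (1 - s) * A[1] + s * B[1]
    return bx + r * (tx - bx), by + r * (ty - by)

def curve_point(s, r=1.0):
    """Newton iteration (6.8a) on z(r) = F(L(r;s)); z'(r) via (6.6a)."""
    dx = (1 - s) * (A[0] - D[0]) + s * (B[0] - C[0])   # (L^x)'(r)
    dy = (1 - s) * (A[1] - D[1]) + s * (B[1] - C[1])   # (L^y)'(r)
    for _ in range(30):
        x, y = L(r, s)
        z = F(x, y)
        if abs(z) < 1e-13:
            break
        r -= z / (Fx(x, y) * dx + Fy(x, y) * dy)       # chain rule, (6.6a)
    return L(r, s)

def U(s, t):
    """Linearly blended map of (6.11)."""
    bx, by = L(0.0, s)          # (1-s)D + sC, the base of the line
    fx, fy = curve_point(s)     # F(s), the point on the curve
    return ((1 - t) * bx + t * fx, (1 - t) * by + t * fy)

x, y = U(0.25, 1.0)
print(abs(F(x, y)) < 1e-10)     # True: U(s,1) lies on the curve F(x,y)=0
```

Here U(s,0) traces DC and U(s,1) traces the curve, as in Remark 6.6.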
Summarizing, to construct a univalent and onto bivariate map U, parameterize F(x,y) = 0 with respect to s by the above method and use linear blending. The observation should be made, given the point (s,t) ∈ I², that F(x,y) = 0 need only be parameterized for that single value of s.

Calculating U⁻¹(x,y). A procedure is now given to calculate U⁻¹(x̂,ŷ) for (x̂,ŷ) ∈ Ω. It was shown above that U is a univalent onto map; thus a unique point (ŝ,t̂) ∈ I² exists such that U(ŝ,t̂) = (x̂,ŷ). Coordinate ŝ is calculated first. From (6.11), observe that (x̂,ŷ) lies on the straight line L(r;ŝ) (y = η(ŝ)x + ξ(ŝ)). The two points (1−ŝ)A + ŝB and (1−ŝ)D + ŝC are also on the same line. Thus, we desire the value ŝ which causes the vector (v_1, v_2) = (1−ŝ)A + ŝB − [(1−ŝ)D + ŝC] to be parallel to the vector (w_1, w_2) = (x̂,ŷ) − [(1−ŝ)D + ŝC], i.e., (w_1, w_2) to be a scalar multiple of (v_1, v_2). If (v_1, v_2) ≠ (0,0), which is the case because A ≠ D and B ≠ C, then the above is equivalent to

(6.15)  det [ w_1  v_1 ; w_2  v_2 ] = 0 .

Direct calculation shows that ŝ must satisfy

(6.16)  0 = K_1 ŝ² + K_2 ŝ + K_3 ,

where

(6.17)  K_1 = (d_1−c_1)(b_2−a_2) − (d_2−c_2)(b_1−a_1) ,

(6.18)  K_2 = (d_1−c_1)(a_2−d_2) + (x̂−d_1)(b_2−c_2+d_2−a_2) − (d_2−c_2)(a_1−d_1) − (ŷ−d_2)(b_1−c_1+d_1−a_1)

and

(6.19)  K_3 = (x̂−d_1)(a_2−d_2) − (ŷ−d_2)(a_1−d_1) .

Observe that K_1 = 0 is equivalent to requiring that

(6.20)  det(B−A, C−D) = 0 ,

or that AB be parallel to DC; in that case equation (6.16) reduces to a linear equation. In either case, because of our construction, ŝ is unique in the interval [0,1].

The coordinate t̂ is computed as follows. First, calculate the point of intersection F(ŝ) of the line L(r;ŝ) (y = η(ŝ)x + ξ(ŝ)) and the curve F(x,y) = 0 (the procedure is identical to that given above). t̂ is given by

(6.21)  t̂ = ||(x̂,ŷ) − {(1−ŝ)D + ŝC}|| / ||F(ŝ) − {(1−ŝ)D + ŝC}|| .

The denominator of (6.21) is non-zero because it is assumed that the curve F(x,y) = 0 does not intersect the line DC.
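A self-contained sketch of this inverse, under assumed corner coordinates and with the curved side taken to be the straight segment AB for brevity (so that F(s) is available in closed form): ŝ is obtained from the determinant condition (6.15) — solved here by bisection instead of the explicit quadratic (6.16) — and t̂ from the length ratio (6.21).

```python
import math

# Assumed corners of a straight-sided quadrilateral; with a genuine curve
# F(x,y)=0, Fs(s) would come from the parameterization of Section 6.1.
A, B, C, D = (0.0, 1.0), (1.2, 1.1), (1.0, 0.0), (0.0, 0.0)

def Fs(s):    # top side, standing in for the curve point F(s)
    return ((1 - s) * A[0] + s * B[0], (1 - s) * A[1] + s * B[1])

def base(s):  # (1-s)D + sC, eq. (6.10c)
    return ((1 - s) * D[0] + s * C[0], (1 - s) * D[1] + s * C[1])

def U(s, t):  # blended map (6.11)
    bx, by = base(s); fx, fy = Fs(s)
    return ((1 - t) * bx + t * fx, (1 - t) * by + t * fy)

def U_inv(xh, yh):
    def g(s):  # determinant condition (6.15); a quadratic in s, cf. (6.16)
        bx, by = base(s)
        vx = (1 - s) * A[0] + s * B[0] - bx
        vy = (1 - s) * A[1] + s * B[1] - by
        return (xh - bx) * vy - (yh - by) * vx
    lo, hi = 0.0, 1.0            # bisect for the unique root in [0,1]
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    s = 0.5 * (lo + hi)
    bx, by = base(s); fx, fy = Fs(s)
    t = math.hypot(xh - bx, yh - by) / math.hypot(fx - bx, fy - by)  # (6.21)
    return s, t

s, t = U_inv(*U(0.3, 0.7))
print(abs(s - 0.3) < 1e-9, abs(t - 0.7) < 1e-9)   # True True
```

The round trip U⁻¹(U(s,t)) = (s,t) confirms that (6.15) and (6.21) invert the blended map on this quadrilateral.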
Using the fact that (1−ŝ)A + ŝB, (1−ŝ)D + ŝC, F(ŝ) and (x̂,ŷ) are all on the line L(r;ŝ) (y = η(ŝ)x + ξ(ŝ)) by construction, direct calculation shows that U(ŝ,t̂) = (x̂,ŷ).

To reiterate, U⁻¹(x̂,ŷ) is calculated by first solving for ŝ from the quadratic (6.16), parameterizing F(x,y) = 0 to obtain F(ŝ), and then calculating t̂ from (6.21).

Section 6.2. Type 2 Transformations

The above procedure is modified by letting two of the points in Figure 2 coincide, either A = D, B = C, A = B or D = C, and the resulting procedure will be denoted as a type 2 domain transformation.

Type 2a Transformations. The case where A = D will be examined first, observing that the case where B = C is nearly identical. Nothing is changed in parameterizing F(x,y) = 0 to obtain F(s) (using the fact that if s = 0, then F(0) = A = D), and U(s,t) is given by equation (6.11). U is no longer univalent, because the points (0,t) for t ∈ [0,1] are all mapped to the single point A = D (see Remark 6.6). However, the argument given above proves that U is univalent on (0,1]×[0,1]. Nothing is changed in the argument on the continuity of U, taking η(s) = (b_2−c_2)/(b_1−c_1) = η for s ∈ [0,1]. However, the proof that U(I²) = Ω is no longer valid for this case because U is no longer univalent. To circumvent this problem, we first show that U cannot map to a point outside of Ω. Let (ŝ,t̂) ∈ (0,1]×[0,1]; then, from the above construction, F(ŝ) and U(ŝ,t̂) are both on the straight line L(r;ŝ), which can be expressed as U(ŝ,t) = t{F(ŝ) − [(1−ŝ)D + ŝC]} + (1−ŝ)D + ŝC. Because t̂ ∈ [0,1], our assumptions guarantee that U(ŝ,t̂) = (x̂,ŷ) ∈ Ω, proving that U maps into Ω. To prove that U maps onto Ω, let (x̂,ŷ) ∈ Ω. If (x̂,ŷ) = A = D or (x̂,ŷ) is a point on the line BC, then take ŝ = 0 or
If ($2, y) is a point on the curve F(x,y) = O or the . . — -— A A A -o- A straight line AC = DC, then take 3 such that (x, y) = F(s) or A —b A—r . A A . =(l -s)D+ sC, respectively. If (x, y) is not on the boundary of S2 , A A . . A A . . then (x, y) is a point on the straight line y = n(x-x) +y, which is parallel to the line BC (recall n(s) = n = constant for se [0, 1]). This line must intersect the boundary of Q at least twice and our assumptions imply that this line must also be one of the lines _’ A A, , A A L(r; s) for some 55 (0,1). To see this, note that y = n(x-x)+ y —as —> —-—- cannot intersect the point A = D or the line BC which is parallel to it. If it intersects the curve F(x, y) = 0 at some point, say -->A A A —¥ A F(s), then the lines y = n(x-x) + y and L(r; s) are identical because they have the same slope and a point in common. A similar argu- ment holds if y : T‘(x-3i) + § intersects the curve DC at some point A -+ A—i A (l -s)D+ sC, in either case take 8 = 3. Also note that this value A A ‘9' A of s is unique because y = T](X-X) + y is a parameter line L(r; s). A If g '7’ 0, then compute t from (6. 21), and direct calculation —5 A A A A A A shows that U(s, t) = (x, y) (if s = 0, then any te [0,1] is accept- able). —a. -> Type 2b Transformations. Consider the case where D = C (note that —> -> the case where A = B is similar and less complex). All of the —) —> -+ lines L(r; s) (y = n(s) x + €(s)) intersect at the point D = C, but no other point of Q. F(x, y) = O is parameterized as above and the map U reduces to 213 --> A —) (6.22) U(s,t)= (l-t)D+ tF(s) . _p U is not univalent because the points (s, 0) for s 6 [0,1] are —~ —> mapped to the single point D = C. However, the above proof for _y the univalency of U holds for (s, t)e [0, 1]x(0, 1] . The differenti- _.). ability of U follows from the above argument. Finally, the proof that U(Iz) = $2 is similar to that of type 2a transformations where -> 4 we work with the straight line t{(x,y) - D} + D . 
Section 6. 3. General Domains. It should be understood that for the region (2, the side on which the curve F(x, y) = 0 is located was specified as it was above for purposes of illustration. If it is desired, then the curve F(x, y) = 0 can be any one of the four sides of the quadrilateral region, where minor modifications of the above pre- sentation will yield the appropriate procedure and formulas for the construction of U. We now consider closed regions 1" which can be subdivided into N subregions 9i such that (6.23) I" = U $2. , . 1 i=1 andfor 1 9i . Several examples will be given to illustrate the procedure (where the numbers in the following illus- trations have the obvious correspondence). \ x H N w A V\ Figure 3 In Figure 3, $2 , $2 , $2 and $2 are type 2b regions (type 2a 1 3 4 6 could be used with minor modifications) and $22 and (25 are type 1 . . 2 2 2 2 regions. Note that the left hand Slde of II and I4 (I3 and 16) are identified in the obvious manner and the adjacent side of If and 2 2 2 . . “" I (I and 16) are mapped to the Single pOint Pl 4 3 ,123t+:: 5mm}? 15' 678 2 9107 1112 1112 —9 [In J')‘ . P ’I’ I V Flgure 4 3 (P2) . 215 InFigure4Q,Q,Q 1 4 5'9 , $2 and Q are type 2a 8 ll 12 regions and the remaining regions are type 1, where the left hand —+ sides of If and I: are mapped to the point P1, etc. Figure 5 In Figure 5, the regions 9i for IS i g 10 are all type 1, where the left hand side of If and I: are identified with the right hand side of I: and IIO’ respectively. The procedure of subdividing F should be lucid from the above example 3. (0.0) Section 6. 4. Discrete Least Squares Over 1" . Let . f e C (1") , where I‘ satisfies (6. 23) and (6.24) and let X = { (£16 §k)}11:’:1 E 1" be a set of 101 unstructured data points. A discrete least squares fit gi:Qi——>R to f is constructed over each region $21 in the following manner. 
Choose a finite dimensional bivariate real-valued function space $S(I^2)$ over $I^2$ (for example, any of those of Section 5) and construct the map $\vec U_i : I^2 \to \Omega_i$.

If $\Omega_i$ is a type 2 domain, then constrain $S(I^2)$ to insure that each element of $S(I^2)$ is constant on that single boundary line of $I^2$ which is mapped to the single point $\vec P$ (see Section 6.2) (for the spaces given in Section 5, this can easily be accomplished by modifying the basis elements). With this restriction there is no ambiguity for the element $s \circ \vec U_i^{-1}$, where $s \in S(I^2)$, because $s$ is single-valued on the inverse image (under $\vec U_i^{-1}$) of each point of the type 2 region $\Omega_i$. Thus we define $\vec U_i^{-1}(\vec P)$ to be (for example) the midpoint of the boundary line of $I^2$ whose image under $\vec U_i$ is the point $\vec P$.

Define $X_i = \{(\hat x_{ij}, \hat y_{ij})\}_{j=1}^{M_i} = X \cap \Omega_i$ to be the set of $M_i$ data points which lie in $\Omega_i$, and define $\widetilde X_i = \{\vec U_i^{-1}(\hat x_{ij}, \hat y_{ij})\}_{j=1}^{M_i}$, where for $1 \le j \le M_i$

(6.25)  $(\hat s_{ij}, \hat t_{ij}) = \vec U_i^{-1}(\hat x_{ij}, \hat y_{ij})$.

Finally, define the finite dimensional function space $V(\Omega_i) = \{s \circ \vec U_i^{-1} \mid s \in S(I^2)\}$. We say $g_i \in V(\Omega_i)$ ($\tilde s_i \in S(I^2)$) is a discrete least squares fit to $f$ ($f \circ \vec U_i$) on the data set $X_i \subseteq \Omega_i$ ($\widetilde X_i \subseteq I^2$) if the Euclidean norm of the residual vector $R_i \in \mathbb{R}^{M_i}$ ($\widetilde R_i \in \mathbb{R}^{M_i}$) is minimized, where component $j$ of $R_i$ is given by

(6.26)  $(R_i)_j = f(\hat x_{ij}, \hat y_{ij}) - g_i(\hat x_{ij}, \hat y_{ij})$

and component $j$ of $\widetilde R_i$ is given by

(6.27)  $(\widetilde R_i)_j = (f \circ \vec U_i)(\hat s_{ij}, \hat t_{ij}) - \tilde s_i(\hat s_{ij}, \hat t_{ij})$.

It is clear from (6.25), (6.26) and (6.27) that if $g_i$ ($\tilde s_i$) is a discrete least squares fit, then there exists $s^* \in S(I^2)$ ($g^* = \tilde s_i \circ \vec U_i^{-1} \in V(\Omega_i)$) such that $s^* \circ \vec U_i^{-1} = g_i$ ($g^* \circ \vec U_i = \tilde s_i$), where

(6.28)  $\|R_i\| = \Big(\sum_{j=1}^{M_i} \big[(f \circ \vec U_i)(\hat s_{ij}, \hat t_{ij}) - s^*(\hat s_{ij}, \hat t_{ij})\big]^2\Big)^{1/2}$

and

(6.29)  $\|\widetilde R_i\| = \Big(\sum_{j=1}^{M_i} \big[f(\hat x_{ij}, \hat y_{ij}) - g^*(\hat x_{ij}, \hat y_{ij})\big]^2\Big)^{1/2}$,

showing that $\|R_i\| = \|\widetilde R_i\|$, because $\|R_i\| \ge \|\widetilde R_i\|$ and $\|R_i\| \le \|\widetilde R_i\|$.
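The construction (6.25)–(6.27) can be sketched numerically. In the sketch below, everything concrete is an assumption made for illustration: the subdomain is a rectangle whose map $\vec U_i$ is affine with an explicit inverse, the bilinear polynomials stand in for $S(I^2)$, and the data-generating function `f` is invented. The sketch pulls the scattered data back to the unit square, solves the least squares problem there, and checks that the residual vectors of (6.26) and (6.27) coincide.

```python
import numpy as np

# Hypothetical type 1 subdomain: U_i maps the unit square I^2 onto the
# rectangle [1, 3] x [0, 2]; its inverse is explicit, as in Section 6.2.
def U_i(s, t):
    return np.array([1.0 + 2.0 * s, 2.0 * t])

def U_i_inv(x, y):
    return np.array([(x - 1.0) / 2.0, y / 2.0])

# A small stand-in for S(I^2): the bilinear polynomials span{1, s, t, st}.
def design(st):
    s, t = st[:, 0], st[:, 1]
    return np.column_stack([np.ones_like(s), s, t, s * t])

f = lambda x, y: x + 3.0 * y                 # function the data arise from
rng = np.random.default_rng(0)
X_i = np.column_stack([rng.uniform(1, 3, 25), rng.uniform(0, 2, 25)])

# (6.25): pull the unstructured data back to the unit square.
ST_i = np.array([U_i_inv(x, y) for x, y in X_i])
fvals = np.array([f(x, y) for x, y in X_i])  # f(x_ij, y_ij) = (f o U_i)(s_ij, t_ij)

# Least squares fit s~_i in S(I^2) over ST_i; then g_i = s~_i o U_i^{-1}.
coef, *_ = np.linalg.lstsq(design(ST_i), fvals, rcond=None)
g_i = lambda x, y: (design(U_i_inv(x, y)[None, :]) @ coef)[0]

# The residual vectors (6.26) over Omega_i and (6.27) over I^2 coincide,
# so their Euclidean norms are equal.
R_tilde = fvals - design(ST_i) @ coef
R = fvals - np.array([g_i(x, y) for x, y in X_i])
assert np.allclose(R, R_tilde)
```

Here `numpy.linalg.lstsq` merely plays the role of the least squares solver; the constrained algorithm analyzed in Chapter 1, or any stable orthogonalization method, would serve equally well.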
Therefore, a discrete least squares fit $g_i \in V(\Omega_i)$ can be calculated from a discrete least squares fit $\tilde s_i \in S(I^2)$ by taking

(6.30)  $g_i = \tilde s_i \circ \vec U_i^{-1}$.

Observing that

(6.31)  $\|f - g_i\|_{\Omega_i} = \|f \circ \vec U_i - \tilde s_i\|_{I^2}$,

an error analysis for $f - g_i$ over $\Omega_i$ can be given in terms of $f \circ \vec U_i - \tilde s_i$ over $I^2$.

Thus, the case of least squares over a general domain $\Gamma$ is reduced to least squares over rectangular domains, and the error analysis reduces to bounding $\|f \circ \vec U_i - \tilde s_i\|$, where, for example, $f \circ \vec U_i$ replaces $f$ in the formulas (5.15), (5.16), (5.21), (5.22), (5.30), (5.31), (5.36) and (5.37).

BIBLIOGRAPHY

1. Ahlberg, Nilson, and Walsh, The Theory of Splines and Their Applications, Academic Press: New York, 1967.

2. Birkhoff, G., and C. de Boor, "Error Bounds for Spline Interpolation," Journal of Mathematics and Mechanics, Vol. 13, No. 5, 1964, pp. 827-835.

3. Bjorck, A., "Solving Linear Least Squares Problems by Gram-Schmidt Orthogonalization," Nordisk Tidskrift for Informationsbehandling, Vol. 7, 1967.

4. Businger, P., and G. H. Golub, "Linear Least Squares Solutions by Householder Transformations," Numerische Mathematik, Vol. 7, 1965, pp. 269-276.

5. Carlson, R. E., and C. A. Hall, "Error Bounds for Bicubic Spline Interpolation," Journal of Approximation Theory, Vol. 7, No. 1, Jan. 1973, pp. 41-47.

6. Cheney, E. W., Introduction to Approximation Theory, McGraw-Hill: New York, 1966.

7. Deskins, W. E., Abstract Algebra, Macmillan Co.: New York, 1964.

8. Dugundji, J., Topology, Allyn and Bacon, Inc.: Boston, 1966.

9. Faddeev, D. K., and V. N. Faddeeva, Computational Methods of Linear Algebra, W. H. Freeman and Company: San Francisco, 1963.

10. Fulks, W., Advanced Calculus - An Introduction to Analysis, John Wiley and Sons, Inc.: New York, 1961.

11. Golub, G., "Numerical Methods for Solving Linear Least Squares Problems," Numerische Mathematik, Vol. 7, 1965, pp. 206-216.

12. Gordon, W. J.,
"'Blending-Function' Methods of Bivariate and Multivariate Interpolation and Approximation," General Motors Research Publication, GMR-834, Warren, Michigan, October 1968.

13. Gordon, W. J., "Spline-Blended Surface Interpolation Through Curve Networks," Journal of Mathematics and Mechanics, Vol. 18, No. 10, April 1969, pp. 931-952.

14. Gordon, W. J., and C. A. Hall, "Discretization Error Bounds for Transfinite Elements," General Motors Research Publication, GMR-1196, Warren, Michigan, May 1972.

15. Gordon, W. J., and C. A. Hall, "Geometric Aspects of the Finite Element Method," in The Mathematical Foundations of the Finite Element Method with Applications to Partial Differential Equations, Academic Press, Inc.: New York, 1972.

16. Gordon, W. J., and C. A. Hall, "Geometric Aspects of the Finite Element Method: Construction of Curvilinear Coordinate Systems and Their Application to Mesh Generation," General Motors Research Publication, GMR-1286, Warren, Michigan.

17. Gordon, W. J., and C. A. Hall, "Transfinite Element Methods: Blending-Function Interpolation over Arbitrary Curved Element Domains," Numerische Mathematik, Vol. 21, 1973, pp. 109-129.

18. Gordon, W. J., and J. A. Wixom, "Pseudo-Harmonic Interpolation on Convex Domains," General Motors Research Publication, GMR-1248, Warren, Michigan, August 1972.

19. Hall, C. A., "Natural Cubic and Bicubic Spline Interpolation," SIAM J. Numerical Analysis, Vol. 10, No. 6, Dec. 1973, pp. 1055-1060.

20. Hall, C. A., "On Error Bounds for Spline Interpolation," Journal of Approximation Theory, Vol. 1, 1968, pp. 209-218.

21. Halliday, J., and J. G. Hayes, "The Least-Squares Fitting of Cubic Spline Surfaces to General Data Sets," National Physical Laboratory, NPL Report NAC 22, December 1972.

22. Householder, A. S., "Unitary Triangularization of a Nonsymmetric Matrix," J. Assoc. Comput. Mach., Vol. 5, 1958, pp. 339-342.

23. Householder, A.
S., The Theory of Matrices in Numerical Analysis, Blaisdell Publishing Co.: 1964.

24. Isaacson, E., and H. B. Keller, Analysis of Numerical Methods, Wiley and Sons, Inc.: New York, 1966.

25. Jennings, L. S., and M. R. Osborne, "A Direct Error Analysis for Least Squares," Numerische Mathematik, Vol. 22, 1974, pp. 325-332.

26. Marcus, M., and H. Minc, A Survey of Matrix Theory and Matrix Inequalities, Allyn and Bacon, Inc.: Boston, 1964.

27. Newman, M. H. A., Topology of Plane Sets, Cambridge University Press: 1964.

28. Peters, G., and J. H. Wilkinson, "The Least Squares Problem and Pseudo-Inverses," The Computer Journal, Vol. 13, No. 3, August 1970, pp. 309-316.

29. Schultz, M. H., Spline Analysis, Prentice-Hall, Inc.: Englewood Cliffs, N.J., 1973.

30. Soble, A. B., "Majorants of Polynomial Derivatives," American Math. Monthly, Vol. 64, 1957, pp. 639-643.

31. van der Sluis, A., "Stability of the Solution of Linear Least Squares Problems," Numerische Mathematik, Vol. 23, 1975, pp. 241-254.

32. Varga, R. S., Matrix Iterative Analysis, Prentice-Hall, Inc.: Englewood Cliffs, N.J., 1962.

33. Wendroff, B., Theoretical Numerical Analysis, Academic Press: New York, 1966.

34. Wilkinson, J. H., "Error Analysis of Direct Methods of Matrix Inversion," J. Assoc. Comput. Mach., Vol. 8, No. 1, Jan. 1961.

35. Wilkinson, J. H., "Error Analysis of Transformations Based on the Use of Matrices of the Form I - 2ww^H," in Error in Digital Computation, Vol. 2, edited by Louis B. Rall, Wiley and Sons, Inc.: 1965.

36. Wilkinson, J. H., Rounding Errors in Algebraic Processes, Prentice-Hall, Inc.: Englewood Cliffs, N.J., 1962.

REPORT DOCUMENTATION PAGE

Title: BLENDING-FUNCTION TECHNIQUES WITH APPLICATIONS TO DISCRETE LEAST SQUARES
Type of Report: Interim
Author: Dale Russel Doty
Contract or Grant Number: AFOSR-72-2271
Performing Organization: Department of Mathematics, Michigan State University, East Lansing, MI 48824
Program Element, Project, Task: 61102F 9749-03
Controlling Office: Air Force Office of Scientific Research (NM), 1400 Wilson Blvd., Arlington, VA 22209
Report Date: August, 1975
Security Classification: UNCLASSIFIED
Distribution Statement: Approved for public release; distribution unlimited.
Key Words: blending-functions, least squares, splines, error analysis, interpolation, natural splines, domain transformations