ASYMPTOTIC BAYES SEQUENTIAL TESTS OF THE HYPOTHESIS THAT THE DRIFT OF A WIENER PROCESS IS ZERO

Thesis for the Degree of Ph.D.
MICHIGAN STATE UNIVERSITY
THURMAN JOSEPH BROWN, JR.
1973

This is to certify that the thesis entitled "Asymptotic Bayes Sequential Tests of the Hypothesis that the Drift of a Wiener Process Is Zero," presented by Thurman Joseph Brown, Jr., has been accepted towards fulfillment of the requirements for the Ph.D. degree in Statistics & Probability.

Major professor
Date: May 17, 1973

ABSTRACT

ASYMPTOTIC BAYES SEQUENTIAL TESTS OF THE HYPOTHESIS THAT THE DRIFT OF A WIENER PROCESS IS ZERO

By Thurman Joseph Brown, Jr.

Let {X_t: t > 0} be a Wiener process plus a drift μ. This paper is concerned with approximating the optimal sequential procedure for testing H0: μ = 0 vs. H1: μ ≠ 0 in the large sample case, when the prior distribution on the alternative is approximately Lebesgue and the loss is approximately proportional to ‖μ‖^k. The fixed sample size problem was treated by Rubin and Sethuraman (Sankhya, Ser. A, Vol. 27, 1965, pp. 347-356). The solution is similar to that of Chernoff (Sequential Tests for the Mean of a Normal Distribution, Proc. Fourth Berkeley Symp. Math. Statist. Prob. 1, 79-91, University of California Press). It consists of two strictly increasing functions a0(t) defined on [T0,∞) and a1(t) defined on [T1,∞), with 0 ≤ T1 < T0 and a0(t) < a1(t) on [T0,∞), which determine the following sequential procedure. Suppose observation begins at time ts ≥ 0. Observe ‖X_ts‖. If a0(ts) < ‖X_ts‖ < a1(ts), continue sampling until ‖X_t‖ = a_i(t). If i = 0, accept; if i = 1, reject. The asymptotic nature of the solution is derived, and standard numerical procedures are used to approximate the regions and the risk. Rubin and Sethuraman's work has shown that the general asymptotic testing problem may be reduced to the above case.

ASYMPTOTIC BAYES SEQUENTIAL TESTS OF THE HYPOTHESIS THAT THE DRIFT OF A WIENER PROCESS IS ZERO

By Thurman Joseph Brown, Jr.

A THESIS
Submitted to Michigan State University in partial fulfillment of the requirements for the degree of
DOCTOR OF PHILOSOPHY
Department of Statistics and Probability
1973

ACKNOWLEDGEMENTS

I wish to thank deeply Professor Herman Rubin for his patience and encouragement during the preparation of this paper. His comments and insights have been more than instrumental in obtaining the results displayed in this thesis. I would also like to thank Mrs. Noralee Barnes for the typing of this manuscript. I wish to thank the Office of Naval Research for providing the author with financial support through grant NONR 2587 02. Thanks to Dr. V. Fabian for acquainting me with Doob's paper. Special thanks to Dr. V. Mandrekar for rather extensive discussions of the first section of this paper.

TABLE OF CONTENTS

Section                                                        Page
1  GENERAL DESCRIPTION OF PROBLEM ...........................     1
2  APPROXIMATION OF BOUNDARIES FOR LARGE t ..................    23
3  APPROXIMATION OF THE BOUNDARIES BACKWARD IN TIME .........    29
4  ASYMPTOTIC WIDTH OF THE CONTINUATION REGION ..............    36
5  TRUNCATION ERROR AND STABILITY ...........................    41
PART II  CALCULATIONS
6  THE CONSTANT LOSS PROBLEM ................................    47
7  THE SQUARED LOSS PROBLEM .................................    63
BIBLIOGRAPHY ................................................    73
APPENDIX ....................................................
74 iii SECTION 1 Suppose {Xt: t > O} is an n-dimensional Wiener process 2 2 _ with covariance matrix a In’ a known, plus a drift p = (p1,...,un). That is: = - = + X pt+Zt pt (th,...,z ) nt 0 if i f j with E(Zt) = o and E(Zitzjt) = 2 ot:if i = j We want to test sequentially Ho: p = 0 vs. H1: u f 0 for the case in which the apriori measure go 0 is proportionately normal on the alternative. That is, 2 50,005) = eIE(0) + 0-.) éfiumomomu 2 where “0 represents the mean vector, 00 is a constant, and 2 2(ui‘u01) n E - 202 2 2 _ 1 2 o 0 The dependence on n will be suppressed. Throughout this paper unindexed summation signs will go from 1 to n. The cost of sampling per unit time is c, and the losses of type I and type II errors are W0 and Wluuuk respectively. The fixed sample size problem was treated by Rubin and Sethuraman [I]. Chernoff [2] treated the problem of testing sequentially H0: p s 0 vs. H1: u > 0. Bickel and Yahav [5] showed his procedure to be robust. Lindley [6] and l others have shown that for large samples the exact form of the prior distribution is irrelevant and that maximum likelihood estimation is optimum. We now normalize the problem to one in which we test H0: u = 0 vs. H1: u i 0, but with apriori measure placing mass one at u = O and Lebesgue on the alternative: 90,03) = IE(0) + v03), v Lebesgue, and with W and oz replaced by ones. Although O the apriori measure is not a probability measure, it gives rise to a posterior probability measure. Three steps are employed to accomplish the normalization. Step 1) Let * u = OH , * t =Bt * * * * 2 * Then X = E--t---+ Z with 2 ~ N(O, Q—'I t ) where N(a,B) t as t* t* B n denotes the normal distribution with mean a and covariance matrix B. 2 If 1) -1-=1 and 2) 5L=1 then x has drift a8 8 t * * * u, = (u1,...,un) and covariance matrix In per unit tflme measured * in t . The Bayes risk for a given procedure in the original problem is given by s = e(cE[T|01 + v(0)Wo) + (1-e> j'(cE[Tlu'l + Wlllullkv(u)>i(u.uo.o§)du where y(p) is the probability of error given p, and T is the sampling time. W * * * * * * 1 s =W0{e(fi';‘é' EET l0] + v (0)) + (1'6) ”5:5 BET ll» 1+ v (u ) w ark 0 Hu*“k)¢(u*,auo,azog)du*} * * 1 * * where v (u ) = y(u) and E-ECT ‘p 1 = EFT‘n] . S l tt' * “ -£—' * = *2 = 2 2 and W* WI 0 e 1ng c W B, “0 duo, 00 a do, 1 k . 0 W a minimizing [B is equivalent to minimizing 53': .(.*:[T*\03 + Y*(0)) + (1-6) j(c*s[r*\p*] + WiHu*Hky*(p*)) * * *2 * ©(u ,u0,oo )du - The solution to equations 1) and 2) is _ l_ a ‘ 2 o 2 B = o * c So c - w 2 0° 2 k w* _ W1(o ) 1 W0 Step 2) We now show that testing HO: u = 0 vs. H1: u # O with apriori measure a at u = O and (l-g)N(u0,o§In) on the alternative is equivalent to testing the same hypothesis against the same alternative, but with apriori measure placing mass one at u = O, and bv, b constant and v Lebesgue, on the alternative, and starting observation at some positive time (discounting sampling cost accordingly). The posterior measure given Xt = x is now computed for the normal apriori problem. ,O,t 5x t<0) = Pomlxt—w) = 6“" ) 2 ’ €Q< ) e c’o «p.357. -7) E I 2 t t t 00 d§X.,tu(u*0) '— ZX'Z 2 2 i - ”‘01 n 2t' 2 2 - U 1 + (LE)( 1 2)2e 0 € t'G O For the uniform alternative problem ,O,t px’tm) = M o if: b )(l)n( Elm X, r I C Q u,t’t U 2 EX. n 1. [1 + b(%‘3-)32 e21:— 1-1 , dp. (1) (1) (2) U1 2 a. Efi 2 2 2c 1 be?) e «this; dpx t(p.#0) = 2 dp, . . (2) ’ 2x n i [1 + b(%3)2e t 1 Equations (1) and (2) are equivalent for the choice 2 - Eu01 E 202 _ 2 _ l:s. 
_l__.2 0 b — b(€,u0,00) — ( e )(znoz) 0 Step 3) For testing HO: n = 0 vs. H1: u ¥ 0 with apriori mass one at n = 0 and bv, b constant and v Lebesgue, on the alternative, the Bayes risk of a given procedure is k B = > +f> + y: EET hf) + j; Hu’ll 3(3):; du* V with Y*(h*) = w(h) and é-E[T*\u*] = E[T|n] . So for 3) E; a 1, minimizing .6 is equivalent to minimizing 5* = >du* W * C *_-l where c - B and W1 — Gk . The solution to the three equations is l. a = bn _3 e=b “ .1. ' n 6 = b The total effect of the normalization is to make the normal . . 2 . . alternative problem With constants c, W0, W1, k, a , and apriori 2 constants e. no, and 00 equivalent to the Lebesgue alternative problem (mass one at u = 0) with constants w* - 1 O " s * o 2 - 1 , 2 _ 21101 2- 2 n02 * 1- c =§,—(—1)“(°2>e 0 , 0 3 2nd 0 2 Oi k k 2 W - — 2nd * 2 2 W = ‘1 (—£—)n(2no ) e 0 , 1 l-e O O * k = k , with observation of the linear functions of Xt and t 2 mor _1_ 02 2:102 02h * = .5.“ .95 0 0 x, (1-6) (211 4> e (xt+ 2). t O 2 CO ”or g o2 no2 2 'k c =<~L>“(2n -—Q-)e 0 (ML) . 1., C32 C32 0 so that observation begins at the point 2 2 z“01 mor 1 2 2 2 I“: 211' % ano H {100 <<—9-1_€) (—2—> e “‘0’ (—9—1_e> 211 e ). C O Equivalence of the two classes is in the sense of what might be termed lens equivalence. That is, the normalized problem views linear functions of the process Xt and the time t of the original * * problem. Thus, if X * = axt + b, and t = ct + d, and if the t * * optimal boundaries in the normalized problem are a.(t ),*then the a.(t )-b optimal boundaries in the original problem are ai(t) =«—£1;———- Considering the normalized problem,the posterior probability measure given Xt = x agrees with (2), but with b = l. Letting 2 2x1 2. ___. 2 2t e U = ($1) ’ suppressing its dependence on n, we have _ -l px,t(0) - [1 + U(x,t)] ' 1 dpx cm“) = [1 + ”(XMJ 1U(x.t)¢(w,’f.;)dh . Two strictly increasing continuous functions a0 defined on [T0,m) and a1 defined on [T1,a0, with O 5 T < T , 1 O a0(T0) = a1(T1) = o, and tao < 81 on [T0,m) determine a sequential procedure in the following way. The two functions partition quadrant I into three regions: D0 = [(y,t): y s a0(t), t 2 To] "stop and accept" D1 = [(y,t): t < T1 or y 2 a1(t), t 2 T1] "stop and reject" B = [(y,t): y < a1(t), T1 < t < T0, or a0(t) < y < al(t), t 2 To] "continue sampling". Suppose the first observation Xt' on the process is at time t'. The procedure is: observe “Xt.H. If (“Xt.“,t') 6 Di, i = 0,1, stop. If (“Xt.u,t') E B, continue observation and stOp at the first t > t' such that “Kt“ = ai(t). If i = O, accept; if i = 1, reject. A procedure of the type just described will be denoted by (a0,a1,T1,TO). Since the procedure depends on norm and to avoid plathoria of notation, D0 will interchangeably denote the subset of quadrant I given above, and the subset of n+1 Space given by [(x,t); (“XH’t) 6 DO], and similarly for D1 and B. The meaning in each case will be clear from the context. In general, the domain and range of the functions that map (x,t) into (Hx“,t) will not be distinguished. We now define R(x,t,u) to be the conditional risk given Xt = x at initial time t, for a procedure of the above type which observes a process with drift u, and with sampling cost ct in- curred at the onset of the observation. On D1 the conditional risk is given by ct if u # O R1(xstaLL) = ct + 1 if u. = 0 On D the conditional risk is given by O k R (X,t,p.) = ct + Wlfiu“ We restrict consideration to boundary functions ai(t) for which the conditional expected sampling time T, given Xt. 
= x', for a process with drift H: is finite for all * (x',t') E B, and p. E(T\Xt, = x',u) < co. (A) ** Theorem. On the set B the conditional risk has con- tinuous second partial derivatives in X1 and t for every n, and satisfies the partial differential equation B B l B Rt(x.t,u) + E uiRi(x,t,u) + 3'2 Rii(x,t,u) = O, for each u (3) with subscript i denoting differentiation with respect to xi- The proof of (3) is based on Theorem 2.1 of Doob's paper [7]. We now fix u # O, and consider an arbitrary initial position X .= x'. On the boundary U U (a,(t),t) we define a continuous t u i=1,2 tzTi function g by g“t' Note that T is the exit time from the continuation region. The conditional risk RB in (3) can then be considered as the expectation Later in the chapter we make further restrictions on the boundary functions. ** Thanks to Dr. Vaclav Fabian for pointing out the application of Doob's work to this result to me. 10 0f 8U(XT,T) with initial position Xt' = x at time t'. The conditional distribution of {th t 2 t'}, given 1 _ I - I I _ . Xt' — x , IS the same as that of {x + “(t - t ) +-/.2 gt-t" t 2 t'}, where gt is Brownian motion with covariance matrix 2tIn. We may thus consider gt instead of Xt" In terms of gt, the stopping time T is the first time I I 1 I (x +u(t-t)+f§§ ..t)€ U (ai(t),t);t2t. t't i=1,2 The stOpping time T is equivalently determined by the first time (f2 (x'-ut') + g t) E U (/2 (ai(t)-ut).t); t 2 t'. t’t" i=1,2 which in terms of gt can be considered the exit time from the continuation region defined by the boundary ([2 (81(t)1lt),t) (i = 0,1) with initial position /2(x' - ut') at time t'. Letting s' = -t' and r = t-t', the stopping time is equivalently determined by the first time (/2 (X' +uS') + 57: S'-r) E U (./2(ai(r-S')-u(r-S')).S'Mr). i=1,2 ' Thus we are observing a trajectory process in the sense of [7, p. 256]. Let T = T - t', and define h” - war-3'», s'-:r> = g”,T) If H(x,s) is the conditional expectation of h? given the initial value gs = x at the initial time s, then under assumption (A) the conditions of Doob's theorem [7, Theorem 2.1] are satisfied, and we get ll 2 a. _ B Bxi The theorem now follows since for each fixed u RB = Tit/2e + us) ,s> where s = -t. The proof for the case u = O is similar, the only dif- . O I O I u ference being in the definition of g . Now we define the transformed risk 2x-52 “1 i 2 mi H(X.t) = jR(x,t.u)e de 0(u) On D , H is given by :13 H A >< " V II H0(x,t) = ct + U(x,t)j(ct + wlllu‘lk)¢(u,zt‘-.%)dh 2 3X21 EX. -'-- 2 w (EEL)J e t 2 g' F(j +‘—'- = ct + U(x,t)[ct + W1 2 j! (P) 1:0 TO + 2 2 _ 2.1. m (__l 3 2t 2t 9 = ct(1 + U(x,t)) + W1U(X,t) Z ., i=0 3' k n+k 2 ’2' ”j + T) (E7 n l‘(j + 5) = ct(l + U(x,t)) + F0(x,t) (ct+l) + U(x,t)jct¢(n,1:-,-tl-)du = ct(l + U(x,t)) + 1 . 12 Furthermore, for nonnegative integer m m (3x2)L 2m X l i Hp.“ Mars-Bdrm = E C (Lam) —— . I t t L=O n tL+m with m m-L cn(L,m) = (L) H (n+2m-2j) for L < m: cn(m,m) = l . i=1 For 2x? = 0, the sum in H0 is k n+k 2 2 NT) (3) ——n—- I‘('2') so that the smallest value of t for which H0(x,t) could equal H1(x,t), namely that value of t for which HO(O,t) = H1(O,t), call it tz, is n+k 2 n I‘( ) t 2(W 2 )k+n k+n > O z 1 Q We now investigate the function H(x,t) on the region B, . HB B . which we call (x,t). From (3), we get R (x,t,u) has continuous second partial derivatives in X1 and t and satisfies the partial differential equation RE(X’C:U) + zuiRE(x,t,u) + % xR§i(x,t,u) = O for each fixed n. 2 (mi) Since S(x,t,u) = exp[2xip,i - 2 t] satisfies the equation St(x.t,h) + 2. 
2811(x,t.u.) = (-(m:)/2)S(x.t.u) + 5(mi)8(x.t.w) = 0 for each p, B we get that T(x,t,u) = R (x,t,u)S(x,t,u) satisfies the equation TC (X,t,p,) + 5; grii(x,t,p,) = 0 for each u, 13 in view of the following calculations. T (x,t,,,) + 3; gr .(x,t,p,) = (RBS + RBS ) + amt” s + 22333, + t ll t C ii i 1 ZRBSii) = (RE + {hiai + a 2R:1)s + RB(st + s 2511) = o , since Si = “is. It should be noted that (3) was used in the above derivation. Now HB(x,t) = RB(x,t,O) + j T(x,t,u)dp = RB(x,t,0) + H'B(x,t). From now on we restrict the consideration of boundaries to those for which * E[TEx,t,u] < t + m where m is finite . (B) For such boundaries we shall show that Hp(x,t) satisfies a dif- ferential equation.g Let L ? §EI+ 3 E 335. be a differential Operator and C:(B) be the space of inIIEitely differentiable functions with compact support contained in B, and endowed with the topology of Schwartz [8]. We observe that H'B is bounded on compact sets. Indeed, ,B O H (x,t) < j(c(t + m) + Wluuflk)3(x,t,u)du = U(x,t)c(t + m) +>F (x,t) where m as given in (B) is the bound on the expected continued sample time. Hence, U(t) = If H'B(X.t)w(x,c)dxdc This does not severely restrict the problem as can be seen from Sections 6 and 7. 14 exists for all I E C (B). Clearly, H is linear and continuous m c in the t0pology of G:(B). This implies that U is a distribution. Using the definition of the differential operator on the space of distributions as in ([9], p. 250) we shall prove the following theorem. Theorem. Let L and H be as above, then Ln = 0. Proof. By ([9], Def. 23.4) we should show that U(L*¢) = O for all t e C:(B). Noting that ([9], p. 249) L*¢ e c:(B), we get n(L*t) = ffH'B(x,t)L*¢(x,t)dxdt = ijfr(x,t,u)dhji*w(x,t)dxdt, and by Fubini's theorem we may write [KL-kw) = J'[MH.T(X,C,u)L*w(X,t)dth]dp . For each n, T(x,t,n) is locally integrable and hence by [9, p. 249, 250] HCL*w) = jljfLT(x.t.u)t(x.t)dxdt]du = jjlfLT(x.t.h)v(x.t)dh]dxdt since the transpose of L* =‘L. But for each #2 LT(x,t,n) = 0 giving U(L*¢) E 0. Thus the theorem holds. (C) Assume H':+1(x,t) and Hi:(x,t) are continuous. Then we get that Hi+1(x,t) and HEi(x,t) for (x,t) 6 B are continuous by the definition of HB and prOperty of R(x,t,0). Under the assumptions (B) and (C) we get Corollary 1. H£B(x,t) + k E H£E(x,t) = O. For M = 0 we get from (3) that RB(x,t,O) satisfies the equation RE(x,t,O) + k 2 REi(x,t,O) = 0. Hence by definition of HB(x,t) we get Corollary 2. HE(x,t) + k g H:1(x,t) = O. 15 Remark. In [2], it is claimed that the analogue of HB(x,t) for his procedure and the problems satisfies the equation of Corollary 2. However the conditions for the validity of the equation are not stated precisely. The procedures for which the analogue of Corollary 2 is valid for [2] are not clear to this author. In the following example we show that there do exist procedures for which assumptions of our Corollary 2 are valid. We now demonstrate that for a sequential probability ratio test (SPRT), in fact H'B(x,t) does have continuous derivatives of the first order in t and of the second order in x, so that on B, .B .B _ H2 (x,t) + % H11(x,t) - 0 Let 80 < O < a1 and %-= slope define a SPRT for testing HO: u = 0 vs. H1: p = M > O with cost of sampling c per unit time and loss of acceptance (loss of hitting the lower boundary) given by Wl‘p\k. 
Let RB(x,t,u) denote the conditional risk of starting observation at the point (x,t), having incurred cost ct at the onset of observation, and continuing observation of the process with drift u until the boundary is contacted, with additional loss of W1\u\k if the lower boundary is contacted. Using the notation: bi(t) = a1 + W(x,t,u) = (M - 2p.)(b1 - x) and ‘L(x,t,u) = (M - 2“,)(bo - x), h = a - a0 = b1 - b0 , then on [t > 0, b0 < x < bl] = B, and -m.< H < m , I“ 16 RB(x,t,u) = ct + [ew - eL]-1[ew - 1]W1\u\k + c(2(W(eL - l) + L(1 - ew))/[(ew - eL) (M - 2p,)2]) = ct + V(X,t,p,)W1\p.‘k + (:Q(X,t,p,), where V corresponds to the probability of hitting the lower boundary, and Q correSponds to the expected continued sampling tune. This calculation is an application of the results of Section 3.11 of Lehmann [a]. (RB(x,t,’-2‘-) is a limit.) Now on B B W L -I k 2 2 h R1=[e -e] [-Wllul (M-Zun’rfifi‘rfi—r], R’il = [ew - eLJ'lt-wllelkm 2“,)2 - 2Ch(M - 2L0] , 2c M - 2p R1: = c + [ew - eL]_1[W1\u\k mug (M - 2,.) + (% (eL - e") + l‘21-3-1' (M - 2U))]a so that using the fact that I N r: v I 3 n M II D b‘2’1‘0‘4 ' 2p») -u(M it is easy to see that B Before proceding, we show that Q(x,t,u) < m, a constant independent of x,t, and u, by extending the definition to B closure, Q =Q(b1(t).t.u) = o , and utilizing the facts a) Q(x we, t + m) =Q(X.t.tt) b) sup Q(x,t,u) < m for fixed t bOstbl -oo k , e e sup Q(X,t,p,) < 6 ° bOSX:b1 p>k 6 - + - Similarly k may be chosen, and letting k6 = max [k6, \k \], e e SUP Q(xstsu) < 8 s bOSbe1 NPR, a fact that is conceptually evident. But Q(x,t,u) is continuous (Q(x,t,%) = (b1 - x)(bO - x)) in x,t, and n, so that on the compact set [b0 5 x s b1, \u\ s k] it is bounded. Thus Q satisfies b), and indeed, if m is the bound on Q, H'B(x,t) = I RB(x,t,p)S(X,t,u)du S §(C(t + m) + W1\p‘k)3(x,t,u)dp = U(x,t)c(t +'m) + Fo(x,t) Now for arbitrary (x',t') E B, we let M M R - [tog t s t1,b0(t) — a0 2 t S x s b1(t) a a < a < a < a 1 define a rectangle containing (x',t') and contained in B, and Show that sup _lT2(x,t,u)\ s Y(u) integrable du . (X,C)ER 18 * \ Then 3 , p. 126 of Loeve [10] says HéB(x,t) = j T2(x,t,u)du (x,t) e E. (4) Now 2 2 szl = HRS - uRB>Sl < [lel w 112an . and R§(x,t,h) = c + wllhlkv2 + d22(x,t.h> But \V2(x,t,n)\ = New - eL]'1 %[M - Zull and \Q2(x,t,u)\ = \2[(M - 2,.)(ew - eL)]-1(%'(eL - e”) + %!.(M - zu))\ satisfy the conditions a) and b), where the sup in condition b) reads b6 5 x S bi. Thus on R \Rgl S c(l + m)‘+ W1\u\k m. On R, S(x,t,u) = U(x,t)Q(u,§3%9 s U(bi(cl>,t0>g where I - (t /2 )15 [:9 ( - (b'(t )/t ))2 b'(t )lt ' 1 “ ex? 2 “ o o 1 3 R < o o 1 g = (tl/Zn)% b6(tO)/t1 < h < bi(t1)/to = (t /2 )% [:9 ( -(b' t )/t ))2 > b'(t )/t 1 " ex? 2 “ 1( 1 o 1 B 1 1 0 ° Although g is not a probability measure, it has the property that all of its "moments" exist. Thus 19 \T2\ s U(bi(t1),to)(\Rg\ + u2\RB\)g(u) integrable du, and so (4) follows. The corresponding proof that Hi: exists is similar. The continuity of the derivatives is clear. We now return to the problems at hand. The remainder of the paper will concern itself with numerical approximations to the optimal boundaries for problems characterized by the triple (c,k,W1). From this point on, it will be assumed that the boundaries are such that LH' = 0, so that LHB = HE(x,t) + % Z Hii(x,t) = O on B. We also note that F1(x,t) = 1 and F0(x,t) satisfy the condition LFl = 0, 1 = 0,1, since 0 k F (x,t) = fwluun 3(X,t,u)du = [3(x,c,u)dAje gm + —> H0(r,t) 2t G(r, t) + w 1U(r, t) 2 j! 
j-0 m + g) A n “0'7;- VN = G(r,t) + F0(r,t) HE(r,t) +-;-HBr (r, t) + (“—'— r1)HB(r’ t) = 0 , and F0(r,t) and F1(r,t) satisfy the same differential equation. The last equation follows from 1 3 BlB__B 32(2-132-—22._ Ht *5 “u ' 0 " Ht + 22CHrrxi< Exp + Hr((2b(i) 2 - x1031) 2)] x2 — B l- B B l _ .1. = B l_ B n-l B _ Ht + 2 {an + Hr(}:(r r3») Ht + 2 an + (—-21. )Hr , The risk function H(r,t) for an arbitrary procedure (a0(t),al(t),T1,T0) is given by H1(r,t) on D1 H(r,t) = H0(r,t) on D0 HB(r,t) on B H(r,t) is continuous, of course. Now the general properties of the problem have been displayed. Let (50(t),al(t), Ti,T6) denote the optimal procedure, depending of course on the triple (c,W1,k) and the dimension n. The risk function H(r,t) will satisfy the following free boundary condition. 21 lim H (r,t) = lim H (r,t) i = 0,1 B r i r r6 rED r—oa.(t) r-.5,(t) l 1. That is: the r-derivatives "match up" on the boundaries. The free boundary condition is determined as by Chernoff in [2]. The posterior distribution has nothing to do with the proof. We now essentially repeat his hueristic proof. Clarifying remarks are given in [2]. a0(t) and a1(t) defined on ft0,m) determine the trans- formed Bayes risk H(r,t) for all (r,s) with s > t . It is O desired to extend ai(t) backwards to uniformly minimize 6100 as °° 1 H(t) = g H (r,t)dr + I H (r,t)dr +'j H (r,t)dr a0 a1 i B for t < to. On the boundary H = H , so a a 0 l m dH _ O B l dt-gtdr-i-J‘ thr-l-J‘thr a0 a1 If HB(a ,t) # Hi(a.,t) t 1 t 1 _ O B an increase in a0 1f Ht > Ht’ and similar adjustments for other possibilities, would increase 3%. and thereby decrease H for t t . < 0 If the optimal boundary has finite slope, we must then have on the boundary mg} Trim-w ‘ 22 ._ B_ Differentiating H1(ai(t),t) = H (ai(t),t) along the boundary, we get 80 i B Hr(r,t) = Hr(r,t) along the boundary. Since tz is the smallest value of t for which we could possibly step and accept For t small, 2 ' 2 2 -1 cum) s 90,60) = [1 + (fl) 1 is small, which would lead one to believe T1 strictly greater than zero. This can be shown easily for sufficiently large values of c. That is, for otherwise the sampling cost of getting to the acceptance region is larger than type I error = 1. However, the author has not shown in general that T1 > 0. More will be said about this in Sections 6 and 7. SECTION 2 The method we propose to use to approximate 50(t) and 51(t) is the following. For large fixed t, we approximate 50(t) and 51(t), using the known functions HO and H1 and a polynomial function to approximate HB. Then finite difference techniques are used to "fill in" the boundaries backwards in time. This section deals with approximating 50(t) and 51(t) for large t. Bars in general will denote optimal quantities, and primes will denote approximations to optimal quantities. Also, an arbitrary procedure will be condensed to (a0(t),al(t)): T1 and T0 being understood to be the values of t for which a1(t) and a0(t) equal 0. We note that the notation H(r,t) hides the procedure (a0(t),a1(t)). Also, the risk function associated with a procedure depends on the functions a0(t) and a1(t), and not just on the values of the function at a fixed value of t. That is, let HA(r,t) be the risk function corresponding to the procedure (ao(t),al(t)) and HB(r,t) be the risk function correSponding to the procedure (b0(t),b1(t)). Then a0(t0) = bo(t0) and al(t0) = b1(t0) by no means implies HAB(r,tO) = HBB(r,tO). 
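For later reference, the pieces of this approximation that are available in closed form have all been derived above: U(r,t) = (2π/t)^(n/2) e^(r²/(2t)), with 1/(1 + U(r,t)) the posterior probability of H0; H1(r,t) = ct(1 + U(r,t)) + 1; and H0(r,t) = ct(1 + U(r,t)) + F0(r,t), where F0 = W1·U for the constant loss problem and, by the moment formula of Section 1, F0 = W1·U·(n/t + r²/t²) for the squared loss problem. The listing below is a minimal illustrative sketch of these quantities only, written in Python rather than the Fortran of the Appendix; the function names and the root bracketing are ad hoc, and only the cases k = 0 and k = 2 treated in Part II are written out.

# Illustrative sketch (not part of the thesis): closed-form pieces of the
# normalized problem; only k = 0 and k = 2 are covered.
import numpy as np
from scipy.optimize import brentq

def U(r, t, n):
    # U(r,t) = (2*pi/t)**(n/2) * exp(r**2/(2t)); 1/(1+U) is the posterior
    # probability that mu = 0 given ||X_t|| = r.
    return (2.0 * np.pi / t) ** (n / 2.0) * np.exp(r * r / (2.0 * t))

def H1(r, t, c, n):
    # transformed risk of stopping and rejecting
    return c * t * (1.0 + U(r, t, n)) + 1.0

def H0(r, t, c, n, W1, k):
    # transformed risk of stopping and accepting, H0 = ct(1+U) + F0
    u = U(r, t, n)
    if k == 0:
        F0 = W1 * u                                # constant loss
    elif k == 2:
        F0 = W1 * u * (n / t + (r * r) / t**2)     # E||mu||^2 = n/t + r^2/t^2
    else:
        raise NotImplementedError("only k = 0 and k = 2 are written out here")
    return c * t * (1.0 + u) + F0

def indifference_point(t, n, W1, k):
    # r at which H0 = H1, i.e. F0 = 1 (requires t > t_z); solved in log form
    def g(r):
        mom = 1.0 if k == 0 else n / t + (r * r) / t**2
        return np.log(W1 * mom) + (n / 2.0) * np.log(2.0 * np.pi / t) + r * r / (2.0 * t)
    hi = np.sqrt(t)
    while g(hi) < 0.0:
        hi *= 2.0
    return brentq(g, 0.0, hi)

The value t_z of Section 1 is the t at which this root (the indifference point r_I(t) of Section 4) first reaches zero, and the explicit forms used in Sections 6 and 7 are special cases of the same expressions.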
Let H(r,t) denote the optimal risk function: ie, the risk function associated with the Optimal procedure (50(t),al(t)). Throughout this section i = 0,1. 23 24 Suppose the optimal procedure were known. Then differentiating the equations HB(ai(t),t) = H1(Si(t),t) and recalling the free boundary condition H‘idiuno H2620)» (1) we have H‘yaium) H2(52(t).o , (2) and differentiating equations (1) and (2), we get dai -B ‘i _ d5. _. _ H12(a (c) o— + H12(a. c) = H21(a2(t>,c>;E-‘-+ H{2, dai _ -i _ da, “i _ _‘1‘2(a. (c) c)-- 2+H2(ai,t> H12(ai(t)’t)d—t—L+ H22,t>) = - H{2(52,t)>2 The expression Hfil - Hi1 may be reduced. .. —’ - - .. —-B - 171:1(ai(t),t) - H:1(ai(t),t) = -2}—1§(ai(t),t) - 3:17%-)-H1(ai(t),t) Hil<52 = -2H2i<52,c) - 2“}; H3550») - Hildium) i -2[Gz(ai(t),t) + 6151<0N +2c11<52(t>,c>] ZEF; (a. 1(t), t) + (2 :(t))F1' l(ai (t), t) + -Fi 1(ai (t), t)]. The second expression in brackets is zero, while the first term is 25 -2[c(1+U(ai(t),t))] - 2cth2(ai(t),t) + (5%:%E7)U1(ai(t),t) + -1111(s (t), t)]. The second term in this expression is also zero. Hence fifi1(ai(t),t) - H:1(ai(t),t) = -2c(1 + U(ai(t),t)) -B - “B - However, H12(ai(t),t) and H22(ai(t),t) are unknown (even assuming 51(t) known). The following set of calculations holds for an arbitrary procedure, so the bar will be dropped. Differentiating the equation HI:(r,t)+(r21—;HH(rt)+§1-H(r,=t) twice with respect to r and once with respect to t, we have B n-l B n-l B 1 B _ Hrt + (2r ) Hrr - ( 2) Hr + 2 Hrrr _ O ’ 2r B n-l B n-l B l B Hrrt + (2r ) Hrrr - 2(2 2) Hr r+ 2(n 3) Hr + 2 Hrrrr - 0’ r 2r3 B n-l B l B _ Htt + (2r ) Hrt + 2 Hrrt — O ’ from which it follows B _ l B _ n-1 B n-l B Hrt(r’t) - '2 Hrrr(rat) (2r ) Hrr(rat) + (2r2) Hr(r9t) : B _ 1' B n-l B Htt(r,t) - 4 Hrrrr(r’t) + (E;‘) Hrrr(r’t) : n-12 n-l 2 l B + [(7) -(n2 ——1)](—) HB 1_,(r t) + [(‘21— - (T) ](:3') Hr(r.t) We now consider t fixed and large, say T, assume (50(t),al(t)) known, approximate HB(r,T) by a 5th degree polynomial, 26 compute its third and fourth derivatives, and substitute them for H8 (a (T) T) and H8 (5 (T) T) This in turn enables us to 111 i ’ 1111 i ’ ° . ‘B - —B - approx1mate H12(ai(T),T) and H22(ai(T),T). We find a 5th degree polynomial fT(r) subject to the follow- ing conditions. f2<5i> = HB(aTnim ,1") , £26200) = H’iéim ,1) , f'T'<52) = H’ilduicrm) . Suppose polynomials P0(r), P1(r), P2(r), Q0(r), Q1(r), and Q2(r) satisfy the following conditions. PO(5O(T>> = 1, PO<51) = P6(50(T)) = P5<51) = P3<5O = Paella» = o, Pi(5o(T)) = 1, P1<50> = P2<£1(T>) = Pi(51(T)) = Pg<£0(T)> = P'1'(51(T)) = 0. P2'(50) = 1, P2(50(T>> = P2<51m> = P2<50] P2.T> Qo(r) + Hidlam) Q1(r) + [-2c(1 + U(Elmn‘» + H}1(£1,T)] Q2 - HB(50(T),T)) +— 22 21- -36H 15011) T) 1 -2411 (a (T) ,T)] + in -9H (a01T)T) + 3H:B (a 1T),T)] . m' _ég—B‘ "B- 1_-—B.. £21a21T2) - 2, 1H 1a11T),T> - H 1aO1T>,T)) + 22 [ 24H21aO1T),T) -36H‘:(51(T),T)] +— :[ -3H11(a0(T) T) + 9111(a11T) ,T)] , I/I/ " _ :69 _B ‘ —B '- fT (80(T)) - - 4 (H (81(T).T) - H (80(T).T)) 4'“ 31:19? 1 k3 H11aO1T), T) + 16811 H(a1 (T) ,T)] + -,2H[3611(a0 (T) ,T) -24 H110: (T) ,T)] 12 14'1511T)) = 29%11‘1815121T2J2H - H1aO 1T) T2) + —— 31- -168H H‘i1aO 1T) T) X X3 -192H1(81(T) ,T)] + -—[-24H11(a0 (T) ,T) + 36H 11(s (T) ,.T)] 12 28 In view of the foregoing calculations, for large fixed t, say T, we find two constants a' and a' which simultaneously satisfy O 1’ the equations 1 b . ['ZC(1 + ”(31»T)]CH22(82,T) - H22(a; T)] = [H; (a!,T) - H12(a HT)] where b . __ m H12(ai’T) — " _fT (3 1..) 
”(2 1%)H11(a; ,T) + (2%.; :)2)Hb ”(8 ,T): Hb _ _]_._ fun 0 m 1 1'1" ")1 2 M221 ,T) — 4 T(a ) + (2;.)fT (a) + [(— - (n 2 _)]((s')2) , i H‘ilugxr) + [112-1-2- 1312211537) H‘iegm, Hb(a£,T) = Hi(s;,T) , b 1 i . H1(ai)T) H1(ai,T) , H:1(a;,T) = -2c(1 + U(a; ,T)) + Hi1,(a1 ,T), and with A = a1 - a 60 13 f'fl‘l/(a O)= (Hb (a', ,T) - Hb (80 ,T)) + :3 -3o Hb (80, T) - 24 H‘;(ai,T)j 1 b 2 b ' . . m I ._. - + + x E 9 H11(ao,T) 3 H11(a1,T)], and Similarly for fT(al)’ III I I III I I fT (a0), and fT (a1). These values a; are approximations to 51(T), and fT(r) is the approximation to H(r,T), 86 < r < ai. Of course H1(r,T) serves as the approximation to H(r,T) and is exact for r 2 max(al(T),ai), while H0(r,T) serves as the approximation to ‘H(r,T), and is exact for r S min(aO(T),aé). We note that just as H(r,t) hides the procedure, the notation for the spanning polynomial ft(r) hides the endpoints and the function value and first two derivatives at the endpoints. SECTION 3 Suppose now for large t, say T, we have a6(T) approximat- ing 50(T), ai(T) approximating 51(T), and fT(r) approximating fikr,T), a6(T) < r < ai(T). Now a mesh Ar, At is chosen and we consider the grid points (rj,T) between a6(T) and ai(T). That 2 is, we define integer = E_ K(a,Ar) [Ar] where [x] is the largest integer s x, and consider the points (rj,T) with rj = j'Ar K(86(T),Ar) + 1 S j S K(a{(T).Ar) . Ar will always be chosen so that the number of grid points, K(ai(T),Ar) - K(a6(T),Ar) 2 3. At these grid points, we define Hb(r T) =£ (r) j, T j . Throughout this section i = 0,1. We now employ an iterative procedure, supposing at t we have a;(t) as approximations to ai(t), and Hb(rj,t) 33 approximations to fifrj,t). First we find a;(t - At), the approximations to 51(c - At), then use Taylor series expansions to approximate i at grid points near the boundaries, and finally employ finite difference techniques to approximate H' at grid points away from the boundaries. 29 w79~‘ 30 From the equations da -B d5 EB (a (c) c) -—+ u12(5i,c)= 11(a (t) c) ——+H12(a.(c> t) it follows that :151 = (17‘1:Z(5.,t> = - 11mm. (c) c) 42‘; m)“ (aium) +‘(““'—‘——0H.1(8. (t) t) , 2(a. (t))2 _B estimates of H111(ai (t), t) would provide us with estimates of da ‘8 i H12(ai(t),t), and in turn of dt (and ai(t - At)). Accordingly, letting m(0) = K(a6(t),Ar) + 2, and m(1) = K(ai(t),Ar) - l (or some such), we consider a Taylor series expansion of fi8(rm(i),t) around (51(t),t), and define 6 - a;(t)) (a; (t) t) = ( H111 3 )[Hb(rm(i)9t) " Hi(8i(t)’t) - (rm(i) 2 . (r . a'(t)) I 1 1 “1(1) 1 2 )<-2c(1 + U(a;(c>.c>> i . + H11(ai(t).t))] , H‘l’zogumo =§ gnaw) c) -(-2-§-;%5-><-2c(1 + u> uga (c) o) + <—————§> Hi,c) , 2(a' (t)) and approximate ai(t - At) by 31 b i I H12(a;(t) ,t) - H12(ai(t) ,t) 2c(1 + U(a;(t),t)) a;(t - At) = a;(t) - At It is assumed that a;(t - At) 5 a;(t). At the grid points close to the boundaries, we use a Taylor series expansion to approximate H. At j = K(ai(t ~ At),Ar) we let Hb(rj,t - At) = H1(ai(t-At),t-At) - (8i(t-At) - rj) Hi(ai(t-At).t-At) (ai(t-At) - r.)2 1 + 2 41 [-2c(1 + U(ai(t-At),t-At)) + H 1(ai(t-At).t-At)] _ l 3 (a'(t-At) - r.) 
1 j b t and at the points near the lower boundary, namely K(ad(t-At),Ar) + 1 s j s K(a6(t),Ar) + l, we define Hb(rj,t-At) = H0(a5(t-At),t-At) + (r - 86(t-bt)) H2(86(t-At).t-At) j 2 (rj - 85(t-At)) + 2 [-2c(1 + U(a6(t-At),t-At)) + H31(aé(t-At),t-At)] (rj - a5>3 b ' + 6 H111(ao(t)’t) Hb (a;(t),t), lag computations, have already been computed, while 111 H211(a;(t-At),t-At) would (possibly) require values which are not as yet computed. The values m(i) are chosen with two thoughts in mind. First, it is desirable to choose the points as close to the boundaries as possible in order to reduce the residual error in estimating the third derivatives. Secondly, as we have just seen, the values of the points close to the boundaries are filled in using Taylor series expansions around a;(t). To then expand around these points (very 32 often) in order to estimate ai(t-At) is inappropriate. In- apprOpriate in the sense that the partial differential equation aspect of the problem is lost. At the other grid points away from the boundaries, namely those points (rj,t-At) with K(a6(t),Ar) +-2 s j s K(ai(t-At),¢r)-1, - b our approximations to H(r ,t-At) are the solutions H (rj,t-At) J to the simultaneous equations Hb(ri,t-At) - Hb(r.,t) (Hb(r b b ,t-At)-2H (r.,t-At)+fl.(r ,t-At)) + - ZAr2 b b 11"]. H (ri+1:t'At)'H (ri_1:t‘At) J b b b (H (r. ,t)-2H (r ,t)+H (r._ ,t)) + (1‘€)[ 1+1 2 l 1 2Ar b b H (r. ,t) - H (r ,t) n-l 41+1 1-1 + (zrj)( zAr )3 3 where only schemes for which 0 S e S l are considered. The choice a = 0 gives the four point explicit scheme, and the solution is simply b _ = b at b _ b b H (rj,t At) H (rj,t) + 2Ar2 (H (rj+1,t) 2H (rj,t) + H (rj_1,t)) n-l At b _ b + (zrj) 2A1: (H (rj+1:t) H (rj-1:t)) 0 For 0 < e S 1 we have implicit finite difference schemes involving six points, unless e = l, in which case only four points are con- cerned. The solution of the simultaneous equations in the implicit case is made easier by the tri-diagonal feature of the matrix [3]. ,4. a1“ 33 Note that for the explicit scheme, the order in which the values approximated using Taylor series expansions, and those in- volving the finite difference equations are calculated, is immaterial. The choice of e is dictated by time considerations and conditions discussed in Section 5. The following graphs indicate the four most frequent cases; circles indicate points whose approximating values involve Taylor series expansions, and X's points at which finite difference techniques are employed. Table A G 0 0 o /o /o x. . x. O o C 0 . X. C x. C X. 0 x. O x. O G C x. C x. O The iterative process is continued until the lower boundary crosses the axis. Suppose T00 such that I I _ 80(T00) > 0 2 aO(T00 At) Then we let T' 0, the approximation to T , be 0 I 80(T00) 0(Too) ‘ a6(Too ' 5") z T I = _ _ To T00 At (a. 00 Ato , and fill in the values Hb at the grid points (O,T6),(Ar,T6) ..... ........(K(ai(T6),Ar)~Ar,T6) with AC I I _ I __0 ! - ' .. '1‘To) ’ “1(Too) ' Ac (“1(Too) a1(Too At)) 34 as follows. b O H (0,T5> = H (0.T5) . and Taylor series expansions are employed to calculate ,Ar) + l and for j = K(ai(T6),Ar). b . . . ' H (rj,TO) for l S J S K(ao(T00) For K(a6(T00),Ar) + 2 S j S K(ai(T6),Ar) - l, a finite difference scheme is used with At0 substituted for At. From T6, the iterative procedure is continued in time steps of At, except for the initial step of length At - Ato, until ai(t) < 2Ar. The purpose of the first step is to be able to compare different schemes after the lower boundary crosses the axis. 
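Before the special handling of r = 0 and the step past T0' that are described next, it may help to see the four-point explicit update (ε = 0) written out for the interior grid points. The listing below is a minimal sketch only, in Python rather than the Appendix's Fortran; the array layout is illustrative, and the points adjacent to the boundaries and at r = 0 are assumed to be filled in by the Taylor expansions and the special r = 0 equation given in the text.

import numpy as np

def explicit_step(H, dr, dt, n):
    # one backward step t -> t - dt of H_t + (1/2) H_rr + ((n-1)/(2r)) H_r = 0,
    # applied only at the interior grid points r_j = j*dr, 0 < j < J
    J = len(H) - 1
    Hnew = H.copy()
    j = np.arange(1, J)
    r = dr * j
    lap = H[j + 1] - 2.0 * H[j] + H[j - 1]      # centered second difference
    grad = H[j + 1] - H[j - 1]                  # centered first difference
    Hnew[j] = (H[j]
               + dt / (2.0 * dr**2) * lap
               + (n - 1.0) / (2.0 * r) * dt / (2.0 * dr) * grad)
    return Hnew

For 0 < ε ≤ 1 the corresponding equations couple the unknown values at t - Δt, but the coefficient matrix is tridiagonal, so one sweep of the Thomas algorithm (or a banded solver such as scipy.linalg.solve_banded) suffices; this is the tridiagonal feature referred to above [3]. The stability and truncation-error considerations of Section 5 (for instance Δt/Δr² ≤ 1 for the explicit scheme, with stronger requirements in higher dimensions) apply unchanged.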
That is, T6 will vary from scheme to scheme. The procedure is similar, but with the following changes due to the special role played by r = 0. Defining in the obvious way B B H (tit) :H (02¢) = lim Hi(r,t) HE(O,t) = lim r10 r10 and similarly for all partial derivatives evaluated at r = 0, then since H€(0,t) = 0, and in fact all odd partials evaluated at r = O vanish, B H (r,t) . B l, B n-l l = B 3' B = l1m[H2(r,t) + 2 H11(r,t) + (-§-)--;--] H2(O,t) + 2 H11(0,t) 0 . r10 For the explicit scheme, we approximate fiKO,t-At) by Hb(0.t-At) = Hb>. Ar 35 and if an implicit scheme is used, the equation involved is ‘€(EA%) Hb(Ar,t-At) + (1 + 6 fig) Hb(0,t-At) 3 (1'6) BA'E-Hb(brst) Ar Ar Ar n t b +(1 - (1 -e) 45) H (0.0 Ar It will be shown in Section 5 that an implicit scheme is necessarily employed when n is large. Suppose T11 to be the value of t such that ai(t) < 2Ar. Then there are obvious estimates of Ti, including T itself. 11 SECTION 4 We now find the asymptotic width of the region B. As t gets large, the density of Hun under the alternative hypothesis becomes concentrated at %'= t , as is easily seen by noting that 1% in one dimension 2 . 2 l l h§i= 1314 < EM 0, with apriori probabilities p and l - p, losses L(rej‘HO) = W0, L(acc‘H1) = wl, and cost of sampling per unit time c. 20 and 21, with 20 S O S 21, determine a sequential probability ratio test (SPRT). That is, one which has boundaries M _ M. = zi + 2 t. We observe (Xt. t > 0) until Xt 21 + 2 t. If i O, accept; if i = 1, reject. Instead of considering the risk as a function of 20 and 21, let us consider the risk to be a function of x1 and h, with 11 = zffl, h = x1 - ho: k0 = 20M. In terms of XI and h, we sample 36 k ' h )sl aslongas M +-2't>> (8 -e ) kl'h x kl-h 1 "1 "1 + (1-p)>)) where k = 2% . :2 k1 or xl-h may be zero, and in fact, when p is small or large, the optimal procedure will be to reject or accept without sampling. We use the results and notation displayed in Lehmann, Chapter 3, to obtain the expression of the above risk function. The terms X X e 0 and e 1 represent the bounds on the likelihood ratio function L(x,t). _ x4Mt 2 M _ p(Kt: = x1H1) _ (1/2nt)%e 2‘ _ M(x ‘ 5") L(x’t) " (x =x‘H) " 2 " e ’ p t 0 _ (x) (l/Zfit)%e 2‘ M M "o ”(X ' '5 '3 >‘1 and x0 < M(x - E-t) < x1 iff e < e < e . The probability of hitting the upper boundary, given drift u, is given by “24.1 10(1 M) l-e -22 -211 11<1 M )_ex0<1 M ) em) = - e Then the probabilities of mistakes are given by 38 'X k X _ 1.23—0.— _‘__...(.)._..) = e 0&3 1 'X ' K e -e 1--e X0 e 1--e)\0 The expected sampling time Eu(T) given drift u, is given -1) x1 10 E (T) = lne 3(1):) + lne (L-Bm)) u Eu(1nL(X.1)) L(x,1) is the likelihood ratio evaluated at t = l. L(x,l) = M(x - g) and E (T) = l- [1 (31,1130) + x ("30‘‘2 M 2 1 11 x0 0 11 10 e '8 e -e k l -1))] 2 x 1 1 X 0 l O [x1(1-e )e + x0(e 1-1)e ] . In order to find x1 and h which minimizes R(x1,h), we set the first partial derivatives equal to zero. —-—— = 0 iff pf (W - kh) + ke (l-e )'_\ ax, o 2 h (1) x - x , = (l-pmw1 - lane 1 + He 1a-. “)1 39 ah 0 iff p[(w0 - kh) - k(l-eh)] (2) 1 - (1-9) e 1[(w1 - kh) + k(l-e “n, Dividing (l) by (2), we obtain 11 -h -h [(WO - kh) + ke (l-e )][(W1 - kh) + k(l-e )] 1 -h - = [(WO _ kh) - k(1-eh)][(w1 - kh)e 1 + k(1-e h)] 9 which reduces to (1:0 - kh)(w1 - kh) = 12(eh + e'h - 2) . (3) This equation involves only h. The LHS of (3) is a decreasing function of h for kh < min(WO,W1) and is S kzh2 < RHS for kh > min(W0,W1). The RHS is an increasing function of h. For h = 0, WOW1 > 0. 
Thus there exists a unique h satisfying (3). -4 - -2 _ 2 -2 2h 2h h — k (h + 4! + 6! o ....... ) o " 2 WOW1 - (W0 + W1)kh + k The approximation h' of h used here is obtained by neglecting terms of order greater than two in h. 2 2 2 2 _ I I = I wow1 (wO + W1)kh + k h k h , h. = l wowl k (W0 '1' W1) The approximation h' serves as an upper bound for h. .II [III itl.l .IIl' ll Vi I! I 1’ I‘ .l‘l.\llllll.[’k I [III]: ll." '1! .‘IIV‘I'I'I Illllllil 40 For a given h, 11 must satisfy (1) in order to be optimal. h' serves as an approximation to 11 - 10, so EL' serves as an approximation to 21 - 20. Returning to our problem, we define rI(t) to be the "indifference" point, that is, the value of r for which the risk of stopping and accepting equals the risk of stopping and rejecting. rI(t) is in the continuation region B. For the (c,W1,2k) problem in n dimensions, rI(t) is the solution of the equation 2 k 21 g-‘g— 1 = [w z c (L,k)rk+ ] (Zn/t) e t l L=O n t L We note that rI(t) does not depend on the cost of sampling c. By the definition of tz, rI(tz) = 0 . Considering the problem of testing the simply hypothesis rI(t) H : u = 0 vs. the simple alternative H : u = with 0 l (r1(t>)2 ' 2t _ (rI2 e 2‘ > , B. (t/211)2 e I5 ‘0 ll 1 u+uugomn ’ L(rej‘Ho) = 1 , 21 1 L(acc‘Hl) = w1 s(“pu \rI(t),H1) = U + 0(1c1r2> + 0(1r“>] , l-e H(r+Ar,t) - 2H(§,t) + H(reAr,t) (2)1 2 = L‘s. Ar 1 ( 2 )[H11(r,t> 2 z. + fg- H1111(r,t) + 0(Ar )] , (2:1) €[H(r+Ar, t-At) - H(r -Ar ,t -At) 2r 2 . -1 2Ar ] = (3?"961H1(T:C'At) + A%—'H111(r,t-At) 2 + o<1r4>1 = <§§1961H1(r,t>-1cH12(r,t) + Afi—jn (r,t)+o<1c2>+0<1t1r2>+0(1r‘>1 . 111 (El) (1_€)[H@Ar4t) - H(r-ArLt) 2r 2 _ Ell Ar ZAr 1 ' (2r )(1-6)[H1(r,t) + 6 H111(r,t) + 0(Ar4)] , 41 42 so that adding these five equations, on the region B we have, HB (r,t) - HB(r,t-At) + 61: (HB (r+Ar,tjt) -2HB(rLt-At) + HB(r-Ar,t-At) ) At 2Ar2 + (11" 1) (1‘[B(I‘+Ar;t it) " “B (1' “Ar Lt ”At) )] 2r 2Ar B B B B + (1-6)[(H (fiArnt) 'ZH (r,tl‘l'HBLr'ArLtl) n 1 H (fiArlt) -H (If-Aria 2Ar2 + (2r )( 21: )3 _ A£ 113 113113 As. 8 " ' 2 11,220: t) + 24H’1111(r t) + (12— )Arzfllnmt) ' 2 album” 1 - (gil>61cu‘132(r,t) + 001:2) + cum?) + our“). (1) Using formulas derived in Section 2, page , we may write a (r ,c) + (9%) H§(r.c> . H12-(—;H)H Zr 11 HB 1 HB n-l HB “’112(r t) = 51111116 t)- (2r ) H.111(r t) + 2(-‘2‘-;—) H111(r.t) - 20%) I{2(rat) 9 2r H22(r c) = lH1111 + [($2.21)2_(“_;_.1)]:2 H1110; > -1-1 2 1 +[<1-2—)-(“—'2'—)]—3 H1(r c) Hence, the right hand side of (1) may be written = _A£_ B B RHS H (r ,t)- (2 %)At H111(r 8 111113—t>-[< 2 -:-1->2-<—)1:§B H11(r, c) ' [(n 2 "1'" (L 2 _1)23L3'“B( ’t) + A271311111“ t) (112—12 “2“ “211“” t) Le _ 2:.1. + 4 eH1111(r t) + 1: a; TI) H111 + our) 43 2:1 2 B -1 2 '1 = [At— + g};— + 25 e] H1111(r,t) + [-(2-;-) At + (121.)Ar + M 6617)] 8 2 A__ At B 2:1 2 .E:l H111(rt)+[(—'—)— <—'2-'—“12>]( 2e>H11 -<2)] 2215' r (A93 - 15—;- e) Him-x) + oucz) + omzsrz) + our“) 2r r If e = 0, corresponding to the four point explicit scheme, the RHS of (1) is given by 2 RHS = 0(At) + 0(Ar) . . . r For dimen31on n = l or 3, the ch01ce At = A3— reduces the error term to RHS = 0(Ac2) = 0(Ar4) , and for other dimensions, the terms involving the third and fourth derivatives are cancelled. For 0 < s s 1, corresponding to implicit finite difference schemes, the RHS of (l) is reduced by the choice 2 _L. 6At n N:|r—- in the case n = l or 3 to 2 2 4 RHS = 0(At ) + 0(AtAr ) + 0(Ar) , and for other values of n to terms not involving the third and fourth derivatives. 
Similar calculations yield 44 B B H840 Lt) - HB (0 it ”At) + n (H (Ar : C “At) " H (0 LtLAt) ) At 6 2 Ar 3 B 2 2 2 - B + n(1-€)(H m”) 2 H (OLD) = [- J—“B t + “—A—ZZ + —A-“4 ‘ e] H1111(0.t) Ar + 0(At2) + 0(AtAr2) + 0(Ar4) , utilizing the fact that all odd partial derivatives with respect to r evaluated at r = O vanish. Only in n = 1 dimension can this error be reduced to 2 2 4 0(At ) + 0(AtAr ) + 0(Ar ) by choosing _A.r_2_ 6At rohd 6: The following table gives the number of calculations necessary at each grip point for the four point explicit and six point implicit schemes. multiplications additions and and divisions subtractions (non-integer) explicit, one dimension 1 ' 4 implicit, one dimension 5 4 explicit, multi-dimensions A 4 implicit, multi-dimensions 10 10 The free boundary condition makes the analysis of stability virtually impossible for the author. It appears that the stability condition certainly depends on the dimension n, and quite possibly on the power k. The best that can be hOped for in one dimension is that the stability condition is the same as the fixed boundary case: 45 Atzsllz 0Se<.5, Ar 6 unconditional stability .5 S e S 1. For the four point explicit scheme, this condition is A2,. 2 s 1 . Ar In higher dimensions, these conditions must be strengthened. For the explicit scheme the coefficient of Hb(rj ,t) is 1 At _ (n-12At 2 4r.Ar ZAr J which is negative for small values of rj, the smallest of which is Ar (for which this coefficient is used), if n 2 4 . At r = 0, the coefficient of Hb(0,t) is Ar which is negative for appropriate values of At, Ar, and n. The existence of negative coefficients insures instability, and in general the explicit scheme will be dropped once the lower boundary crosses zero (if not before) if n is large. The larger n, the larger the value of e chosen, because the computations show that small choices of 6 give instable schemes. However, no in- stability problems have been encountered by the author with the choice 3 = 1. If for large n, it is stated that the explicit scheme is used, it is understood that at or before T', an implicit 0 scheme is substituted. 46 , 2 . . In general At is chosen 5 Ar , because it is demonstrated by the computations that 13 any dimensions large values of the ratio At. 2 give instable results, regardless of the value of 6- Ar SECTION 6 Computations were carried out for the constant loss and squared loss problems. This section deals with the constant loss problem. For the (c,W1,0) problem in n dimensions, 0 r2L (W 2 C (L30) _) = W : l L=O n tO+L l and consequently the indifference function is given explicitly by rI(t) = [t(nln(%;) - 21nw1)]5 for 2 _ n t>tz-2n(w1) , for then W1U(rI(t),t) = l . The approximate width of the region B, call it AWB(t), as derived in Section 4, is given by [t(nln(;—n) - 21nW1)]35 w1 51(t) - a0(t) a AWB(t) = 2ct (1 + wl) . AWB(t) is maximized at 2 — n — tm - 2ne(w1 ) — etz as is seen by setting the derivative with respect to t equal to zero. 47 48 Although there is no reason to believe that the approximation AWB(t), based on an approximation that is an upper bound in a simple vs. simple testing case, would serve as an upper bound in this simple vs. composite case, it does in fact turn out to be an upper bound (for ai(t) - a6(t) and presumably for 51(t) - 50(t)) in the constant loss case whenever t > tm. For fixed WI and t, the approximation improves with in- creasing c, in the sense that the ratio ai(t) - a6(t) AWB(t) S l is an increasing function of c. 
For fixed c, and a fixed distance beyond t_m (t_z depends on W1 but not on c), the approximation improves with increasing W1 for small fixed distances beyond t_m, and contrarily improves with decreasing values of W1 at large fixed distances beyond t_m. For c, W1, and t fixed, the approximation gets progressively worse with increasing dimension n. Tables 6.a and 6.b illustrate these facts.

Table 6.a displays the ratio (a1'(t) - a0'(t))/AWB(t) at different values of t, corresponding to varying values of c, for the two problems and procedures outlined. Starting value denotes the value of t at which the polynomial procedure was applied. Only major changes in the procedure would affect the ratio to any significant degree. We note that the value of ε after the crossing would certainly not affect the ratio at values of t before the crossing (which all of these are). The procedure is included primarily for the sake of completeness.

Table 6.a

Problem: (c,1,0) in n = 1 dimension
Procedure: Δt = Δr² = .25², implicit, ε = ½ - Δr²/(6Δt) = 1/3, starting value T = 150
c = .02, .01, .005, .0025, .00125

The ratio (a1'(t) - a0'(t))/AWB(t) at t = 100 and t = 50:

               t = 100    t = 50
c = .02         .992       .982
c = .01         .972       .938
c = .005        .911       .831
c = .0025       .767       .656
c = .00125      .547       .455

Problem: (c,1,0) in n = 10 dimensions
Procedure: Δt = Δr² = .25², implicit, ε = ½ before T0', ε = 1 after T0', starting value T = 100
c = .02, .01, .005

The ratio at t = 70 and t = 30:

               t = 70     t = 30
c = .02         .765       .649
c = .01         .563       .448
c = .005        .368       .285

Table 6.b gives the ratio for varying values of W1. The starting value T is 200, and 195 is considered to be the largest value of t that reflects an accurate estimate of a_i'(t). That is to say, the procedure is given 80 Δt steps to "settle down": to counterbalance the inaccuracies inherent in the initial polynomial approximation. Since for W1 = 2, t_m = 68.3, t_m + 127 was chosen in order to make its value no larger than 195. If the starting value had been chosen large enough, we would see the ratio for W1 = 1 surpass the ratio for W1 = 2.

Table 6.b

Problem: (.01,W1,0) in n = 1 dimension
Procedure: Δt = Δr² = .25², implicit, ε = 1/3, starting value T = 200
W1 = 2, 1, .5, .25

t_m and the ratio (a1'(t) - a0'(t))/AWB(t) at t_m + 5 and t_m + 127:

               t_m      t_m + 5    t_m + 127
W1 = 2         68.3      .988        .993
W1 = 1         17.1      .890        .982
W1 = .5         4.3      .709        .984
W1 = .25        1.1      .649        .991

The question of convergence has been deferred from Section 5. There are two types of convergence to be considered. One is the convergence of the values H^b(r_j,t) to the optimal H(r_j,t) at grid points common to all meshes, and the other is of the approximations a_i'(t) to the optimal boundaries. The convergence rate (in L∞ norm) of an explicit finite difference solution with Δt/Δr² = 1/3 in a fixed boundary case (necessarily convergence at grid points) to a function satisfying the heat equation H_2 = ½ H_11 with an analytic initial function (certainly H^B(r,T) as a function of r is analytic) is O(Δt²) = O(Δr⁴). The corresponding rate for any other value of the ratio Δt/Δr² (but ≤ 1 to insure stability) is O(Δt). Thus in the problem at hand, the most that can be hoped for is O(Δt²), and the best chance of that happening is in the one-dimensional case. Table 6.c shows values of a) H^b(r_j,t) at selected grid points common to all meshes, b) a_i'(t) at selected values of t, and c) T0' for the explicit scheme outlined below.
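Before turning to the table itself, note that three values of the same quantity computed with meshes Δr, Δr/2 and Δr/4 determine both an empirical order of convergence and a Richardson-type extrapolated value, c + (c - b)/(r - 1), which is the same estimate employed later in this section. A minimal sketch, under the assumption of a smooth error expansion in Δr (so that the ratio of successive differences exceeds one):

import math

def observed_order_and_extrapolate(a, b, c):
    # a, b, c: values computed with mesh sizes dr, dr/2, dr/4 at one grid point,
    # assuming error ~ C * dr**p (so the ratio is ~ 2**p and exceeds one)
    ratio = (a - b) / (b - c)
    p = math.log(abs(ratio), 2.0)                 # empirical order of convergence
    extrapolated = c + (c - b) / (ratio - 1.0)
    return ratio, p, extrapolated

# a ratio near 16 corresponds to the O(dt**2) = O(dr**4) rate discussed above;
# ratios near 2 or 4 correspond to first or second order behaviour.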
The first three 51 values correspond to Ar = .5,.25,.125 respectively, the next two to successive differences, and the number offset to the right to the ratio of the first difference to the second. The first difference may subtract the first number from the second, or visa versa, but whichever way is followed through for the second difference. Table 6.c Problem: (.01,1,0) in n = 1 dimension Procedure: Explicit, A£§'=‘% Ar Format: value when Ar = .5 Ar = .5,.25,.125 value when Ar = .25 Starting value T = 100 value when Ar = .125 first difference . ratio of differences second difference a6(60) Hb(10,60) Hb(12,60) Hb(14,60) ai<60) 9.405498 1.78500146 2.16200199 2.594289809 14.004402 9.405357 499944 199828 771 564 9.405344 33 02 66 82 .000141 .00000202 .00000371 .000000038 .000162 .000013 11 11 19 26 14 s 7 18 9 I b b b , a0(20) H (3,20) H (5,20) H (7,20) a1(20) 2.342487 1.03437318 1.26730696 1.56816093 7.672784 1974 6641 29513 3728 4108 875 597 437 502 423 .000513 .00000577 .00001183 .00002365 .001324 99 5 44 13 76 16 226 10 315 4 Hb(0,5) Hb(l,5) Hb(2,5) ai(5) 1.05880985 1.07461814 1.12015429 2.839361 70206 54217 39970 795260 69505 3754 41125 0645 .00010779 .00007597 .00024541 .044101 701 15 464 16 1155 21 4615 10 T6 Hb(0,3.5) Hb(.5,3.5) ai(3.5) 10.225519 1.08088505 1.08313990 1.243785 53206 106705 59393 0.778694 8385 9361 3347 0.558537 .027687 5 .00018200 7 .00045403 -8 .465091 2 5179 2656 - 5954 .220157 52 Considering first of all convergence at grid points, if the rate of convergence were 0(At2) = 0(Ar4), the absolute value of the ratio of successive differences would be about 16. It should be noted that defining convergence in terms of L0 norm would seem to make more sense in the fixed boundary problem than in the present free boundary problem. It does not seem to be clear from the computations whether the fastest convergence (in absolute value) would occur near the boundaries or near the middle of the continuation region. And in fact, there may be a difference in the boundaries. It is possible that convergence is more meaningfully defined in terms of the average ratio or the minimum ratio. Without defining convergence, ratios of successive differences are displayed in Table 6.d for a one-dimensional problem, and three different procedures. Table 6.d Problem: (.01,l.0) in n = 1 dimensions Procedure: Ar = .5,.25,.125 * Format: ratio for AEE = %, explicit Starting value T = 50 Ar ratio for OE? = %, explicit Ar ratio for At = Arz, e = % b a6(20) H (3,20) Hb(5,20) Hb(7,20) ai(20) 5 13 17 10 4 5 2 3 2 10 ll 20 17 11 4 b b b . H (0,5) H (1,5) H (2,5) a1(5) 15 16 21 10 15 11 -l 11 19 14 6 1 T6 Hb(0,3.5) Hb(.5,3.5) ai(3.5) 5 3 5 2 7 3 2 l 6 8 -22 2 * Different starting value than Table 6.c 53 Two things should be pointed out. First of all, the ratio is not consistent in the sense that three different values (like Ar = .25,.125,.0625) would give substantially different ratios at some grid points. The difference could probably not be explained SOlelYby round off error. If the ratio were consistent, it should be greater than one in absolute value to insure convergence. Secondly, the ratio of successive differences for Ar = .5,.25,.125, as shown in these tables does not necessarily provide the criterion for the best procedure (if the ratio is not consistent), for all three values using one procedure may be closer to the true value than the corresponding values for another procedure -- even though the ratio for the first set is smaller. 
In higher dimensions, as would be suspected, the convergence rate, however it might be defined, becomes progressively slower. Table 6.e gives the ratio at selected points for a given procedure when n = 1 and 10 dimensions. _‘J‘I Arm-1 -. .F‘. -v- I 54 Table 6.e Problem: (.01,1,0) in n = 1,10 dimensions Procedure: At = Ar2 Format: ratio for n = 1 Ar = .5,.25,.125 ratio for n = 10 e ='% before crossing e ='% after if n = l e — 1 after if n = 10 Starting value T = 100 86(m) Hb(10,60) Hb(12,60) Hb(14,60) ai(60) n = l 11 19 14 7 9 Hb(34,60) Hb(36,60) Hb(38,60) n = 10 7 -5 -3 -9 8 a6(20) Hb(3,20) Hb(5,20) Hb(7,20) ai(20) n = l 5 13 16 10 4 Hb(13,20) Hb(15,20) Hb(17,20) n = 10 -25 -1 -2 -9 1 Hb(0,5) Hb(1,5) Hb(2,5) a{(5) n = 1 15 16 21 10 n = 10 3 3 2 2 If the values corresponding to Ar = .5,.25,.125 (or any other set of three numbers, the last two each one half of the pre- ceding) are monotone, there is an obvious estimate of the true value. Call the values corresponding to Ar = .5,.25 and .125; a, b, and c respectively. Suppose the ratio of successive differences to be r > 1. Then an estimate of the true value is Sc - b) true value = c + r - l O O I t I It was stated in Section 5 that as the ratio A—2 increases Ar beyond one, the results become increasingly less reliable, whether due to instability or round off error. Table 6.f presents data for three different schemes, the last having ratio 2. e is chosen optimally in all three instances. Table 6.f Problem: (.01,l,0) in n = l dimension Starting value T = 50 Hb(3,20) Hb(5,20) Hb(7,20) .5 1.0343739 1.2673100 1.5681627 At = % Ar2, e = 0, Ar = .25 672 2982 391 .125 67 75 68 .5 1.0343774 1.2673118 1.5681637 ’3 At = Ar2, e = %3 Ar = .25 672 2983 390 A .125 67 75 68 .5 ---- ---- ---- * , At = 2Ar2, e = %5, Ar = .25 1.034340 1.267282 1.568196 2 .125 64 95 39 * For At = .5 and Ar = .5, ai(20) = 2.901, so grid points (3,20), (5,20), and (7,20) were not in continuation region. (Indicating extensive round off error.) This table indicates that the results of the scheme with At = 2Ar2 are unsatisfactory. At the grid points (3,20), (5,20), and (7,20) both of the first two schemes would estimate the true values to be 1.034366, 1.267297, and 1.568136 to seven places, employing the above estimation procedure. For Ar = .25 and At = .125 the last scheme gives values which are further away from the three estimated values at two of three grid points than the second scheme with Ar = .5 and At = .25. At the other point the values are essentially equally far away. The same thing happens (two values further away and one equally far away) when the last scheme with Ar = .125 and At = .03125 is compared with the second scheme with Ar = .25 and At = .0625. This just should not happen, and it appears that round off error, if not stability 56 in this case, is affected by the ratio of At to Arz, even when 3 is chosen optimally in terms of truncation error. Other programs indicate stability problems when the ratio exceeds one. Nothing will be said about convergence of the boundary approximations a;(t) except to say its rate appears to be slower in n = 1 dimensions, but faster in higher dimensions than the con- vergence rate at grid points. Graph 6.a plots three sets of boundaries for the (c,1,0) problem in one dimension. The set of unmarked lines correspond to c = .02, those marked by +'s to c = .01, and those marked by 1's to c = .005. 
The procedure used for this graph is implicit with At = Ar2 = .252 and e ='% - AEE'= %3 however, only major changes in the procedure would be detected for plotting purposes. Con- sequently we will omit the procedure in describing following graphs. We note that for c = .005 it appears that the upper boundary comes in to r = 0 at t = 0, which may indicate that T1 is in fact 0. Evidence from solutions in both the constant and squared loss problems strongly substantiates this uncomfortable possibility. The slope of the indifference function t E_ _ r'(t) = n + (n1n(§; - 21nw1) = n + (n1n(2n) 21nW1) I 2t(1(—t)-21nW)% 21"” { n n 2“ 1 j I is infinite at t = tz, and although it can't be shown by the author, the upper boundary appears to come in to zero more sharply than the indifference function for all values of the parameters and all dimensions. Also the lower boundary comes in less abruptly. That is, for 6 positive and sufficiently small 57 81(T1 + e) > r1(tz + e) > 80(T0 + c) There is reason to believe 56(Tb) is finite. Here are two polynomial functions, A9 even and A? satisfying the same partial differentiation equation as H8, such that both functions and first derivatives match on r = s(t) t. 3r2(t-l) - 3:2 - t3 A° AB(r,t) = 2(r3 - 3rt) 2t3 - 6t2 6(t2 - t) A°(s(c) ,t) AB ,t) A2 A‘i(s(t).c> This is not to imply that H8 can necessarily be expanded in a polynomial around (O,T6), let alone one containing odd terms. Graph 6.b shows the results for the (c,1,0) problem in one dimension, with c = .005, .0025, and .00125. Therefore the narrowest set of boundaries in Graph 6.b correspond to the widest set of boundaries in Graph 6.a. Graph 6.c plots sets of boundaries corresponding to W1 = 2,1, and .5 for the (.01,W1,0) problem in one dimension. As W1 becomes large (or c becomes large) the upper boundary comes in very closely to tz. This graph is slightly misleading in that the boundaries would have to be shown for larger values of t in order to see that the width of the continuation region is asymptotically larger for w1 = 2 than for W1 = l. (The more costly the errors, the more worthwhile the sampling.) Graph 6.d shows the effect of increasing dimension. Bound- aries are plotted for the (.01,1,0) problem in 1,2, and 3 dimensions. Graph 6.e is similar to 6.a except that n = 10. 58 0.xax» w..u. A, q. H. .. us an pa.um “n at "u.u_ “u... b .38. HuM2.m,_ Du. ~Hz.mow. . :4, .mm.nu u no mosau> «easy Hun haw wo.n~ n Eu van mN.@ I u “80 nHOIaNOO I U ceaQCuaav H I a nu Ao.~.ov "Sounoum -.w sauna l8.r [8.9— [3.2 18.: I”...— {8.2 [8.3 QIXU‘U MX6’17309 RUBIN 559 , cu. .. ,u_ - 0:— r.9~n ...u....— . Cu “2.1m. Eu. Luz. B 0 mo nwsau> saucy Han new wo.na u u can w~.o awaoo..m~co..noo. u u codmcoafiv a I a nu Ao.~.uv "Evapoum n.o guano 18 yveji,.nvo_,,m53.uufiw at FT .1. C tr. 0 N I u rug. '8. TB. ‘8. lit- .uu QIXU-H MH8/ 923 RUBIN Hi Cl. .7.‘ u; .a. van *8 .J 9 u I Q') '- rt H «4 I! I) 7 t O I J 60 m3“ wu.m..~.mn:._.~nz.fio.uu 4b.... nw.¢ nm.~ m. mo.aa mN.e H um.mo ma.m~ N Bu Nu H3 m..~.~ u H3 ceamcoewv a I c cw Ao.~3.~o.v ”Emanoum u.o guano C]XB*H MKBQBBB RUBIN 61 Ill l.‘ [I mcowmcmlwp m was .N.~ n s cm Ao.H.~o.v e.e segue ! .5. E c mo mozau> wmuzu aaa you mo.na u u "Eo~noum ‘1. 7 l: \ l can 1. . ~fi#. I}. u I It mm.o u u T8.9~ SJXU“8 MB/ L/ 98.9 RUB / N 62 j I, I l . l u . l . ,- . ‘p l. II I: n ‘1 4 I o \ , x. I l . . _ I . .u( .x . . . If in. fix a . [K .\ ., l\ E o No mosaa> oops» and u0u wo.- a u moo..~o..~o. u o acoameweae on u e as Ao.a.oV ”annoum «.0 cacao \f. 
SECTION 7

In this section the squared loss problem is investigated. Graphs will be displayed in the same order as in Section 6.

For the (c,W_1,2) problem in n dimensions, the loss factor under the alternative is

    W_1 E[||μ||^2 | X_t = x, μ ≠ 0] = W_1 Σ_{L=0}^{1} C_n(L,1) r^{2L}/t^{1+L} = W_1 (n/t + r^2/t^2),

and r_I(t) is the solution of the equation

    W_1 U(r,t) (n/t + r^2/t^2) = 1    for t > t_z = (nW_1)^{2/(n+2)} (2π)^{n/(n+2)}.

(A short numerical check of this expression for t_z is given at the end of this section.) So

    A_WB(t) = 2ct / (1 + U(r_I(t),t))
            = 2ct W_1(n/t + r_I(t)^2/t^2) / (1 + W_1(n/t + r_I(t)^2/t^2)).

However this does not provide an upper bound for the width of the continuation region, and in fact it is not nearly as good as the corresponding constant loss approximation. However, by substituting for E||μ||^2 = n/t + r^2/t^2 the quantity E||μ||^3 / E||μ||, with the numerator approximated by expanding ||μ||^3 around ||v||^2 and the denominator replaced by its leading term r/t, we obtain a better approximation which does in fact serve as an upper bound in all the situations encountered by the author.

Let v_i = x_i/t. Then μ_i = v_i + Y_i, with the Y_i i.i.d. N(0, 1/t) under the posterior, so

    ||μ||^3 = (Σ (v_i + Y_i)^2)^{3/2} = (||v||^2 + 2 Σ v_i Y_i + Σ Y_i^2)^{3/2}.

Formally expanding around ||v||^2 and taking expectations term by term, we get

    E||μ||^3 = ||v||^3 + (3/2)(n+1) ||v||/t + O(1/t^2).

Evaluating at r = r_I(t), so that ||v|| = r_I(t)/t, the resulting approximation,

    a_1(t) - a_0(t) ≈ 2ct W_1 [(r_I^3/t^3 + (3/2)(n+1) r_I/t^2)/(r_I/t)]
                      / {1 + W_1 [(r_I^3/t^3 + (3/2)(n+1) r_I/t^2)/(r_I/t)]},    r_I = r_I(t),

has properties similar to the corresponding constant loss approximation in Section 6, except that this one does not initially get worse with increasing n. Eventually (in n), however, it does. As an approximation which possibly serves as an upper bound in n dimensions for the general (c,W_1,k) problem, the analogous expression with E||μ||^{k+1}/E||μ|| in place of E||μ||^3/E||μ|| is suggested.

In Section 6 it was shown that r_I'(t_z) is infinite (for k = 0), and there r_I(t) was given explicitly. For general k the slope of the indifference function can be expressed as a function of r_I(t) itself and shown to be infinite at t_z; r_I(t) depends on W_1, k, and n, but not on c.

Graph 7.a. Problem: (c,1,2) in n = 1 dimension; c = .02, .01, .005.
Graph 7.b. Problem: (c,1,2) in n = 1 dimension; c = .005, .0025, .00125.
Graph 7.c. Problem: (.01,W_1,2) in n = 1 dimension; W_1 = 2, 1, .5.
Graph 7.d. Problem: (.01,1,2) in n = 1, 2, and 3 dimensions.
Graph 7.e. Problem: (c,1,2) in n = 10 dimensions; c = .02, .01, .005.
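As the check referred to above (a sketch added here, not one of the thesis programs), the following Fortran fragment verifies numerically that t_z = (nW_1)^{2/(n+2)} (2π)^{n/(n+2)} solves the squared loss indifference equation at r = 0. It assumes that U(r,t) is the posterior odds factor (2π/t)^{n/2} e^{r^2/(2t)} of the normalized problem; the program name and output format are arbitrary.

      PROGRAM TZCHK
C     ILLUSTRATIVE SKETCH ONLY: CHECK THAT
C        TZ = (N*W1)**(2/(N+2)) * (2*PI)**(N/(N+2))
C     SATISFIES  W1*(2*PI/TZ)**(N/2)*(N/TZ) = 1, I.E. THE
C     SQUARED LOSS INDIFFERENCE EQUATION AT R = 0.
      DOUBLE PRECISION PI, W1, DN, TZ, CHECK
      INTEGER N
      PI = 3.14159265358979D0
      W1 = 1.D0
      DO 20 N = 1, 10
         DN = N
         TZ = (DN*W1)**(2.D0/(DN+2.D0)) * (2.D0*PI)**(DN/(DN+2.D0))
         CHECK = W1*(2.D0*PI/TZ)**(DN/2.D0)*(DN/TZ)
         WRITE(6,10) N, TZ, CHECK
   20 CONTINUE
   10 FORMAT(9X,2HN=,I3,3X,3HTZ=,F10.5,3X,6HCHECK=,F10.6)
      END

The CHECK column should print 1.000000 for every n; a deviation would indicate an error in the expression for t_z.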
BIBLIOGRAPHY

[1] Rubin, H. and Sethuraman, J. (1965), "Bayes Risk Efficiency," Sankhya, Ser. A, 27, 347-356.

[2] Chernoff, H. (1961), "Sequential Tests for the Mean of a Normal Distribution," Proc. Fourth Berkeley Symp. Math. Statist. Prob., 1, 79-91.

[3] Richtmyer, R.D. (1957), "Difference Methods for Initial-Value Problems," Interscience Publishers, Inc., New York.

[4] Lehmann, E.L. (1959), "Testing Statistical Hypotheses," John Wiley and Sons, Inc., London.

[5] Bickel, P.J. and Yahav, J., "On the Wiener Process Approximation to Bayesian Sequential Testing Problems," to appear in Proc. Sixth Berkeley Symp. Math. Statist. Prob. (held in 1970).

[6] Lindley, D.V. (1960), "The Use of Prior Probability Distributions in Statistical Inference and Decisions," Proc. Fourth Berkeley Symp. Math. Statist. Prob., 1, 453-468.

[7] Doob, J.L. (1955), "A Probability Approach to the Heat Equation," Trans. Amer. Math. Soc., 80, 216-280.

[8] Marchand, J. (1962), "Distributions - An Outline," Interscience Publishers, Inc., New York.

[9] Treves, F. (1967), "Topological Vector Spaces, Distributions and Kernels," Academic Press, New York.

[10] Loève, M. (1960), "Probability Theory," D. Van Nostrand Company, Inc., Princeton.

APPENDIX

Printout of the program employing the implicit procedure to handle the squared loss problem in any dimension.

      PROGRAM MAIN (INPUT,OUTPUT,TAPE5=INPUT,TAPE6=OUTPUT)
      DIMENSION ST(1000),CTF(1000)
      COMMON ID,NL,NLP,NR,T,S,V,H,DIM,D1,D3,D4,D6,D7,D8,XL,DX,TZ,
     1 CLA1,CLA2,CRA1,CRA2,BL,BR,FS,FV,FS1,FV1,FS11,FV11,
     2 GS12,GV12,DENS,DENV,CUV,CON,Z
C     THIS IS IMPLICIT SQUARED LOSS MULTI DIMENSIONAL
      DO 50 KS=1,3
      ID=10
      W1=1.
      C=.01
      WRITE(6,410) ID
  410 FORMAT(9X,8HSQ LS,N=,I3)
      WRITE(6,411) W1
  411 FORMAT(9X,3HW1=,F6.4)
      WRITE(6,412) C
  412 FORMAT(9X,2HC=,F6.5)
      KLOOK=8
      KKST=10
      DIM=ID
      DS=DIM-1.
      D1=DS*DS
      D3=D1+DS
      D4=DIM/2.+1.
      D6=DIM+2.
      D7=DIM+6.
      D8=D6+4.
      TZ=(6.28318530**(DIM/D6))*((W1*DIM)**(1./D4))
      WRITE(6,1) TZ
      T=200.
      CALL PLY(W1)
      DX=2.*(.5**KS)
      D1DX=D1*DX
      DX2=DX+DX
      DXDX=DX*DX
      DELT=DXDX
      H=DELT/DXDX
      G=.5-1./(12.*H)
      CALL COEFF(G,DX)
      WRITE(6,413) DX
  413 FORMAT(9X,3HDX=,F6.4)
      WRITE(6,414) DELT
  414 FORMAT(9X,5HDELT=,F8.6)
      WRITE(6,415) G
  415 FORMAT(9X,15HBEFORE CROSS G=,F8.6)
      KKF=20*(2**(KS-1))
      J=
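The weighting G = .5 - 1./(12.*H) in the listing, with H = Δt/Δr², is ε = ½ - Δr²/(12Δt), the standard choice that minimizes the truncation error of the weighted difference scheme (the sense in which ε was said to be chosen optimally). As a small illustration (a sketch added here, not part of the thesis program; the program name and output format are arbitrary), the following fragment evaluates this ε for the three schemes of Table 6.f.

      PROGRAM EPSOPT
C     ILLUSTRATIVE SKETCH ONLY: TRUNCATION-ERROR-OPTIMAL WEIGHT
C        EPS = 0.5 - DR**2/(12*DT)
C     FOR THE THREE SCHEMES OF TABLE 6.F.
      DOUBLE PRECISION DR, DT, H, EPS
      INTEGER I
      DR = 0.25D0
      DO 20 I = 1, 3
         IF (I .EQ. 1) DT = DR*DR/6.D0
         IF (I .EQ. 2) DT = DR*DR
         IF (I .EQ. 3) DT = 2.D0*DR*DR
         H = DT/(DR*DR)
         EPS = 0.5D0 - 1.D0/(12.D0*H)
         WRITE(6,10) DT, H, EPS
   20 CONTINUE
   10 FORMAT(9X,3HDT=,F9.6,3X,2HH=,F8.4,3X,4HEPS=,F8.5)
      END

The printed values of EPS are 0, .41667 (= 5/12), and .45833 (= 11/24); in particular, at Δt = Δr²/6 the optimal weighting reduces to the purely explicit scheme.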