This is to certify that the dissertation entitled ON THE STRUCTURE OF GERM-FIELD MARKOV PROCESSES ON FINITE INTERVALS presented by Einollah Pasha has been accepted towards fulfillment of the requirements for the Ph.D. degree in Statistics. Major professor. Date: 4/9/82.

ON THE STRUCTURE OF GERM-FIELD MARKOV PROCESSES ON FINITE INTERVALS

By

Einollah Pasha

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Statistics and Probability

1982

ABSTRACT

For Hilbert-space-valued Gaussian processes we derive a representation of the processes having the Germ Field Markov Property (GFMP) [9] on finite intervals. We also study the case where the germ field is generated by a family of independent Gaussian random variables. In the case where the generating family is finite, these processes are said to be N-ple reciprocal processes, and we give an explicit representation of them in terms of N-ple Markov processes. In a special case these processes coincide with the reciprocal processes introduced by Jamison [5].

To my parents and my family

ACKNOWLEDGEMENTS

I wish to thank Dr. V. S. Mandrekar for his guidance and encouragement during the preparation of this thesis. Also, I would like to thank Professor H. Salehi for his critical reading of this thesis, and Professors C. Shapiro and S. Chow for serving on my guidance committee. Special thanks go to Mrs. Clara Hanna for her excellent typing of the manuscript.

TABLE OF CONTENTS

INTRODUCTION

Chapter 1. MARKOV PROPERTY
1.1. Conditional independence
1.2. Markov property
1.3. Germ field Markov property
1.4. Operator-valued processes
1.5. Reciprocal processes
1.6. Gaussian stationary reciprocal processes

Chapter 2. N-PLE MARKOV PROCESSES AND N-PLE RECIPROCAL PROCESSES
2.1. N-ple Markov processes
2.2. N-ple reciprocal processes
2.3. HSO-valued N-ple Markov and N-ple reciprocal processes

Chapter 3. INFINITE ORDER MARKOV PROCESSES

BIBLIOGRAPHY

INTRODUCTION

This work studies, for Hilbert-space-valued Gaussian processes, the Markov property on the family of finite intervals. In view of the example in [9], such processes need not have the Markov property on semi-infinite intervals. We show how these processes are related to processes with the Markov property on semi-infinite intervals. This allows us to obtain a structural characterization of such processes. This characterization allows us, for example, to say when the solution of a stochastic differential equation having white-noise input, with linearly independent boundary conditions, is Markov, giving the main result of [14]. This is derived from the result on the finite-order Markov property introduced here. Under the assumption of the existence of $N-1$ quadratic-mean derivatives, one can show that these are precisely the N-ple Markov processes introduced by Doob [2]. Our representations are motivated by the previous work in [7], [8] and have a similar form. This work also constitutes an alternative attack on the reciprocal processes introduced by Jamison [5].
In fact, our work gives an explicit representation for what one may call "N-ple reciprocal processes" ($N = 1$ being Jamison's case). Thus this work extends the work in [5]. In addition, we also study the $q$-variate case ($q \le \infty$). Here the techniques used are from [7], [8]. This part of the work solves in complete generality the question raised by Jamison [5]. In the stationary case, the result of Jamison [5] can be derived.

Finally, we study infinite-order Markov processes. Here our work is in some sense incomplete. However, this part of the study raises some questions about the relationship of these processes to so-called T-positive processes. This part will be the subject of continuing investigation.

For the convenience of the reader we now describe the results according to chapters. In Chapter 1, after a brief review of conditional independence, the Germ Field Markov Property (GFMP) and the Markov property (M.P.), operator-valued stochastic processes are studied in detail and a representation for reciprocal processes is given, Theorem (1.5.15). In the special case of differentiable reciprocal processes it is shown that these are exactly the solutions of a linear differential equation of a certain type with boundary values, Theorem (1.5.21). Finally in this chapter the form of the covariance function of a stationary real-valued reciprocal Gaussian process is obtained. In Chapter 2, the notions of (generalized) N-ple Markov processes and N-ple reciprocal processes are introduced, and a representation for N-ple Markov processes in the general form of Hilbert-Schmidt operator (HSO)-valued processes is given, Theorems (2.1.4) and (2.3.3). The relation between N-ple Markov processes and N-ple reciprocal processes is given in Theorems (2.2.6) and (2.3.7). The notion of infinite-order Markov processes is introduced in Chapter 3, and some properties of this kind of process are discussed. A representation for infinite-order Markov processes and their T-positivity is of interest.

CHAPTER 1

MARKOV PROPERTY

Let $(\Omega, F, P)$ be a probability space and $X = \{X_t,\ t \in T\}$ a stochastic process on $(\Omega, F, P)$, where $T$ is a topological space. In order to give a definition of the Markov property we need the idea of conditional independence and some of its basic consequences.

1.1. Conditional independence [6], [9]. Let $F_1$, $F_2$ and $G$ be sub-$\sigma$-fields of $F$. We denote by $F_1 \perp F_2 \mid G$ the conditional independence of $F_1$ and $F_2$ given $G$; it means that

$$P(A_1 A_2 \mid G) = P(A_1 \mid G)\, P(A_2 \mid G)$$

for all $F_i$-measurable sets $A_i$, $i = 1, 2$. We have the following basic results on conditional independence.

1.1.1. Lemma [9]. If $F_1 \perp F_2 \mid G$, then:
(a) for every $A$ satisfying $G \subset A \subset G \vee F_2$, we have $F_1 \perp F_2 \mid A$;
(b) for every $B$ satisfying $B \subset G \vee F_2$, we have $F_1 \perp B \mid G$.

1.2. Markov property. Let $A$ be a subset of $T$ with closure $\bar A$ and boundary $\partial A$. Let

$$F_A^- = \sigma\{X_t : t \in \bar A\} \quad \text{("past")}, \qquad F_A^+ = \sigma\{X_t : t \notin A\} \quad \text{("future")}, \qquad P_A = \sigma\{X_t : t \in \partial A\} \quad \text{("present")}.$$

1.2.1. Definition. We say that $X = \{X_t,\ t \in T\}$ has the Markov property (M.P.) on $A$ if $F_A^- \perp F_A^+ \mid P_A$.

The classical Markov processes are the ones with $T = \mathbb{R}$ having the Markov property on the sets of the form $A_t = (-\infty, t]$, the present being given by $\sigma\{X_t\}$, $t \in \mathbb{R}$. In the following we discuss a generalization of this definition.

1.3. Germ field Markov property [9]. As above, let $(\Omega, F, P)$ be the probability space and $X = \{X_t,\ t \in T\}$ the stochastic process, with $T$ a topological space. For an open set $O$ let $F_O = \sigma\{X_t : t \in O\}$, and for a closed subset $C$ of $T$ define the germ field

$$\Sigma_C = \bigcap_{O \supset C,\ O\ \mathrm{open}} F_O.$$
1.3.1. Definition. We say that $X = \{X_t : t \in T\}$ has the germ field Markov property (GFMP) on $A \subset T$ if

$$\Sigma_{\bar A} \perp \Sigma_{\overline{A^c}} \mid \Sigma_{\partial A}.$$

The germ field Markov property is weaker than the Markov property, in the sense that if a process has the Markov property on a set $A$ then it has the GFMP on $A$, but the converse may not be true [9]. In this direction, a stochastic process may have the GFMP (M.P.) on some particular subsets of $T$, such as open sets, but not on a larger class of subsets of $T$. The question is: when can we deduce the GFMP (M.P.) on some larger class from having the GFMP (M.P.) on a smaller one? We have the following answer to this question.

1.3.2. Proposition [9].
(a) If $X$ has the GFMP (M.P.) on disjoint open sets $O_i$, $i = 1, \ldots$, then it has the GFMP (M.P.) on the union $\bigcup_{i=1}^{\infty} O_i$.
(b) If $T$ is locally convex and $X$ has the GFMP (M.P.) on convex open sets, then it has the GFMP (M.P.) on all open sets.
(c) $X$ has the GFMP (M.P.) on all sets if it has the GFMP (M.P.) on all open sets.

1.3.3. Remark. As a result of this proposition we get that the classical Markov processes have the M.P. on all sets. To see this, by (1.3.2) it suffices to show that such a process has the M.P. on all bounded open intervals, in addition to the intervals of type $(-\infty, t]$, $t \in \mathbb{R}$. Let $s < t$ and $A = \sigma\{X_u : u \le s\}$, $B = \sigma\{X_u : u \ge t\}$, $G = \sigma\{X_u : s < u < t\}$. By the assumption and (1.1.1) we have

$$A \perp G \mid \sigma\{X_s, X_t\}, \qquad B \perp G \mid \sigma\{X_s, X_t\} \qquad \text{and} \qquad A \perp B \mid \sigma\{X_s, X_t\}.$$

We want to show that $A \vee B \perp G \mid \sigma\{X_s, X_t\}$. A typical generating element of $A \vee B$ is of the form $A \cap B$, where $A \in A$ and $B \in B$. So we want to show that

$$P(A \cap B \cap C \mid X_s, X_t) = P(A \cap B \mid X_s, X_t)\, P(C \mid X_s, X_t) \qquad \text{for all } A \in A,\ B \in B,\ C \in G.$$

But, using the conditional independences above,

$$P(A \cap B \cap C \mid X_s, X_t) = E(I_{A \cap B} I_C \mid X_s, X_t) = E[I_C\, E(I_{A \cap B} \mid X_s, X_t, I_C) \mid X_s, X_t] = E[I_C\, E(I_{A \cap B} \mid X_s, X_t) \mid X_s, X_t] = E(I_{A \cap B} \mid X_s, X_t)\, E(I_C \mid X_s, X_t) = P(A \cap B \mid X_s, X_t)\, P(C \mid X_s, X_t).$$

1.3.4. Remark. If we have the M.P. on bounded open intervals, then we have the M.P. on all bounded open sets, and obviously vice versa. This is the case because any bounded open set on the real line is a countable union of disjoint bounded open intervals. Having the M.P. on bounded open intervals in general will not imply the Markov property on all open sets (and consequently, having a classical Markov process). But under some conditions on $P$ and the triviality of the tail $\sigma$-field of the process, Ngoc and Royer [12] proved that the Markov property on all bounded open intervals implies that $X$ is a Markov process. The processes having the Markov property on bounded intervals were studied in [5] under the name "reciprocal processes". In the next sections we consider a representation for these processes in a very general setting.

1.4. Operator-valued processes. In [5] Jamison studied reciprocal processes taking values in $\mathbb{R}$ and asked whether his results are extendable to processes taking values in $\mathbb{R}^n$, at least in the Gaussian case. Given a Gaussian process $\{X_t,\ t \in T\}$ taking values in $\mathbb{R}^n$, we can consider the following (finite-dimensional) operator-valued process $\tilde X_t : \mathbb{R}^n \to L^2(\Omega, F, P)$:

$$\tilde X_t(h) = (X_t, h),$$

where $(\cdot,\cdot)$ is the inner product in $\mathbb{R}^n$ and $(\Omega, F, P)$ is the probability space on which the original process was defined. In the case of a Gaussian process $X_t$ taking values in a Hilbert space $H$, it is well known that $E\|X_t\|_H^2 < \infty$ for each $t \in T$. Thus the operator-valued process $\tilde X_t$ associated with $X_t$, given by $\tilde X_t(h) = (X_t, h)_H$, $h \in H$, has the additional property

$$\sum_i E|\tilde X_t(e_i)|^2 = \sum_i E|(X_t, e_i)|^2 = E\|X_t\|^2 < \infty,$$

where $\{e_i\}$ is an orthonormal basis of $H$. Therefore $\tilde X_t$ is a Hilbert-Schmidt operator from $H$ into $L^2(\Omega, F, P)$.
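A finite-dimensional numerical illustration may help fix ideas. The following sketch (our own example, not from the text; the covariance is a hypothetical random choice) checks for a mean-zero Gaussian vector in $\mathbb{R}^n$ that the squared Hilbert-Schmidt norm of the associated operator $h \mapsto (X, h)$ — the sum $\sum_i E|(X, e_i)|^2$ over an orthonormal basis — equals $E\|X\|^2$ and does not depend on the basis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A mean-zero Gaussian vector X in R^n with covariance S = L L^T.
L = rng.standard_normal((n, n))
S = L @ L.T

# E|(X, e)|^2 = e^T S e, so the squared HS norm of h -> (X, h) is
# sum_i e_i^T S e_i = tr(S) = E||X||^2 for ANY orthonormal basis {e_i}.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # a random orthonormal basis
hs_standard = sum(e @ S @ e for e in np.eye(n))
hs_rotated = sum(q @ S @ q for q in Q.T)

# Monte Carlo estimate of E||X||^2 for comparison.
X = rng.standard_normal((200_000, n)) @ L.T
print(hs_standard, hs_rotated, np.trace(S), (X**2).sum(axis=1).mean())
```

The agreement of all four numbers reflects Lemma (1.4.1) below: the Hilbert-Schmidt norm is independent of the choice of orthonormal basis.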
As the problems studied here are second-order, depending only on the Hilbertian properties of $H$ and $L^2(\Omega, F, P)$, we study them as problems involving two Hilbert spaces. Motivated by this, we define Hilbert-Schmidt operator-valued processes.

Let $H$ and $K$ be two separable Hilbert spaces with inner products $(\cdot,\cdot)_H$ and $(\cdot,\cdot)_K$ and norms $\|\cdot\|_H$ and $\|\cdot\|_K$, respectively. The set of all bounded linear operators from $H$ into $K$ is denoted by $B(H, K)$, and the dual spaces of $H$ and $K$ are denoted by $H^*$ and $K^*$, respectively. Before giving a definition of Hilbert-Schmidt operators we need the following lemma.

1.4.1. Lemma ([1], p. 256). Let $H$ and $K$ be two separable Hilbert spaces and $A \in B(H, K)$. If the series $\sum_{n=1}^{\infty} \|A e_n\|_K^2$ converges for an orthonormal basis $\{e_n\}$ of $H$, then its value equals $\sum_{n,m} |(A e_n, f_m)|^2$, no matter which orthonormal bases $\{e_n\}$ of $H$ and $\{f_m\}$ of $K$ are chosen.

Now we are in a position to give the definition of a Hilbert-Schmidt operator.

1.4.2. Definition. An element $A$ of $B(H, K)$ is called a Hilbert-Schmidt operator (HSO) if

$$\|A\|^2 = \sum_{n=1}^{\infty} \|A e_n\|_K^2$$

converges for at least one orthonormal basis $\{e_n\}$ of $H$. The set of all HSO's from $H$ into $K$ is denoted by $HS(H, K)$, and it can be considered as a Hilbert space with the inner product given by

$$(A, B)_{HS} = \operatorname{tr} B^* A = \sum_i (B^* A e_i, e_i), \qquad A, B \in HS(H, K),$$

where $\{e_i\}$ is an orthonormal basis of $H$ and $B^*$ is the adjoint of $B$.

1.4.3. Remark. The space $HS(H, K)$ is a module of operators over $B(H, H)$, and in this view a subspace $M$ of $HS(H, K)$ is a subset of $HS(H, K)$ which is a left module of operators over $B(H, H)$; that is, (i) $M$ is a (sub-)Hilbert space, and (ii) for each $B$ in $B(H, H)$ and $A$ in $M$, $AB$ is in $M$. For a subspace $M$ of $HS(H, K)$ we denote by $\bar M$ the subspace of $K$ generated by the images of the elements of $M$:

$$\bar M = \overline{\mathrm{sp}}\{A(h) : h \in H,\ A \in M\}.$$

Let $M$ be a subspace of $HS(H, K)$ and $A$ in $HS(H, K)$, and consider the operator $B$ from $H$ into $K$ given by

$$B(h) = P_{\bar M} A(h), \qquad h \in H,$$

where $P_{\bar M}$ is the orthogonal projection onto $\bar M$. For an orthonormal basis $\{e_i\}$ of $H$, by the properties of projections we have

$$\sum_i \|B e_i\|^2 = \sum_i \|P_{\bar M} A e_i\|^2 \le \sum_i \|P_{\bar M}\|^2 \|A e_i\|^2 \le \|A\|^2 < \infty;$$

that is, $B$ is an HSO from $H$ into $\bar M$. Thus in this way we have associated to each $A$ in $HS(H, K)$ an element $B$ in $HS(H, \bar M)$; more precisely, we have the following map $P$:

$$P : HS(H, K) \to HS(H, \bar M), \qquad P(A)(h) = P_{\bar M} A(h) \quad \text{for each } A \in HS(H, K),\ h \in H.$$

We observe that $P$ has the properties of a projection operator: $P$ is linear and $P^2 = P$. For the linearity of $P$, let $A, B \in HS(H, K)$ and $u, v \in B(H, H)$; then for $h \in H$,

$$P(Au + Bv)(h) = P_{\bar M}(Au + Bv)(h) = P_{\bar M} Au(h) + P_{\bar M} Bv(h) = P(A)u(h) + P(B)v(h),$$

so $P(Au + Bv) = P(A)u + P(B)v$. To see the other property, let $A \in HS(H, K)$; we have, for $h \in H$,

$$P^2(A)(h) = P(P(A))(h) = P_{\bar M} P(A)(h) = P_{\bar M} P_{\bar M} A(h) = P_{\bar M} A(h) = P(A)(h).$$

We note that for $A$ in $HS(H, \bar M)$, $P(A)(h) = P_{\bar M} A(h) = A(h)$ for each $h$ in $H$; thus $P(A) = A$. Motivated by these properties we have the following definition.

1.4.4. Definition (Payen [13]). Let $M$ be a subspace of $HS(H, K)$ and $\bar M$ the subspace of $K$ generated by the images of the elements of $M$. For $A$ in $HS(H, K)$, the projection $(A \mid M)$ of $A$ onto $M$ is the HSO in $HS(H, \bar M)$ given by

$$(A \mid M)(h) = P_{\bar M} A(h) \qquad \text{for each } h \in H.$$

If $N \subset HS(H, K)$, then $(N \mid M) = \{(A \mid M) : A \in N\}$.

The following is a collection of basic properties of these projections; indeed we show that $(\cdot \mid M)$ is an orthogonal projection. Let us first give a definition of orthogonality.

1.4.5. Definition. Let $A$ and $B$ be in $HS(H, K)$. We say $A$ and $B$ are orthogonal ($A \perp B$) if

$$(A, B)_{HS} = \operatorname{tr} B^* A = 0.$$
We say two subsets $M$ and $N$ of $HS(H, K)$ are orthogonal ($M \perp N$) if $A \perp B$ for all $A$ in $M$ and $B$ in $N$. From the definition we note that $A \perp B$ if and only if $B^* A = 0$.

1.4.6. Properties of the projections. Let $M$ be a subspace of $HS(H, K)$. Then:
(a) $(Au \mid M) = (A \mid M)u$ for $A \in HS(H, K)$, $u \in B(H, H)$;
(b) $(A \mid M) = A$ for $A \in M$;
(c) if $N$ is a subspace of $HS(H, K)$ containing $M$, then $((A \mid M) \mid N) = ((A \mid N) \mid M) = (A \mid M)$ for $A \in HS(H, K)$;
(d) if $N$ and $M$ are two closed orthogonal subspaces of $HS(H, K)$, then $(A \mid M \oplus N) = (A \mid M) + (A \mid N)$, and consequently $(A \mid M \ominus M') = (A \mid M) - (A \mid M')$ for $M'$ a closed subspace of $M$;
(e) $A - (A \mid M) \perp M$.

Proof. (a)-(e) are direct consequences of the definition and of the properties of the orthogonal projections on the subspaces of $K$. We give a precise proof for (e). Let $B = (A \mid M)$, $C \in M$ and $h \in H$; then

$$C^*(A - B)(h) = C^*(A(h) - P_{\bar M} A(h)),$$

where $\bar M$, as usual, is the subspace of $K$ generated by the images of the elements of $M$. In order to show that $C^*(A(h) - P_{\bar M} A(h))$, as an element of $H$, is $0$, we show that it is orthogonal to every element $x$ of $H$:

$$(x, C^*(A(h) - P_{\bar M} A(h)))_H = (Cx, A(h) - P_{\bar M} A(h))_K;$$

but $A(h) - P_{\bar M} A(h)$ is orthogonal to $\bar M$, in particular to $Cx$; thus $(x, C^*(A(h) - P_{\bar M} A(h)))_H = 0$ for all $x$ in $H$. This implies that $C^*(A - B)(h) = 0$ for all $h$ in $H$; therefore $A - B \perp C$.

The interesting subspaces are the ones generated by a family of operators in $HS(H, K)$. Let $\{X_t\}_{t \in I}$ ($I$ an index set) be a family of HSO's. Denote by $M_X$ the closure of the set $\{\sum_{t \in J} X_t B_t : J$ a finite subset of $I$, $B_t \in B(H, H)\}$ under the norm $\|\cdot\|_{HS}$. Let $\bar M_X$ be the subspace of $K$ generated by the images of the elements of the family $\{X_t\}_{t \in I}$. Now we have the following.

1.4.7. Theorem [11]. $HS(H, \bar M_X) = M_X$, where $M_X$ and $\bar M_X$ are as above.

Proof [11]. Let $Z \in HS(H, \bar M_X)$ and let $\{e_i\}$ be a complete orthonormal basis of $H$. Since $\sum_i \|Z e_i\|^2 < \infty$, for a given $\epsilon > 0$ there exists an integer $N$ such that

(1.4.8) $\qquad \sum_{i=N+1}^{\infty} \|Z e_i\|^2 < \frac{\epsilon}{2}.$

Let $Z_N = Z P_N$, where $P_N$ is the projection onto the subspace of $H$ generated by $e_1, \ldots, e_N$. Clearly $Z_N \in HS(H, \bar M_X)$; therefore (by (2), page 335 of [13]) there are $A_j \in B(H, H)$, $j = 1, \ldots, k$, such that

(1.4.9) $\qquad \sum_{i=1}^{N} \Big\|\Big(\sum_{j=1}^{k} X_j A_j\Big) e_i - Z_N e_i\Big\|^2 < \frac{\epsilon}{2}.$

Let $B_j = A_j P_N$; then $B_j e_i = A_j e_i$ for $i \le N$ and $B_j e_i = 0$ for $i > N$, so from (1.4.8) and (1.4.9) we get

(1.4.10) $\qquad \Big\|\sum_{j=1}^{k} X_j B_j - Z\Big\|_{HS}^2 = \sum_{i=1}^{\infty} \Big\|\Big(\sum_{j=1}^{k} X_j B_j\Big) e_i - Z e_i\Big\|^2 < \epsilon.$

Hence $Z \in M_X$. The inclusion $M_X \subset HS(H, \bar M_X)$ is clear, and the proof is complete.

1.4.11. Remark (notation). For a subfamily $\{X_t : t \in J\}$ we write $G\{X_t : t \in J\}$ for the submodule of $HS(H, K)$ generated by $\{X_t : t \in J\}$ over $B(H, H)$, i.e. the $\|\cdot\|_{HS}$-closure of the finite sums $\sum_{t} X_t B_t$. We set

$$M_t^-(X) = G\{X_u : u \le t\}, \qquad M_t^+(X) = G\{X_u : u \ge t\}, \qquad M_{u,v}^+(X) = G\{X_s : s \notin (u, v)\}, \qquad M_{X_t} = G\{X_t\},$$

and for the tail fields

$$M_{+\infty}(X) = \bigcap_{t \in T} M_t^+(X), \qquad M_{-\infty}(X) = \bigcap_{t \in T} M_t^-(X), \qquad M_\infty(X) = \bigcap_{u < v} M_{u,v}^+(X).$$

For simplicity we will write "$G\{\ldots\}$" instead of "$G\{\ldots\}$ over $B(H, H)$", and $M_t^-$, $M_t^+$, $M_{u,v}^+$, ... for $M_t^-(X)$, $M_{u,v}^+(X)$, ..., unless otherwise stated.

Having the remark (1.4.11) in mind, in the sequel we make the following assumption.

1.4.13. Assumption. $R(r(s,t)) \subset R(r(t,t))$, where $r(s, t)$ is the covariance of the process, given by $r(s, t) = X_t^* X_s$, and $R(\cdot)$ denotes the range.

Under this assumption we will have

$$(X_s \mid G\{X_{t_1}, \ldots, X_{t_n}\}) = \sum_{i=1}^{n} X_{t_i} B_i, \qquad B_i \in B(H, H).$$

To see this, let us first prove it for $n = 2$. Writing $X_1 = X_{t_1}$ and $X_2 = X_{t_2}$, we have $\bar R\{X_1, X_2\} = \bar R\{X_1\} \oplus \bar R\{X_2 - (X_2 \mid G\{X_1\})\}$, so

$$(X_s \mid G\{X_{t_1}, X_{t_2}\}) = P_{\bar R\{X_1, X_2\}} X_s = (X_s \mid G\{X_1\}) + (X_s \mid G\{X_2 - (X_2 \mid G\{X_1\})\}).$$

By Lemma 1.4 of [11] and assumption (1.4.13) we get

$$(X_s \mid G\{X_1\}) = X_1 A_1, \qquad (X_s \mid G\{X_2 - (X_2 \mid G\{X_1\})\}) = (X_2 - (X_2 \mid G\{X_1\})) A_2, \qquad (X_2 \mid G\{X_1\}) = X_1 A_3,$$

for some $A_1, A_2, A_3$ in $B(H, H)$. Therefore

$$(X_s \mid G\{X_{t_1}, X_{t_2}\}) = X_1 A_1 + (X_2 - X_1 A_3) A_2 = X_1 (A_1 - A_3 A_2) + X_2 A_2.$$

For $n > 2$ we note that

$$(X_s \mid G\{X_{t_1}, \ldots, X_{t_n}\}) = (X_s \mid G\{X_{t_1}, \ldots, X_{t_{n-1}}\}) + (X_s \mid G\{X_n - (X_n \mid G\{X_{t_1}, \ldots, X_{t_{n-1}}\})\}),$$

and by induction we get the result.
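In the finite-dimensional Gaussian case the coefficients $B_i$ above can be computed from the covariances alone. The sketch below (our own illustration; the joint covariance is a hypothetical random choice) takes jointly Gaussian mean-zero vectors $X_s, X_t$ in $\mathbb{R}^n$ and recovers the best linear predictor, the finite-dimensional analogue of the projection $(X_s \mid G\{X_t\}) = X_t B$ of Lemma 1.4 of [11].

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# Jointly Gaussian mean-zero vectors (X_s, X_t) in R^n, built from a
# random joint covariance; r(a, b) below stands for E[X_a X_b^T].
L = rng.standard_normal((2 * n, 2 * n))
Sigma = L @ L.T
r_st, r_tt = Sigma[:n, n:], Sigma[n:, n:]

# Best linear predictor: E[X_s | X_t] = C X_t with C = r(s,t) r(t,t)^{-1};
# in the operator notation of the text, (X_s | G{X_t}) = X_t B with B = C^T.
C = r_st @ np.linalg.inv(r_tt)

# Monte Carlo check that the residual X_s - C X_t is uncorrelated with X_t.
Z = rng.standard_normal((100_000, 2 * n)) @ L.T
Xs, Xt = Z[:, :n], Z[:, n:]
resid = Xs - Xt @ C.T
print(np.abs(resid.T @ Xt / len(Z)).max())  # ~ 0
```

The vanishing cross-covariance of the residual with $X_t$ is exactly property (1.4.6)(e) in this finite-dimensional setting.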
1.5. Reciprocal processes. As stated before, Jamison [5] introduced the notion of reciprocal processes, which were called Markov-like processes by Slepian [15]. In the following we give a representation of HSO-valued reciprocal processes in terms of HSO-valued Markov processes and, under further conditions, in terms of HSO-valued martingales. The notations are the same as in (1.4.4) and (1.4.11).

1.5.1. Definition ([11]). An HSO-valued process $\{X_t,\ t \in T\}$ is called a
(i) martingale, if $(X_t \mid M_s^-) = X_s$ for all $t \ge s$ in $T$;
(ii) Markov process, if $(X_t \mid M_s^-) = (X_t \mid M_{X_s})$ for all $t \ge s$ in $T$;
(iii) reciprocal process, if $(X_t \mid M_{u,v}^+) = (X_t \mid G\{X_u, X_v\})$, $u \le t \le v$.

It is clear that (i) implies (ii). In fact, under some conditions there is a very close tie between martingales and Markov processes. In [11] it is shown that if $r^{-1}(s,s)\, r(s,t)$ is one-to-one for all $s \le t$, then $X_t = U_t\, \phi(t)$, where $U_t$ is a martingale and $\phi(t)$ is in $B(H, H)$; moreover this representation is unique, and it is a necessary and sufficient condition for $\{X_t,\ t \in T\}$ to be a Markov process. Before discussing the relations between (ii) and (iii) of Definition (1.5.1), let us give some expected elementary properties of HSO-valued Markov processes.

1.5.2. Theorem. Let $X = \{X_t,\ t \in T\}$ be an HSO-valued stochastic process. Then:
(a) $X$ is Markov if and only if for each $N \subset M_t^+$, $t > s$, we have $(N \mid M_s^-) = (N \mid M_{X_s})$;
(b) if $X$ is Markov, then $(X_t \mid M_v^+) = (X_t \mid M_{X_v})$, $t \le v$.

Proof. (a) is obvious from the definition of a Markov process.
(b) By the definition of the projection, for $t \le v$ we have

$$(X_t \mid M_v^+) = (X_t \mid M_{X_v}) + (X_t \mid M_v^+ \ominus M_{X_v}).$$

For any $s \ge v$, the Markov property gives $(X_s \mid M_v^-) = (X_s \mid M_{X_v})$, so that

(1.5.3) $\qquad X_s - (X_s \mid M_{X_v}) \perp M_v^-.$

Since $\bar R\{X_t\} \subset \bar R\{X_u : u \le v\}$, (1.5.3) gives that $X_s - (X_s \mid M_{X_v})$ is orthogonal to $X_t$. But the elements $X_s - (X_s \mid M_{X_v})$, $s \ge v$, generate $M_v^+ \ominus M_{X_v}$; thus $(X_t \mid M_v^+ \ominus M_{X_v}) = 0$, and we get $(X_t \mid M_v^+) = (X_t \mid M_{X_v})$.

Now we return to Definition (1.5.1) and prove that (ii) implies (iii).

1.5.4. Theorem. If $\{X_t,\ t \in T\}$ is a Markov process, then it has the reciprocal property.

Proof. Let $u < t < v$. Since $M_{u,v}^+ = M_u^- \vee (M_{u,v}^+ \ominus M_u^-)$, we have

(1.5.5) $\qquad (X_t \mid M_{u,v}^+) = (X_t \mid M_u^-) + (X_t \mid M_{u,v}^+ \ominus M_u^-) = (X_t \mid M_{X_u}) + (X_t \mid M_{u,v}^+ \ominus M_u^-).$

On the other hand,

$$(X_t \mid M_{u,v}^+) = (X_t \mid M_v^+) + (X_t \mid M_{u,v}^+ \ominus M_v^+) = (X_t \mid M_{X_v}) + (X_t \mid M_{u,v}^+ \ominus M_v^+).$$

We note that $M_{u,v}^+ \ominus M_v^+ \subset M_u^-$ and $M_{u,v}^+ \ominus M_u^- \subset M_v^+$. Also, comparing the two values of $(X_t \mid M_{u,v}^+)$,

$$(X_t \mid M_{X_u}) - (X_t \mid M_{X_v}) = (X_t \mid M_{u,v}^+ \ominus M_v^+) - (X_t \mid M_{u,v}^+ \ominus M_u^-).$$

But by Lemma 1.4 of [11] there exist $A, B \in B(H, H)$ such that

(1.5.6) $\qquad (X_t \mid M_{X_u}) = X_u A, \qquad (X_t \mid M_{X_v}) = X_v B,$

so we get

$$X_u A - X_v B = (X_t \mid M_{u,v}^+ \ominus M_v^+) - (X_t \mid M_{u,v}^+ \ominus M_u^-).$$

Now, projecting the above equality on $M_v^+$, we get

$$(X_u \mid M_v^+) A - X_v B = -(X_t \mid M_{u,v}^+ \ominus M_u^-)$$

[the projection of the first term on the right on $M_v^+$ is $0$, and since $M_{u,v}^+ \ominus M_u^- \subset M_v^+$ the second term remains the same]. By (1.5.2)(b) and Lemma 1.4 of [11], $(X_u \mid M_v^+) = (X_u \mid M_{X_v}) = X_v C$ for some $C \in B(H, H)$, so

(1.5.7) $\qquad X_v(CA - B) = -(X_t \mid M_{u,v}^+ \ominus M_u^-).$

Now, by (1.5.5), (1.5.6) and (1.5.7) we get

(1.5.8) $\qquad (X_t \mid M_{u,v}^+) = X_u A - X_v(CA - B).$

Therefore by (1.4.6)(b),(c) and (1.5.8) we get

$$(X_t \mid G\{X_u, X_v\}) = ((X_t \mid M_{u,v}^+) \mid G\{X_u, X_v\}) = (X_u A - X_v(CA - B) \mid G\{X_u, X_v\}) = X_u A - X_v(CA - B),$$

and by (1.5.8) this is equal to $(X_t \mid M_{u,v}^+)$. Therefore $(X_t \mid M_{u,v}^+) = (X_t \mid G\{X_u, X_v\})$, $u < t < v$. This completes the proof. (A numerical illustration is sketched after Example (1.5.9) below.)

In general, (iii) of (1.5.1) does not imply (ii).

1.5.9. Example. Let $T = \mathbb{R}$ and

$$X_t = \begin{cases} X & \text{if } t = 0, \\ Y & \text{if } t \ne 0, \end{cases}$$

where $X$ and $Y$ are two HSO's in $HS(H, K)$ such that $X(H) \perp Y(H)$ and neither of them is zero, i.e. $X(H) \ne 0$ and $Y(H) \ne 0$. Then $X_t$ is reciprocal but not Markov.
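Theorem (1.5.4) can be checked numerically in the simplest scalar case. The sketch below (our own illustration; Brownian motion and the particular grid are hypothetical choices, not taken from the text) regresses $X_t$, $t \in (u, v)$, on five samples outside $(u, v)$ for the covariance $r(s, t) = \min(s, t)$; the regression weights come out supported only on the two boundary points, as the reciprocal property predicts.

```python
import numpy as np

# A numerical look at Theorem (1.5.4) for standard Brownian motion,
# covariance r(s, t) = min(s, t): regressing X_t, t in (u, v), on the
# samples outside (u, v) should put weight only on X_u and X_v.
outside = np.array([0.2, 0.5, 1.0, 2.0, 2.5])  # u = 1.0, v = 2.0
t = 1.4

r = lambda a, b: np.minimum.outer(a, b)
Sigma = r(outside, outside)                # covariance of the outside samples
c = r(np.array([t]), outside).ravel()      # Cov(X_t, outside samples)

weights = np.linalg.solve(Sigma, c)        # best linear predictor weights
print(np.round(weights, 10))
# only the entries for u = 1.0 and v = 2.0 are nonzero:
# (v - t)/(v - u) = 0.6 and (t - u)/(v - u) = 0.4
```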
Now the question is under what conditions the reciprocal property implies the Markov property. As stated in Remark (1.3.4), Royer and Ngoc [12] studied this question and gave the following answer.

1.5.10. Theorem (Royer and Ngoc [12]). Let $T = \mathbb{R}$ and let $X = \{X_t,\ t \in T\}$ be an $E$-valued ($E$ any state space) stochastic process such that:
(i) $X$ has the Markov property on each open bounded interval $(a, b)$;
(ii) either $\bigcap_t \sigma\{X_u : u \ge t\} = \{\emptyset, \Omega\}$ or $\bigcap_t \sigma\{X_u : u \le t\} = \{\emptyset, \Omega\}$;
(iii) for each $t' < t < t''$ in $T$ there are three finite measures $\nu_{t'}, \nu_t, \nu_{t''}$ such that the joint distribution $\mu_{t', t, t''}$ of $X_{t'}, X_t, X_{t''}$ is absolutely continuous with respect to the direct product $\nu_{t'} \otimes \nu_t \otimes \nu_{t''}$.
Then $X$ is a Markov process.

1.5.11. Remark. (i) In the case of nondegenerate Gaussian processes, condition (iii) of (1.5.10) holds automatically; it suffices to take $\nu_{t'}, \nu_t$ and $\nu_{t''}$ to be the distributions of $X_{t'}, X_t$ and $X_{t''}$, respectively.
(ii) In the case $T = [a, b]$, (1.5.10)(ii) implies that either $X_a$ or $X_b$ is constant; in this case we have the result of (1.5.10) even without condition (1.5.10)(iii) [6].

A theorem similar to (1.5.10) for the general case of HSO-valued processes is of interest. But in the following we consider nondegenerate Hilbert-space-valued Gaussian processes.

1.5.12. Theorem. Let $X = \{X_t,\ t \in \mathbb{R}\}$ be a nondegenerate Hilbert-space-valued reciprocal Gaussian process with either $M_{-\infty}$ or $M_{+\infty} = \{0\}$. Then $X$ is a Markov process.

Proof. Assume $M_{+\infty} = \{0\}$; a similar proof can be given if $M_{-\infty} = \{0\}$. Let $s < t < n$, where $s, t \in T$ and $n$ is an integer. Now $M_s^- \vee G\{X_n, X_{n+1}, \ldots\} \subset M_{s,n}^+$ implies

$$(X_t \mid M_s^- \vee G\{X_n, X_{n+1}, \ldots\}) = ((X_t \mid M_{s,n}^+) \mid M_s^- \vee G\{X_n, X_{n+1}, \ldots\}).$$

By the reciprocal property this equals $((X_t \mid G\{X_s, X_n\}) \mid M_s^- \vee G\{X_n, X_{n+1}, \ldots\})$, and by (1.4.6)(b) this is equal to $(X_t \mid G\{X_s, X_n\})$. Again using the reciprocal property we get

$$(X_t \mid G\{X_s, X_n\}) = (X_t \mid G\{X_s\} \vee G\{X_n, X_{n+1}, \ldots\}).$$

Therefore we have the equality

(1.5.13) $\qquad (X_t \mid M_s^- \vee G\{X_n, X_{n+1}, \ldots\}) = (X_t \mid M_{X_s} \vee G\{X_n, X_{n+1}, \ldots\}).$

Now by the assumption on the process and [12] we get $G\{X_s\} \vee G\{X_n, X_{n+1}, \ldots\} \to G\{X_s\}$ as $n \to \infty$. Therefore by the properties of the projections, the right side of (1.5.13) converges:

$$(X_t \mid M_{X_s} \vee G\{X_n, X_{n+1}, \ldots\}) \to (X_t \mid M_{X_s}) \qquad \text{as } n \to \infty.$$

Now, projecting both sides of (1.5.13) on $M_s^-$, we get $(X_t \mid M_s^-) = (X_t \mid M_{X_s})$, and this completes the proof.

1.5.14. Remark. In the case of a finite interval $T = [a, b]$, $M_{+\infty} = \{0\}$ is equivalent to $X_b = 0$, and in this case the theorem can be stated even for the general case of HSO-valued processes and proved very easily:

$$(X_t \mid M_s^-) = (X_t \mid M_s^- \vee G\{X_b\});$$

by the reciprocal property this equals $(X_t \mid G\{X_s\} \vee G\{X_b\})$, which is equal to $(X_t \mid G\{X_s\})$.

What follows is the main theorem of this chapter; it gives a representation of reciprocal processes. We recall that we assume (1.4.13).

1.5.15. Representation theorem. A nondegenerate Gaussian Hilbert-space-valued process $\{X_t,\ t \in T\}$ is a reciprocal process if and only if it has the representation

$$X_t = Y_t + Z_t,$$

where
(i) $\{Y_t,\ t \in T\}$ is a Markov process, orthogonal to $\{Z_t\}$, with $M_\infty^{(Y)} = \{0\}$;
(ii) $Z_t$ is in $M_\infty$ for all $t$ in $T$.

Moreover, this representation is unique in the sense that if $X_t = Y_t^{(1)} + Z_t^{(1)}$, where $Y_t^{(1)}$ and $Z_t^{(1)}$ satisfy (i) and (ii) in place
of $Y_t$ and $Z_t$, respectively, then $Y_t = Y_t^{(1)}$ and $Z_t = Z_t^{(1)}$.

Proof. Let $Z_t = (X_t \mid M_\infty)$ and $Y_t = X_t - Z_t$. It is clear that $Z_t$ is in $M_\infty$ and $Y_t$ is orthogonal to $Z_t$. All we have to show is the Markov property of $Y_t$. To show this we prove that $Y_t$ has the reciprocal property and $M_\infty^{(Y)} = \{0\}$; then by Theorem (1.5.12) we get the Markov property of $Y_t$. We note that

$$M_{u,v}^{+(Y)} = G\{Y_t : t \notin (u, v)\} = G\{X_t - Z_t : t \notin (u, v)\} \subset G\{X_t : t \notin (u, v)\} \vee G\{Z_t : t \notin (u, v)\} = M_{u,v}^+,$$

since $Z_t \in M_\infty \subset M_{u,v}^+$. This gives $M_\infty^{(Y)} \subset M_\infty$; on the other hand, by (1.4.6)(e) each $Y_t$ is orthogonal to $M_\infty$, so $M_\infty^{(Y)} \perp M_\infty$; therefore $M_\infty^{(Y)} = \{0\}$.

To see the reciprocal property of $\{Y_t\}$, let $u < v$ and $t \in (u, v)$; then

$$(Y_t \mid M_{u,v}^{+(Y)}) = (Y_t \mid M_{u,v}^+),$$

and by the reciprocal property of $\{X_t\}$ we get

$$(Y_t \mid M_{u,v}^{+(Y)}) = (X_t \mid G\{X_u, X_v\}) - Z_t = X_u A + X_v B - Z_t = (X_t \mid M_{u,v}^+) - Z_t,$$

for some $A, B$ in $B(H, H)$ ($A, B$ depend on $u, v, t$). Now, substituting for $X_u$ and $X_v$ in terms of $Y$ and $Z$, we get

$$X_u A + X_v B = Y_u A + Y_v B + Z_u A + Z_v B = Y_u A + Y_v B + (X_u A + X_v B \mid M_\infty) = Y_u A + Y_v B + ((X_t \mid M_{u,v}^+) \mid M_\infty) = Y_u A + Y_v B + (X_t \mid M_\infty).$$

So we get

$$(Y_t \mid M_{u,v}^{+(Y)}) = Y_u A + Y_v B + (X_t \mid M_\infty) - (X_t \mid M_\infty) = Y_u A + Y_v B.$$

Therefore $\{Y_t\}$ is a reciprocal process with trivial tail, so it is a Markov process.

Conversely, let $X_t$ be represented in the form given in the theorem and let $t \in (u, v)$, for $u < v$ in $T$. Then we have

$$(X_t \mid M_{u,v}^+) = (Y_t + (X_t \mid M_\infty) \mid M_{u,v}^+) = (Y_t \mid M_{u,v}^+) + (X_t \mid M_\infty) = (Y_t \mid M_{u,v}^{+(Y)} \oplus M_{u,v}^{+(Z)}) + (X_t \mid M_\infty).$$

Since $Y_t \perp M_{u,v}^{+(Z)}$, we get

$$(X_t \mid M_{u,v}^+) = (Y_t \mid M_{u,v}^{+(Y)}) + (X_t \mid M_\infty).$$

By the reciprocal property of $\{Y_t\}$ we get

(*) $\qquad (X_t \mid M_{u,v}^+) = (Y_t \mid G\{Y_u, Y_v\}) + (X_t \mid M_\infty) = Y_u A + Y_v B + (X_t \mid M_\infty) = X_u A + X_v B - (X_u A + X_v B \mid M_\infty) + (X_t \mid M_\infty)$

($A, B$ are in $B(H, H)$ and depend on $t, u, v$). On the other hand, we have

$$(X_t \mid M_{u,v}^+) = (X_t \mid (M_{u,v}^+ \ominus M_\infty) \oplus M_\infty) = (X_t \mid M_{u,v}^+ \ominus M_\infty) + (X_t \mid M_\infty).$$

Comparing the two values of $(X_t \mid M_{u,v}^+)$, noting that $X_u A + X_v B - (X_u A + X_v B \mid M_\infty)$ is orthogonal to $M_\infty$, and using the uniqueness of the representation of the form (*), we get

$$(X_t \mid M_{u,v}^+ \ominus M_\infty) = X_u A + X_v B - (X_u A + X_v B \mid M_\infty).$$

This implies that $(X_t \mid M_{u,v}^+) = X_u A + X_v B$; i.e., $\{X_t\}$ is reciprocal.

Uniqueness: since $M_\infty^{(Y)} = \{0\}$ and $Y \perp M_\infty^{(Z)}$, we get $M_\infty = M_\infty^{(Z)} = M_\infty^{(Z^{(1)})}$, so

$$(X_t \mid M_\infty) = (Y_t + Z_t \mid M_\infty) = Z_t, \qquad (X_t \mid M_\infty) = (Y_t^{(1)} + Z_t^{(1)} \mid M_\infty) = Z_t^{(1)};$$

therefore $Z_t = Z_t^{(1)}$ and $Y_t = Y_t^{(1)}$ for all $t$ in $T$.

In the case of a finite interval $T = [a, b]$ we observe that in the above argument $M_\infty = G\{X_a, X_b\}$, so

$$Z_t = (X_t \mid M_\infty) = X_a A(t) + X_b B(t), \qquad A(t), B(t) \in B(H, H).$$

Thus we get $X_t = Y_t + X_a A(t) + X_b B(t)$, $t \in T$.

In the following we consider a special choice of $H$ and derive a representation for vector-valued stochastic processes.

1.5.16. Special case. Let $H$ be the set of real or complex numbers, and let $A$ be a bounded linear operator from $H$ into a Hilbert space $K$. For each $r$ in $H$ we have $A(r) = r A(1)$; this leads us to the fact that we can identify $K$ with $HS(H, K)$, in the sense that there is a one-to-one norm-preserving correspondence $\Phi$ from $K$ onto $HS(H, K)$. For each $k$ in $K$, $\Phi(k)$ is given by $\Phi_k$, where $\Phi_k(r) = rk$, $r \in H$. We note that if $\Phi(k) = \Phi(k')$, then for each $r$ in $H$ we have $rk = rk'$, which gives $k = k'$, so $\Phi$ is one-to-one. The linearity of $\Phi$ is obvious from its definition. Also $\Phi$ is onto, and its inverse is given by $\Phi^{-1}(A) = A(1)$ for $A$ in $HS(H, K)$. Finally, $\Phi$ is norm preserving:

$$\|\Phi_k\|_{HS}^2 = \|\Phi_k(1)\|_K^2 = \|k\|^2,$$

since $\{1\}$ is an orthonormal basis of $H$.

Now let $K$ be a $q$-dimensional Gaussian space, and let $X = \{X_t,\ t \in T\}$ be a $q$-variate Gaussian stochastic process; then by (1.5.15) and (1.5.16) we have the following.
1.5.17. Corollary. Let $\{X_t,\ t \in T\}$ be a $q$-variate Gaussian stochastic process. Then it has the reciprocal property if and only if it has the representation

$$X_t = Y_t + Z_t, \qquad t \in T,$$

where $Y_t$ is a $q$-variate Markov process with trivial tails and $Z_t$ is independent of $Y_t$ and measurable with respect to the tail field of $X_t$. In the case $T = [a, b]$ we have

$$X_t = Y_t + A(t) X_a + B(t) X_b,$$

where $Y_t$ is as before and $A(t)$, $B(t)$ are $q \times q$ matrices.

In this representation, if we know that $Y_t$ is continuous in quadratic mean and $R(t, s) = E\, Y_t Y_s^*$ is nonsingular for all $s, t$ in $T$, then by [11] we have the representation $Y_t = \phi(t) U(t)$, where $\phi(t)$ is a nonsingular $q \times q$ matrix and $U(t)$ is a $q$-variate martingale. The two conditions on $\{Y_t,\ t \in T\}$ will be satisfied if we assume that $\{X_t,\ t \in T\}$ is continuous in quadratic mean and $Y_t \ne 0$ for all $t$ in $T$. By Corollary (1.5.17) the continuity of $X_t$ implies the continuity of $\{Y_t\}$ and $\{Z_t\}$. It remains to show that $R(t, s)$ is nonsingular. By the Markov property of $\{Y_t\}$, for each $s \le t' \le t$ we have

(1.5.18) $\qquad R(t, s) = R(t, t')\, R^{-1}(t', t')\, R(t', s).$

Let $s = s_0 < s_1 < \cdots < s_k = t$ be such that $|s_i - s_{i-1}| < \epsilon$, $i = 1, \ldots, k$, for a given $\epsilon > 0$. Then by (1.5.18) we get

$$R(t, s)\, R^{-1}(s, s) = \prod_{j=1}^{k} R(s_j, s_{j-1})\, R^{-1}(s_{j-1}, s_{j-1}),$$

therefore

$$\det R(t, s) \det R^{-1}(s, s) = \prod_{j=1}^{k} \det R(s_j, s_{j-1}) \det R^{-1}(s_{j-1}, s_{j-1})$$

($\det A$ is the determinant of the matrix $A$). Now if $\det R(s, t) = 0$, we get

(1.5.19) $\qquad \det R(s_i, s_{i-1}) = 0 \quad \text{for some } i.$

Let $\epsilon \to 0$ and let $u$ be an accumulation point of the collection $\{s_i\}$ satisfying (1.5.19); then by continuity of the covariance and of its determinant we get

$$0 = \lim \det R(s_i, s_{i-1}) = \det R(u, u);$$

but by assumption $\det R(u, u) \ne 0$. Hence $\det R(s, t) \ne 0$ for all $s$ and $t$ in $T$. Thus we have the following.

1.5.20. Corollary. Let $X_t$ be a centered, continuous in quadratic mean, Gaussian reciprocal process such that in the representation (1.5.17) $Y_t \ne 0$ for all $t$. Then it has the representation

$$X_t = \phi(t) U(t) + Z(t),$$

where $Z(t)$ is as in (1.5.17), $\phi(t)$ is a nonsingular $q \times q$ matrix and $U(t)$ is a $q$-variate martingale.

Under the assumptions of Corollary (1.5.20) we have the following result concerning differentiable reciprocal processes, which extends a result of [7].

1.5.21. Theorem. Let $\{X_t,\ t \in [0, T]\}$ be a centered differentiable Gaussian process. Then it is reciprocal if and only if it is the solution of a stochastic differential equation of the following form with boundary values $X_0, X_T$:

(1.5.22) $\qquad d\left(\frac{X_t}{a(t)}\right) = dU_t + Y \left(\frac{b(t)}{a(t)}\right)' dt + Z \left(\frac{c(t)}{a(t)}\right)' dt,$

where $U_t$ is a martingale independent of $Y$ and $Z$, $U_0 = U_T = 0$, $a(t)$, $b(t)$ and $c(t)$ are real functions, and $X_0 = Y$, $X_T = Z$.

Proof. Let $X_t$ be reciprocal; then by Corollary (1.5.20) it has the representation

$$X_t = a(t) U_t + b(t) X_0 + c(t) X_T, \qquad \text{with } a(t) \ne 0,$$

therefore

$$\frac{X_t}{a(t)} = U_t + X_0\, \frac{b(t)}{a(t)} + X_T\, \frac{c(t)}{a(t)},$$

and by differentiating we get (1.5.22). Conversely, if $X_t$ satisfies (1.5.22), then by integrating both sides from $0$ up to $t$ we get

$$\frac{X_t}{a(t)} = U(t) - U(0) + Y B(t) + Z C(t),$$

where $U(0) = 0$ and $B(t)$ and $C(t)$ are the integrals of $(b/a)'$ and $(c/a)'$, respectively. From here we get

$$X_t = a(t) U(t) + Y a(t) B(t) + Z a(t) C(t);$$

therefore by Corollary (1.5.17), $X_t$ is reciprocal. Imposing the boundary conditions, we get that $a(t)$, $b(t)$ and $c(t)$ satisfy the relations

$$a(0) B(0) = 1, \qquad a(T) C(T) = 1, \qquad C(0) = B(T) = 0.$$
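The finite-interval representation $X_t = Y_t + A(t) X_a + B(t) X_b$ of Corollary (1.5.17) is easy to simulate. The sketch below (our own construction; the Brownian-bridge Markov part and the coefficients $A(t) = 1 - t$, $B(t) = t$ are hypothetical choices) builds a scalar reciprocal process on $[0, 1]$ and checks that regressing an interior value on the boundary pair recovers $(A(t), B(t))$, since the Markov part is orthogonal to the boundary variables.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 50_000, 101                      # sample paths, grid points on [0, 1]
t = np.linspace(0.0, 1.0, n)

# Markov part Y: Brownian bridges, Y_0 = Y_1 = 0 (trivial tail on [0, 1]).
dW = rng.standard_normal((m, n - 1)) * np.sqrt(1.0 / (n - 1))
W = np.hstack([np.zeros((m, 1)), dW.cumsum(axis=1)])
Y = W - t * W[:, [-1]]

# Boundary part Z_t = A(t) X_0 + B(t) X_1 with (hypothetical) A = 1 - t, B = t.
X0, X1 = rng.standard_normal((2, m, 1))
X = Y + (1.0 - t) * X0 + t * X1

# Regressing an interior value on the boundary pair recovers (A(t), B(t)).
k = n // 2                              # t = 0.5
G = np.hstack([X0, X1])
coef, *_ = np.linalg.lstsq(G, X[:, k], rcond=None)
print(np.round(coef, 3))                # ~ [0.5, 0.5]
```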
In the next section we consider Gaussian stationary reciprocal processes and derive Jamison's result [5] by using the representation of the process.

1.6. Gaussian stationary reciprocal processes. Let $X = \{X_t,\ t \in [0, T]\}$, $T > 0$, be a real, continuous, stationary reciprocal Gaussian process. Here by stationarity on a bounded interval $[0, T]$ we mean that there is a stationary process on $\mathbb{R}$ which coincides with $X_t$ on $[0, T]$. We assume that $E X_t = 0$ and $E X_t^2 = 1$. By (1.5.17) we have the representation

$$X_t = Y_t + A(t) X_0 + B(t) X_T.$$

Let $r(t)$ be the covariance function of the process $X$; then we have

$$r(t) = E X_t X_0 = A(t) + B(t)\, E X_T X_0 = A(t) + B(t)\, r(T), \qquad t \in [0, T].$$

Now we consider the following cases.

(I) $A(t) X_0 + B(t) X_T = 0$ for all $t$ in $[0, T]$; i.e., the process is independent of the two boundary random variables $X_0, X_T$. In this case $X(t) = Y(t)$ is a real Gaussian stationary Markov process, so its covariance function is of the form

$$r(t) = e^{-at}, \qquad t \in [0, T],\ a > 0.$$

(II) $Y(t) = 0$ for all $t$ in $[0, T]$, and $X_0, X_T$ are independent. Let us first assume that $|r(t)| < 1$. We have

$$1 = r(0) = E X_t X_t = A^2(t) + B^2(t),$$

therefore $A(t)$ and $B(t)$ are of the forms

$$A(t) = \cos(\varphi(t)), \qquad B(t) = \sin(\varphi(t))$$

for some real function $\varphi$ on $[0, T]$. On the other hand, for $t$ and $t + h$ in $[0, T]$ we have

$$r(h) = E X_t X_{t+h} = E(A(t) X_0 + B(t) X_T)(A(t+h) X_0 + B(t+h) X_T) = A(t) A(t+h) + B(t) B(t+h) = \cos(\varphi(t))\cos(\varphi(t+h)) + \sin(\varphi(t))\sin(\varphi(t+h)) = \cos(\varphi(t+h) - \varphi(t)).$$

Therefore for each $s < t$ in $[0, T]$ we have

$$r(t - s) = \cos(\varphi(t) - \varphi(s)) = \cos(\varphi(t))\cos(\varphi(s)) + \sin(\varphi(t))\sin(\varphi(s)) = \sum_{i=1}^{2} f_i(t)\, g_i(s),$$

where $f_1(t) = \cos(\varphi(t))$, $f_2(t) = \sin(\varphi(t))$ and $g_i(s) = f_i(s)$, $i = 1, 2$. Here we show the following two facts about $\{f_1, f_2\}$ and $\{g_1, g_2\}$:

(i) $g_1$ and $g_2$ are linearly independent as elements of $L^2(0, c)$ for each $c$ in $(0, T)$;
(ii) $\det(f_i(t_j)) \ne 0$, $i, j = 1, 2$, for $t_1 \ne t_2$ in $(0, T)$.

To see (i), let $\alpha g_1(s) + \beta g_2(s) = 0$ for $s < c$. By the continuity of the process, $g_1$ and $g_2$ are continuous, so by letting $s \to 0$ we get $\alpha \cos(\varphi(0)) + \beta \sin(\varphi(0)) = 0$. But $\cos(\varphi(0)) = 1$ and $\sin(\varphi(0)) = 0$; therefore $\alpha = 0$ and $\beta \sin(\varphi(s)) = 0$ for each $s < c$. But $\sin(\varphi(s)) \ne 0$ on $(0, T)$ (if $\sin(\varphi(s_0)) = 0$, then $\cos(\varphi(s_0)) = \pm 1$, and this implies that $|r(s_0)| = 1$); thus $\beta = 0$. This proves (i). For (ii) we note that

$$\det(f_i(t_j)) = \det \begin{pmatrix} \cos(\varphi(t_1)) & \cos(\varphi(t_2)) \\ \sin(\varphi(t_1)) & \sin(\varphi(t_2)) \end{pmatrix} = \sin(\varphi(t_2) - \varphi(t_1)).$$

Now if $\sin(\varphi(t_2) - \varphi(t_1)) = 0$, then $\cos(\varphi(t_2) - \varphi(t_1)) = \pm 1$, which implies that $|r(t_2 - t_1)| = 1$. This proves (ii). Therefore all the conditions of Lemma (II.1) of [3] are satisfied, so the $f_i$'s are the fundamental solutions of a differential equation of order 2 with constant coefficients. Since the $f_i$'s are real trigonometric functions, the only possibility is that $f_1(t) = \cos(at)$. Hence in this case

$$r(t) = \cos(at) \qquad \text{for some } a > 0.$$

The case $|r(t)| = 1$ will be discussed after case (III).

(III) In this case all parts of the representation are present, and we are assuming that $r(T) = E X_0 X_T = -1$. Since $A(t) X_0 + B(t) X_T - X_t$ is orthogonal to $X_0$ and $X_T$ ($A(t) X_0 + B(t) X_T$ is the orthogonal projection of $X_t$ on the space generated by $\{X_0, X_T\}$), we have

$$E(A(t) X_0 + B(t) X_T - X_t) X_0 = 0, \qquad E(A(t) X_0 + B(t) X_T - X_t) X_T = 0,$$

which gives us

$$A(t) - B(t) = r(t), \qquad -A(t) + B(t) = r(T - t).$$

Therefore, by adding these two equations, we get $r(t) + r(T - t) = 0$. One of the solutions of this equation is

$$r(t) = 1 - \frac{t}{a}, \qquad \text{with } a = \frac{T}{2}.$$

Now we return to the case $|r(t_0)| = 1$ for some $t_0$ in $(0, T)$. In this case, as shown in [5], we have $r(t) = 1$ for all $t$ in $(0, T)$, which is a special case of $e^{-at}$ with $a = 0$.
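A quick simulation of case (II) (our own illustration; the frequency $a = 2\pi$ and the lag are hypothetical choices): the process $X_t = \cos(at)\,\xi + \sin(at)\,\eta$ with independent standard Gaussian $\xi, \eta$ is stationary with covariance $\cos(ah)$, even though it is carried entirely by two random variables.

```python
import numpy as np

rng = np.random.default_rng(3)
a, h = 2 * np.pi, 0.1                  # frequency and a fixed lag (hypothetical)
xi, eta = rng.standard_normal((2, 500_000))

def X(t):
    # Case (II) process: carried entirely by the boundary pair (xi, eta).
    return np.cos(a * t) * xi + np.sin(a * t) * eta

for t in (0.0, 0.3, 0.7):
    emp = np.mean(X(t) * X(t + h))     # empirical E[X_t X_{t+h}]
    print(t, round(emp, 3), round(np.cos(a * h), 3))
# the empirical covariance depends only on the lag h, matching cos(a h)
```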
CHAPTER 2

N-PLE MARKOV PROCESSES AND N-PLE RECIPROCAL PROCESSES

2.1. N-ple Markov processes. Let $\{X_t,\ t \in \mathbb{R}\}$ be a real-valued Gaussian process, with mean zero and continuous in quadratic mean, having the GFMP on the sets of the form $(-\infty, t)$, $t \in \mathbb{R}$; i.e.

$$\sigma\{X_s : s > t\} \perp \sigma\{X_s : s < t\} \mid \Gamma_t,$$

where $\Gamma_t$ is the germ field, given by

$$\Gamma_t = \bigcap_n \sigma\{X_s : |t - s| < \tfrac{1}{n}\}.$$

By [10] this property is equivalent to the following:

$$Z_t^+ \perp Z_t^- \mid \Gamma_t, \qquad \text{where } Z_t^+ = \bigcap_n \sigma\{X_s : s > t - \tfrac{1}{n}\},\ Z_t^- = \bigcap_n \sigma\{X_s : s < t + \tfrac{1}{n}\}.$$

If the process is $N-1$ times continuously differentiable and the germ field $\Gamma_t$ is generated by $X(t), X'(t), \ldots, X^{(N-1)}(t)$, the process has the N-ple Markov property in the sense of Doob [2]. Here it is understood that $X(t), \ldots, X^{(N-1)}(t)$ are linearly independent as elements of $L^2(\Omega, g_t, P)$, where $g_t = \bigcap_n \sigma\{X_s : |t - s| < \tfrac{1}{n}\}$. The following is a generalization of this notion.

2.1.1. Definition. A process $X = \{X_t,\ t \in T\}$ is called a generalized N-ple Markov process with respect to the processes $\{Y_i(t),\ t \in T\}_{i=1,\ldots,N}$ if:
(i) for each $t$ in $T$, $Y_1(t), \ldots, Y_N(t)$ are linearly independent as elements of $L^2(\Omega, g_t, P)$, where $g_t = \bigcap_n \sigma\{X_s : |t - s| < \tfrac{1}{n}\}$;
(ii) $Z_t^+ \perp Z_t^- \mid \Gamma_t$, where

$$Z_t^+ = \bigcap_{\epsilon > 0} \sigma\{X_u : u > t - \epsilon\}, \qquad Z_t^- = \bigcap_{\epsilon > 0} \sigma\{X_u : u < t + \epsilon\}, \qquad \Gamma_t = \sigma\{Y_1(t), \ldots, Y_N(t)\},$$

and $\Gamma_t$ is the germ field at $t$.

We have the following immediate result concerning the process $Z(t) = (Y_1(t), \ldots, Y_N(t))^*$ ($*$ means the transpose of a matrix).

2.1.2. Theorem. If $\{X_t,\ t \in T\}$ is a generalized N-ple Markov process with respect to $\{Y_i(t)\}_{i=1,\ldots,N}$, then the process $Z(t) = (Y_1(t), \ldots, Y_N(t))^*$ is a Markov process.

Proof. By assumption we have

(2.1.3) $\qquad \sigma\{X_u : u \ge s\} \perp \sigma\{X_u : u \le s\} \mid \sigma(Z(s)),$

where $A \perp B \mid G$ means that, given $G$, $A$ and $B$ are conditionally independent. For each $\epsilon > 0$ we have

$$\sigma\{Z_u : u \ge s + \epsilon\} \subset \sigma\{X_u : u \ge s\} \qquad \text{and} \qquad \sigma\{Z_u : u \le s - \epsilon\} \subset \sigma\{X_u : u \le s\};$$

therefore by (2.1.3) we have

$$\sigma\{Z_u : u \ge s + \epsilon\} \perp \sigma\{Z_u : u \le s - \epsilon\} \mid \sigma\{Z(s)\},$$

so

$$\bigvee_{\epsilon > 0} \sigma\{Z_u : u \ge s + \epsilon\} \perp \bigvee_{\epsilon > 0} \sigma\{Z_u : u \le s - \epsilon\} \mid \sigma\{Z(s)\},$$

thus $\sigma\{Z_u : u > s\} \perp \sigma\{Z_u : u < s\} \mid \sigma(Z(s))$. Finally, by (1.1.1)(b) we get

$$\sigma\{Z_u : u \ge s\} \perp \sigma\{Z_u : u \le s\} \mid \sigma(Z(s)),$$

and this completes the proof.

This simple fact leads us to a Goursat-type ([8], p. 74) representation of generalized N-ple Markov processes.

2.1.4. Theorem. Let $\{X_t,\ t \in T\}$ be a Gaussian generalized N-ple Markov process with respect to the Gaussian processes $\{Y_i(t),\ t \in T\}_{i=1,\ldots,N}$. If the covariance matrix $r(t, s) = E(Z(t) Z^*(s))$ of $Z(t) = (Y_1(t), \ldots, Y_N(t))^*$ is nonsingular, then

(2.1.5) $\qquad X_t = \sum_{i=1}^{N} \psi_i(t)\, U_i(t),$

where $\psi_i(t)$, $i = 1, \ldots, N$, are $N$ real functions and $U(t) = (U_1(t), \ldots, U_N(t))$ is an $N$-variate martingale.

Proof. From (2.1.2), $Z(t)$ is an $N$-variate Gaussian Markov process. Therefore by (3.1) of [7] it has the representation

$$Z(t) = \phi(t)\, U(t),$$

where $\phi(t)$ is an $N \times N$ nonsingular matrix and $U(t)$ is an $N$-variate martingale. On the other hand, since $X_t$ is measurable with respect to both $Z_t^+$ and $Z_t^-$, the conditional independence in Definition (2.1.1)(ii) gives

$$X_t = E(X_t \mid \Gamma_t) = E(X_t \mid Z(t)) = A(t) Z(t),$$

where $A(t)$ is a $1 \times N$ matrix. So we have

$$X_t = A(t)\, \phi(t)\, U(t) = \psi(t)\, U(t),$$

where $\psi(t) = (\psi_1(t), \ldots, \psi_N(t)) = A(t)\, \phi(t)$, which is (2.1.5).
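The Goursat form (2.1.5) is easy to simulate. The sketch below (our own construction; the choices $N = 2$, $\psi = (\cosh, \sinh)$ and independent Brownian martingales are hypothetical) builds $X_t = \psi_1(t) U_1(t) + \psi_2(t) U_2(t)$ and checks the characteristic factorization of the covariance for $s \le t$: $E[X_t X_s] = \sum_i \psi_i(t)\, E[U_i(s) X_s]$, which follows from the martingale property of $U$.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 50_000, 101
tg = np.linspace(0.0, 1.0, n)

# A 2-variate martingale U: two independent Brownian motions (hypothetical).
dW = rng.standard_normal((m, 2, n - 1)) * np.sqrt(tg[1])
U = np.concatenate([np.zeros((m, 2, 1)), dW.cumsum(axis=2)], axis=2)

# Goursat / 2-ple Markov process X_t = psi_1(t) U_1(t) + psi_2(t) U_2(t).
psi = np.stack([np.cosh(tg), np.sinh(tg)])          # hypothetical psi_i
X = (psi[None] * U).sum(axis=1)

# For s <= t the covariance factors: E[X_t X_s] = sum_i psi_i(t) E[U_i(s) X_s].
i, j = 80, 30                                       # t = tg[i], s = tg[j]
lhs = np.mean(X[:, i] * X[:, j])
rhs = psi[:, i] @ np.mean(U[:, :, j] * X[:, [j]], axis=0)
print(round(lhs, 3), round(rhs, 3))                 # agree up to sampling error
```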
2.2. N-ple reciprocal processes. In analogy with Definition (2.1.1), we call $X = \{X_t,\ t \in T\}$ an N-ple reciprocal process with respect to $\{Y_i(t),\ t \in T\}_{i=1,\ldots,N}$ if $Y_1(t), \ldots, Y_N(t)$ are linearly independent as elements of $L^2(\Omega, g_t, P)$ and

2.2.1. $\qquad Z_{u,v}^- \perp Z_{u,v}^+ \mid \Gamma_{u,v}$ for all $u < v$ in $T$,

where

$$Z_{u,v}^- = \bigcap_{\epsilon > 0} \sigma\{X_t : t \in (u + \epsilon, v - \epsilon)\}, \qquad Z_{u,v}^+ = \bigcap_{\epsilon > 0} \sigma\{X_t : t \notin (u - \epsilon, v + \epsilon)\},$$

$$\Gamma_{u,v} = \sigma\{Y_1(u), \ldots, Y_N(u);\ Y_1(v), \ldots, Y_N(v)\}.$$

Parallel to the generalized N-ple Markov processes, Theorem (2.1.2), we have the following; its proof is essentially the same as the one in (2.1.2).

2.2.2. Theorem. If $\{X_t,\ t \in T\}$ is an N-ple reciprocal process with respect to $\{Y_i(t)\}_{i=1,\ldots,N}$, then $Z(t) = (Y_1(t), \ldots, Y_N(t))^*$ is a reciprocal process.

Proof. By the assumption we have

$$\sigma\{X_t : t \in (u, v)\} \perp \sigma\{X_t : t \notin [u, v]\} \mid \sigma\{Z(u), Z(v)\}.$$

Therefore for each $\epsilon > 0$ we have

$$\sigma\{Z_t : t \in (u + \epsilon, v - \epsilon)\} \perp \sigma\{Z_t : t \notin [u - \epsilon, v + \epsilon]\} \mid \sigma\{Z(u), Z(v)\},$$

so

$$\bigvee_{\epsilon>0} \sigma\{Z_t : t \in (u + \epsilon, v - \epsilon)\} \perp \bigvee_{\epsilon>0} \sigma\{Z_t : t \notin [u - \epsilon, v + \epsilon]\} \mid \sigma\{Z(u), Z(v)\},$$

thus $\sigma\{Z_t : t \in (u, v)\} \perp \sigma\{Z_t : t \notin (u, v)\} \mid \sigma\{Z(u), Z(v)\}$, and this completes the proof.

In the following we give a representation for the N-ple reciprocal processes. For this we need a result similar to that of Royer and Ngoc [12] for the N-ple reciprocal processes. We will use the following notation:

$$F_u^- = \sigma\{X_t : t \le u\}, \qquad F_{u,v}^+ = \sigma\{X_t : t \notin (u, v)\}, \qquad F_{+\infty} = \bigcap_t \sigma\{X_u : u \ge t\}, \qquad F_{-\infty} = \bigcap_t \sigma\{X_u : u \le t\},$$

as well as the notation of Definitions (2.1.1) and (2.2.1).

2.2.3. Lemma. Let $\{X_t,\ t \in \mathbb{R}\}$ be a Gaussian N-ple reciprocal process with respect to the processes $\{Y_i(t),\ i = 1, \ldots, N\}$. If either $F_{+\infty}(Y)$ or $F_{-\infty}(Y)$ is trivial, then $\{X_t,\ t \in \mathbb{R}\}$ is an N-ple Markov process with respect to $\{Y_i(t),\ i = 1, \ldots, N\}$.

Proof. Let $Z(t) = (Y_1(t), \ldots, Y_N(t))^*$; then by (2.2.2), $\{Z(t),\ t \in \mathbb{R}\}$ is a reciprocal process with trivial tail, and therefore from Royer and Ngoc [12] we get

(2.2.4) $\qquad \lim_{n \to \infty} \sigma\{Z_t\} \vee \sigma\{Z_n, Z_{n+1}, \ldots\} = \sigma\{Z_t\}.$

Therefore for $f$ bounded and measurable with respect to $\sigma\{X_t : t \in (u, v)\}$, and an integer $n$ with $u < v < n$, we have

(2.2.5) $\qquad E(f \mid Z_{u,n}^+(X)) = E(f \mid Z_u, Z_n) = E(f \mid \sigma\{Z_u\} \vee \sigma\{Z_n, Z_{n+1}, \ldots\})$

(by the reciprocal property). The last equality holds because of the reciprocal property and the fact that $\sigma\{Z_n, Z_{n+1}, \ldots\} \subset \sigma\{X_t : t \notin (u, v)\}$, as we showed in the proofs of Theorems (2.1.2) and (2.2.2). Therefore by the martingale convergence theorem, (2.2.4) and (2.2.5), we get

$$\lim_{n \to \infty} E(f \mid Z_{u,n}^+(X)) = E(f \mid \sigma(Z_u)).$$

Taking conditional expectations with respect to $F_u^-(X)$ and using the dominated convergence theorem for conditional expectations, we get $E(f \mid F_u^-(X)) = E(f \mid Z_u)$. This completes the proof. For the case $T = [a, b]$ the proof, being similar, is omitted.

Now we are in a position to give a representation for the N-ple reciprocal processes.

2.2.6. Theorem. Let $X = \{X_t,\ t \in T\}$ be a Gaussian N-ple reciprocal process. Then it has the representation

$$X_t = U_t + V_t,$$

where $U_t$ is at most an N-ple Markov process and independent of $V_t$, $F_\infty^{(U)} = \{\emptyset, \Omega\}$, and $V_t$ is measurable with respect to $F_\infty^{(X)}$.

Proof. Let $U_t = X_t - P_{H_\infty} X_t$ and $V_t = P_{H_\infty} X_t$, where

$$H_\infty = \bigcap_{u > 0} \overline{\mathrm{sp}}\{X_t : |t| > u\}.$$

It is clear that $U_t$ is orthogonal to $H_\infty$. We note that

$$\bigcap_{u > 0} \overline{\mathrm{sp}}\{U_t : |t| > u\} \subset \bigcap_{u > 0} \overline{\mathrm{sp}}\{X_t : |t| > u\} \ominus H_\infty = H_\infty \ominus H_\infty = \{0\}.$$

This implies that $F_\infty^{(U)} = \bigcap_u \sigma\{U_t : |t| > u\} = \{\emptyset, \Omega\}$ (the $U_t$ being Gaussian).

Now we show that $U_t$ is at most an N-ple reciprocal process. Let $u < t < v$, and let $H_{u,v}^{(X)} = \overline{\mathrm{sp}}\{X_s : s \notin (u, v)\}$. Since $U_t \perp H_\infty$ and $V_s \in H_\infty \subset H_{u,v}^{(X)}$ for every $s$, we have

$$E(U_t \mid U_s : s \notin (u, v)) = P_{H_{u,v}^{(X)} \ominus H_\infty} U_t = P_{H_{u,v}^{(X)}} U_t = P_{H_{u,v}^{(X)}} X_t - V_t;$$

by the reciprocal property of $X_t$ we get

$$E(U_t \mid U_s : s \notin (u, v)) = P_{\overline{\mathrm{sp}}\{\underline Y(u), \underline Y(v)\}} X_t - P_{H_\infty} X_t,$$

where $\underline Y(u) = (Y_1(u), \ldots, Y_N(u))$ is the process with respect to which $X_t$ is N-ple reciprocal. So we have

$$E(U_t \mid U_s : s \notin (u, v)) = A\, \underline Y(u) + B\, \underline Y(v) - P_{H_\infty} X_t.$$

Also we have

$$P_{H_\infty} X_t = P_{H_\infty} P_{H_{u,v}^{(X)}} X_t = P_{H_\infty}(A\, \underline Y(u) + B\, \underline Y(v)) = A\, P_{H_\infty} \underline Y(u) + B\, P_{H_\infty} \underline Y(v);$$

therefore we get

$$E(U_t \mid U_s : s \notin (u, v)) = A(\underline Y(u) - P_{H_\infty} \underline Y(u)) + B(\underline Y(v) - P_{H_\infty} \underline Y(v)).$$

This equation shows that $U_t$ is at most N-ple reciprocal with respect to the processes $\{Y_i(u) - P_{H_\infty} Y_i(u)\}_{i=1,\ldots,N}$. Since $F_\infty^{(U)}$ is trivial, Lemma (2.2.3) then shows that $U_t$ is at most an N-ple Markov process, which completes the proof.
Since we are assuming that the processes involved are Gaussian, and since for Gaussian processes conditional expectations are orthogonal projections onto sub-Hilbert spaces, we can write the definitions (2.1.1) and (2.2.1) in terms of projections instead of conditional expectations, as follows. For the Markov property:

$$P_{H_t^+} P_{H_t^-} = P_{H_t^-} P_{H_t^+} = P_{\Gamma_t},$$

where

$$H_t^+ = \bigcap_{\epsilon > 0} \overline{\mathrm{sp}}\{X_u : u > t - \epsilon\}, \qquad H_t^- = \bigcap_{\epsilon > 0} \overline{\mathrm{sp}}\{X_u : u < t + \epsilon\}, \qquad \Gamma_t = \overline{\mathrm{sp}}\{Y_1(t), \ldots, Y_N(t)\},$$

and $\overline{\mathrm{sp}}\{\ldots\}$ is the closure of the linear span of $\{\ldots\}$ under the norm of $L^2(\Omega, F, P)$. For the reciprocal property we have

$$P_{H_{(u,v)}^+} P_{H_{(u,v)}^-} = P_{H_{(u,v)}^-} P_{H_{(u,v)}^+} = P_{\Gamma(u,v)},$$

with

$$H_{(u,v)}^+ = \bigcap_{\epsilon>0} \overline{\mathrm{sp}}\{X_t : t \notin (u - \epsilon, v + \epsilon)\}, \qquad H_{(u,v)}^- = \bigcap_{\epsilon>0} \overline{\mathrm{sp}}\{X_t : t \in (u + \epsilon, v - \epsilon)\},$$

$$\Gamma(u,v) = \overline{\mathrm{sp}}\{Y_1(u), \ldots, Y_N(u);\ Y_1(v), \ldots, Y_N(v)\}.$$

This is the motivation for giving the definition of N-ple Markov and N-ple reciprocal properties in the case of HSO-valued processes in the next section.

2.3. HSO-valued N-ple Markov and N-ple reciprocal processes. Let $H$ and $K$ be two separable Hilbert spaces and let $X = \{X_t,\ t \in T\}$ be an HSO-valued process from $H$ into $K$, as introduced in Section (1.4). Also we assume (1.4.13).

2.3.1. Definition. Let $Y = \{Y_i(t),\ t \in T\}_{i=1,\ldots,N}$ be $N$ linearly independent HSO-valued processes in $\bigcap_n G\{X_s : |t - s| < \frac{1}{n}\}$. We say that, with respect to $Y$, $X$ is an:
(i) N-ple Markov process if

$$P_{G_t^+} P_{G_t^-} = P_{G_t^-} P_{G_t^+} = P_{\Gamma_t};$$

(ii) N-ple reciprocal process if

$$P_{G_{(u,v)}^+} P_{G_{(u,v)}^-} = P_{G_{(u,v)}^-} P_{G_{(u,v)}^+} = P_{\Gamma(u,v)},$$

where

$$G_t^+ = \bigcap_{\epsilon>0} G\{X_u : u > t - \epsilon\}, \qquad G_t^- = \bigcap_{\epsilon>0} G\{X_u : u < t + \epsilon\},$$

$$G_{(u,v)}^+ = \bigcap_{\epsilon>0} G\{X_t : t \notin (u - \epsilon, v + \epsilon)\}, \qquad G_{(u,v)}^- = \bigcap_{\epsilon>0} G\{X_t : t \in (u + \epsilon, v - \epsilon)\},$$

$$\Gamma_t = G\{Y_1(t), \ldots, Y_N(t)\}, \qquad \Gamma(u,v) = G\{Y_1(u), \ldots, Y_N(u);\ Y_1(v), \ldots, Y_N(v)\}.$$

Now we are going to establish results similar to (2.1.2), (2.1.4), (2.2.2), (2.2.3) and finally (2.2.6) for HSO-valued processes.

2.3.2. Theorem. Let $X = \{X_t : t \in T\}$ be an N-ple HSO-valued Markov process with respect to $Y = \{Y_1(t), \ldots, Y_N(t)\}$; then the process $(Y_1(t), \ldots, Y_N(t))$ is a Markov process.

Proof. Let $t > s$ be two points in $T$. Since $Y_i(u) \in G_u^+$ and $G_u^-$, $i = 1, \ldots, N$, $u \in T$, and since $M_s^{-(Y)} \subset G_s^-$, we have

$$(Y_i(t) \mid M_s^{-(Y)}) = ((Y_i(t) \mid G_s^-) \mid M_s^{-(Y)}) = ((Y_i(t) \mid \Gamma_s) \mid M_s^{-(Y)}) = (Y_i(t) \mid \Gamma_s) = (Y_i(t) \mid G\{Y_1(s), \ldots, Y_N(s)\}),$$

using the N-ple Markov property of $X$ (note $Y_i(t) \in G_s^+$ for $t > s$) and (1.4.6)(b), since $\Gamma_s \subset M_s^{-(Y)}$. This is the Markov property of $(Y_1(t), \ldots, Y_N(t))$, and the proof is complete.

2.3.3. Theorem. Let $X = \{X_t,\ t \in T\}$ be an N-ple HSO-valued Markov process with respect to $Y$, satisfying the nonsingularity condition corresponding to that of Theorem (2.1.4). Then

$$X_t = \sum_{i=1}^{N} U_i(t)\, \psi_i(t),$$

where $\psi_i(t) = \phi_i(t) A_i(t)$ is in $B(H, H)$, $i = 1, \ldots, N$, and $(U_1(t), \ldots, U_N(t))$ is an $N$-variate HSO-valued martingale. The proof parallels that of Theorem (2.1.4), using the HSO-valued martingale representation of [11].

Now we study HSO-valued reciprocal processes and give a representation for them.

2.3.5. Theorem. Let $X = \{X_t : t \in T\}$ be an N-ple HSO-valued reciprocal process with respect to $\{Y_i(t),\ t \in T\}$, $i = 1, \ldots, N$; then $(Y_1(t), \ldots, Y_N(t))$ is a reciprocal process.

Proof. Let $u < v$ and $t \in (u, v)$; then for each $i = 1, \ldots, N$,

$$(Y_i(t) \mid M_{u,v}^{+(Y)}) = ((Y_i(t) \mid G_{(u,v)}^+) \mid M_{u,v}^{+(Y)});$$

therefore by the reciprocal property of $X$ we get

$$(Y_i(t) \mid M_{u,v}^{+(Y)}) = ((Y_i(t) \mid \Gamma(u,v)) \mid M_{u,v}^{+(Y)}) = (Y_i(t) \mid \Gamma(u,v)) = (Y_i(t) \mid G\{Y_1(u), \ldots, Y_N(u);\ Y_1(v), \ldots, Y_N(v)\}),$$

and this completes the proof.

For the next theorem we need the following lemma, which states that under some conditions we get the Markov property of a reciprocal process.

2.3.6. Lemma. Let $X = \{X_t : t \in T\}$ be a Gaussian Hilbert-space-valued reciprocal process with respect to $\{Y_1(t), \ldots, Y_N(t)\}$ with $M_\infty = \bigcap_{u < v} M_{u,v}^+ = \{0\}$; then $\{X_t : t \in T\}$ is an N-ple Markov process with respect to $\{Y_1(t), \ldots, Y_N(t)\}$.

2.3.7. Theorem. Let $X = \{X_t,\ t \in T\}$ be an N-ple HSO-valued reciprocal process with respect to $\{Y_1(t), \ldots, Y_N(t)\}$. Then it has the representation $X_t = U_t + V_t$, where $U_t$ has the reciprocal property with $M_\infty^{(U)} = \{0\}$, $U_t$ is orthogonal to $V_t$, and $V_t$ is in $M_\infty^{(X)}$ for all $t$.

Proof. Let $V_t = (X_t \mid M_\infty)$ and $U_t = X_t - V_t$. We note that

$$G_{(u,v)}^{-(U)} = \bigcap_{\epsilon>0} G\{X_t - V_t : t \in (u + \epsilon, v - \epsilon)\} \subset \bigcap_{\epsilon>0} \big(G\{X_t : t \in (u + \epsilon, v - \epsilon)\} \vee G\{V_t : t \in (u + \epsilon, v - \epsilon)\}\big) = G_{(u,v)}^{-(X)} \vee G_{(u,v)}^{-(V)};$$

therefore for $u < t < v$ we have

$$(U_t \mid G_{(u,v)}^{+(U)}) = (U_t \mid G_{(u,v)}^{+(X)}) = (X_t \mid G_{(u,v)}^{+(X)}) - V_t;$$

therefore by the reciprocal property of $X$ we get

$$(U_t \mid G_{(u,v)}^{+(U)}) = (X_t \mid G\{Y_i(u), Y_i(v),\ i = 1, \ldots, N\}) - V_t = \sum_{i=1}^{N} Y_i(u) A_i + \sum_{i=1}^{N} Y_i(v) B_i - V_t$$

for some $A_i, B_i$ in $B(H, H)$ (these are functions of $u, v, t$).
On the other hand,

$$V_t = (X_t \mid M_\infty) = ((X_t \mid G_{(u,v)}^{+(X)}) \mid M_\infty) = \Big(\sum_{i=1}^{N} Y_i(u) A_i + \sum_{i=1}^{N} Y_i(v) B_i \,\Big|\, M_\infty\Big) = \sum_{i=1}^{N} (Y_i(u) \mid M_\infty) A_i + \sum_{i=1}^{N} (Y_i(v) \mid M_\infty) B_i.$$

Hence

$$(U_t \mid G_{(u,v)}^{+(U)}) = \sum_{i=1}^{N} [Y_i(u) - (Y_i(u) \mid M_\infty)] A_i + \sum_{i=1}^{N} [Y_i(v) - (Y_i(v) \mid M_\infty)] B_i.$$

This relation shows that $\{U_t,\ t \in T\}$ has the reciprocal property with respect to the processes $\{Y_i(t) - (Y_i(t) \mid M_\infty)\}_{i=1,\ldots,N}$. Since these $N$ processes may not be linearly independent, we will not get exactly an N-ple reciprocal process. Now we show that $M_\infty^{(U)} = \{0\}$. Indeed, each $U_t$ is orthogonal to $M_\infty$ while $G\{U_t : t \notin (u, v)\} \subset M_{u,v}^+$, so

$$M_\infty^{(U)} = \bigcap_{u < v} G\{U_t : t \notin (u, v)\} \subset M_\infty \ominus M_\infty = \{0\},$$

and the proof is complete.

CHAPTER 3

INFINITE ORDER MARKOV PROCESSES

Let $X = \{X_t,\ t \in \mathbb{R}\}$ be a Gaussian process and let $\{Y_u(t),\ u \in \mathbb{R}\}$ be a family of random variables in the germ field at $t$. In analogy with Definition (2.1.1), we say that $X$ is an infinite-order Markov process with respect to $\{Y_u(t),\ u \in \mathbb{R}\}$ if

$$Z_t^+ \perp Z_t^- \mid \Gamma_t,$$

where

$$Z_t^+ = \bigcap_{\epsilon>0} \sigma\{X_s : s > t - \epsilon\}, \qquad Z_t^- = \bigcap_{\epsilon>0} \sigma\{X_s : s < t + \epsilon\}, \qquad \Gamma_t = \sigma\{Y_u(t),\ u \in \mathbb{R}\}.$$

If the process is infinitely many times differentiable and all the derivatives form a splitting field, then $\{X_t^{(n)},\ n \in \mathbb{N}\}$ can serve as an example for $\{Y_u(t),\ u \in \mathbb{R}\}$.

Since in the case of Gaussian processes the conditional expectations are orthogonal projections, we note that for each $t > s$ we have

$$E(X_t \mid Z_s^-) = P_{H_s^-} X_t, \qquad \text{where } H_s^- = \bigcap_{\epsilon>0} \overline{\mathrm{sp}}\{X_u : u < s + \epsilon\}.$$

Having the infinite-order Markov property gives that $E(X_t \mid Z_s^-) = E(X_t \mid \Gamma_s)$.

3.1.7. Theorem. If $X$ is an infinite-order Markov process with respect to $\{Y_u(t),\ u \in \mathbb{R}\}$, then

$$\sigma\{Y_u(s) : u \in \mathbb{R},\ s < t\} \perp \sigma\{Y_u(s) : u \in \mathbb{R},\ s > t\} \mid \Gamma_t.$$

Proof. For each $\epsilon > 0$ we have

(3.1.8) $\qquad \sigma\{Y_u(s) : u \in \mathbb{R},\ s < t - \epsilon\} \subset \sigma\{X(s) : s < t\}, \qquad \sigma\{Y_u(s) : u \in \mathbb{R},\ s > t + \epsilon\} \subset \sigma\{X(s) : s > t\}.$

By assumption we have

$$\sigma\{X(s) : s < t\} \perp \sigma\{X(s) : s > t\} \mid \Gamma_t.$$

Therefore by (3.1.8), for each $\epsilon > 0$ we have

$$\sigma\{Y_u(s) : u \in \mathbb{R},\ s < t - \epsilon\} \perp \sigma\{Y_u(s) : u \in \mathbb{R},\ s > t + \epsilon\} \mid \Gamma_t,$$

so

$$\bigvee_{\epsilon>0} \sigma\{Y_u(s) : u \in \mathbb{R},\ s < t - \epsilon\} \perp \bigvee_{\epsilon>0} \sigma\{Y_u(s) : u \in \mathbb{R},\ s > t + \epsilon\} \mid \Gamma_t,$$

and finally

$$\sigma\{Y_u(s) : u \in \mathbb{R},\ s < t\} \perp \sigma\{Y_u(s) : u \in \mathbb{R},\ s > t\} \mid \Gamma_t.$$

This completes the proof.

By this theorem we have

$$E(Y_u(t) \mid Y_v(\tau),\ v \in \mathbb{R},\ \tau < s) = E(Y_u(t) \mid Y_v(s),\ v \in \mathbb{R}) = \int Y_v(s)\, \theta(t, s, dv),$$

for some finite Borel measure $\theta$.

Remark (1). For Markov processes and N-ple Markov processes a representation is given in [7], [8] and Theorem (2.1.4). Here a representation of infinite-order Markov processes is under consideration.

Remark (2). A generalization of the simple stationary Markov processes is the notion of T-positivity [4]. By definition, a process $X = \{X_t,\ t \in \mathbb{R}\}$ is called T-positive if for the time-reflection operator $T$ on $\overline{\mathrm{sp}}\{X_t,\ t \in \mathbb{R}\}$, given by $T1 = 1$ and $TX(t) = X(-t)$, $t \in \mathbb{R}$, we have the T-positivity property

(*) $\qquad P_+ T P_+ \ge 0,$

where $P_+$ is the projection onto $\overline{\mathrm{sp}}\{X_s : s \ge 0\}$. In the stationary Gaussian case, (*) is equivalent to

$$\sum_{v, u \in I} a_v \bar a_u\, r(t_u + t_v) \ge 0,$$

where $I$ is any finite index set and $r(\cdot)$ is the covariance function of the process. For the infinite-order stationary Markov process $X = \{X_t,\ t \in \mathbb{R}\}$, under certain conditions on $\{Y_u(t),\ u \in \mathbb{R},\ t \in \mathbb{R}\}$, we have the T-positivity of $X$.
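The T-positivity condition in Remark (2) is easy to test numerically for a given stationary covariance. The sketch below (our own illustration; the parameter and grid are hypothetical) checks that for $r(t) = e^{-a|t|}$ the matrix $[r(t_u + t_v)]$ over nonnegative times is positive semidefinite — here it is even rank one, since $e^{-a(t_u + t_v)} = e^{-a t_u} e^{-a t_v}$ — while the reciprocal covariance $r(t) = \cos(at)$ of case (II), Section 1.6, fails the test.

```python
import numpy as np

a = 0.7
t = np.array([0.0, 0.5, 1.0, 2.0, 3.0])          # nonnegative times

def min_eig(r):
    # T-positivity test of Remark (2): [r(t_u + t_v)] must be PSD.
    M = r(t[:, None] + t[None, :])
    return np.linalg.eigvalsh(M).min()

print(min_eig(lambda s: np.exp(-a * s)))          # >= 0: T-positive
print(min_eig(lambda s: np.cos(a * s)))           # < 0: not T-positive
```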
BIBLIOGRAPHY

1. Aubin, J. P. (1979). Applied Functional Analysis. John Wiley and Sons, Inc.
2. Doob, J. L. (1944). The Elementary Gaussian Processes. Ann. Math. Statist. 15, 229-282.
3. Hida, T. (1960). Canonical Representations of Gaussian Processes and their Applications. Mem. Coll. Sci. Univ. Kyoto, Ser. A, 33, 109-155.
4. Hida, T. and Streit, L. (1977). On Quantum Theory in Terms of White Noise. Nagoya Math. J. 68, 21-34.
5. Jamison, B. (1970). Reciprocal Processes: The Stationary Gaussian Case. Ann. Math. Statist. 41, 1624-1630.
6. Loeve, M. (1978). Probability Theory II, 4th Edition. Springer-Verlag, New York, Inc.
7. Mandrekar, V. (1968). On Multivariate Wide-Sense Markov Processes. Nagoya Math. J. 33, 7-12.
8. Mandrekar, V. (1974). On the Multiple Markov Property of Levy-Hida for Gaussian Processes. Nagoya Math. J. 54, 69-78.
9. Mandrekar, V. (1977). Markov Fields.
10. Mandrekar, V. (1976). Germ Field Markov Property for Multiparameter Processes. Seminaire de Probabilites X, Lecture Notes in Mathematics 511, Springer-Verlag, 78-85.
11. Mandrekar, V. and Salehi, H. (1970). Operator-Valued Wide-Sense Markov Processes and Solutions of Infinite-Dimensional Linear Differential Systems Driven by White Noise. Mathematical Systems Theory 4, No. 4.
12. Ngoc, N. and Royer, G. (1978). Markov Property of Extremal Local Fields. Proc. Amer. Math. Soc. 70, No. 2.
13. Payen, R. (1967). Fonctions Aleatoires du Second Ordre a Valeurs dans un Espace de Hilbert. Ann. Inst. Henri Poincare 3, 323-396.
14. Russek, A. Gaussian N-Markovian Processes and Stochastic Boundary Value Problems (to appear).
15. Slepian, D. (1961). First Passage Time for a Particular Gaussian Process. Ann. Math. Statist. 32, 610-612.