THE EXCURSION PROBABILITY OF GAUSSIAN AND ASYMPTOTICALLY GAUSSIAN RANDOM FIELDS By Dan Cheng A DISSERTATION Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Statistics – Doctor of Philosophy 2013 ABSTRACT THE EXCURSION PROBABILITY OF GAUSSIAN AND ASYMPTOTICALLY GAUSSIAN RANDOM FIELDS By Dan Cheng The purpose of this thesis is to develop the asymptotic approximation to excursion probability of Gaussian and asymptotically Gaussian random fields. It is composed of two parts. The first part is to study smooth Gaussian random fields. We extend the expected Euler characteristic approximation to a wide class of smooth Gaussian random fields with non-constant variances. Applying similar techniques, we also find that the joint excursion probability of vector-valued smooth Gaussian random fields can be approximated via the expected Euler characteristic of related excursion sets. As useful applications, the excursion probabilities over random intervals and infinite intervals are also investigated. The second part focuses on non-smooth Gaussian and asymptotically Gaussian random fields. We study the excursion probability of Gaussian random fields on the sphere and obtain an asymptotics based on the Pickands’ constant. Using double sum method, we also derive the approximation, which involves the generalized Pickands’ constant, to excursion probability of anisotropic Gaussian and asymptotically Gaussian random fields. Copyright by DAN CHENG 2013 ACKNOWLEDGMENTS I would like to express my sincere gratitude to my advisor Professor Yimin Xiao for his excellent guidance and continuous support during my Ph.D. study and research. He guided me not only to write this thesis but to set a career path to the future. He is extraordinarily kind to students. Besides academic communications, we chatted and shared life experience like friends. His enthusiasm on mathematics encourages me to keep working in the research. I also wish to thank Professor V. S. Mandrekar, Professor Lifeng Wang and Professor Xiaodong Wang for serving on my dissertation committee. I am grateful to the Department of Statistics and Probability and the Graduate School who provided me the assistantships, Dissertation Continuation Fellowship and Dissertation Completion Fellowship for working on the dissertation. This dissertation is also supported in part by the NSF Grant DMS-1006903. Finally, I would like to thank my family for their love and support that enable me to pursue my career goal. iv TABLE OF CONTENTS Chapter 1 Introduction and Review of Existing Literature . . . . . . . . . 1.1 Gaussian Random Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2 Excursion Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapter 2 Smooth Gaussian Random Fields with 2.1 Gaussian Fields with Stationary Increments . . . 2.2 The Mean Euler Characteristic . . . . . . . . . . 2.3 Excursion Probability . . . . . . . . . . . . . . . . 2.4 Further Remarks and Examples . . . . . . . . . . 2.5 Some Auxiliary Facts . . . . . . . . . . . . . . . . Stationary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Increments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapter 3 Smooth Gaussian Random Fields with Non-constant Variances 3.1 Gaussian Fields on Rectangles . . . . . . . . . . . . . . . . . . . . . . . . . . 
3.2 Applications for Gaussian Fields with a Unique Maximum Point of the Variance 3.3 Gaussian Fields on Manifolds without Boundary . . . . . . . . . . . . . . . . 3.4 Gaussian Fields on Convex Sets with Smooth Boundary . . . . . . . . . . . . 3.5 Gaussian Fields on Convex Sets with Piecewise Smooth Boundary . . . . . . 8 8 18 26 51 59 62 62 75 80 83 91 Chapter 4 4.1 4.2 4.3 The Expected Euler Characteristic of Non-centered Stationary Gaussian Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Preliminary Gaussian Computations . . . . . . . . . . . . . . . . . . . . . . Stationary Gaussian Fields on Rectangles . . . . . . . . . . . . . . . . . . . . Isotropic Gaussian Random Fields on Sphere . . . . . . . . . . . . . . . . . . 1 1 2 103 104 108 115 Chapter 5 5.1 5.2 Excursion Probability of Smooth Gaussian Processes over Random Intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123 Stationary Gaussian Processes . . . . . . . . . . . . . . . . . . . . . . . . . . 124 Gaussian Processes with Increasing Variance . . . . . . . . . . . . . . . . . . 142 Chapter 6 6.1 6.2 6.3 Ruin Probability of a Certain Class cesses . . . . . . . . . . . . . . . . . . . Self-similar Processes . . . . . . . . . . . . . . Integrated Fractional Brownian Motion . . . . More General Gaussian Processes . . . . . . . of Smooth Gaussian Pro. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapter 7 Excursion Probability of Gaussian Random 7.1 Notations . . . . . . . . . . . . . . . . . . . . . . . . . 7.2 Non-smooth Gaussian Fields on Sphere . . . . . . . . . 7.2.1 Locally Isotropic Gaussian Fields on Sphere . . v Fields on Sphere . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152 152 155 157 169 170 170 170 7.3 7.2.2 Standardized Spherical Fractional Brownian Motion Smooth Isotropic Gaussian Fields on Sphere . . . . . . . . 7.3.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . 7.3.2 Excursion Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176 178 179 182 Excursion Probability of Anisotropic Gaussian and Asymptotically Gaussian Random Fields . . . . . . . . . . . . . . . . . . . . Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Asymptotically Gaussian Random Fields . . . . . . . . . . . . . . . . . . . . Proof of Theorem 8.1.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Example: Standardized Fractional Brownian Sheet . . . . . . . . . . . . . . Example: Standardized Random String Processes . . . . . . . . . . . . . . . 188 188 190 205 209 212 Chapter 8 8.1 8.2 8.3 8.4 8.5 Chapter 9 Vector-valued Smooth Gaussian Random Fields . . . . . . . . . 215 9.1 Joint Excursion Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . 215 9.2 Vector-valued Gaussian Processes . . . . . . . . . . . . . . . . . . . . . . . . 239 BIBLIOGRAPHY . . . . . . . . . . . . . vi . . . . . . . . . . . . . . . . . 250 Chapter 1 Introduction and Review of Existing Literature 1.1 Gaussian Random Fields A real-valued random field is simply a stochastic process defined over a parameter space T , which could be a subset of RN or even a manifold, etc. The following is the rigorous definition [cf. Adler and Taylor (2007)]. Definition 1.1.1 Let (Ω, F, P) be a complete probability space and T a topological space. 
Then a measurable mapping X : Ω → RT (the space of all real-valued functions on T ) is called a real-valued random field. Measurable mappings from Ω to (RT )d , d ≥ 1, are called vector-valued random fields. Thus, X is a real-valued function X(ω, t), where ω ∈ Ω and t ∈ T . For convenience, usually, we abbreviate X(ω, t) as X(t) or X. We define a real-valued Gaussian (random) field to be a real-valued random field X on a parameter space T for which the finite dimensional distributions of (X(t1 ), . . . , X(tn )) are multivariate Gaussian ( i.e., multivariate Normal) for each 1 ≤ n < ∞ and each (t1 , . . . , tn ) ∈ T n . The functions m(t) = E{X(t)} and C(t, s) = E{(X(t) − m(t))(X(s) − m(s))} are called respectively the mean and covariance functions of X. If m(t) ≡ 0, we call X a centered 1 Gaussian field. A vector-valued Gaussian field X taking values in Rd is the random field for which ξ, X(t) is a real-valued Gaussian field for every ξ ∈ Rd . The following result is Theorem 1.4.1 in Adler and Taylor (2007), which gives a sufficient condition such that a Gaussian field X is continuous and bounded. Theorem 1.1.2 Let {X(t) : t ∈ T } be a centered Gaussian field, where T is a compact set of RN . If there exist positive constants K, α and η such that E{|X(t) − X(s)|2 } ≤ K| log t − s |−1−α , ∀ t − s ≤ η, then X is continuous and bounded on T with probability one. Note that the sufficient condition in the above theorem only depends on the covariance function of X. This is a huge advantage for studying centered Gaussian random fields: all of their properties only depend on the covariance structure. Similar sufficient conditions for the differentiability of Gaussian fields can also be obtained, see Chapter 1 in Adler and Taylor (2007) for more details. 1.2 Excursion Probability The excursion probability above level u > 0 is defined as P{supt∈T X(t) ≥ u}. Due to the wide applications in statistics and many other related areas, computing the excursion probability becomes a classical and very important problem in probability theory. However, usually, the exact probability is unable to obtain, instead, we try to find the asymptotic approximation as u tends to infinity. There is a classical result of Landau and Shepp (1970) and Marcus and Shepp (1972) that 2 gives a logarithmic asymptotics for the excursion probability of a general centered Gaussian process. If we assume that X(t) is a.s. bounded, then they showed that lim u−2 log P sup X(t) ≥ u = − u→∞ t∈T 1 , 2 2σT (1.2.1) 2 where σT = supt∈T Var(X(t)). We present here a non-asymptotic result due to Borell (1975) and Tsirelson, Ibragimov and Sudakov (TIS) (1976). Theorem 1.2.1 (Borell-TIS inequality). Let {X(t) : t ∈ T } be a centered Gaussian field, a.s. bounded, where T is a compact subset of RN . Then E{supt∈T X(t)} < ∞ and for all u > 0, P sup X(t) − E sup X(t) ≥ u ≤ e t∈T 2 −u2 /(2σT ) . t∈T It is evident to check that the Borell-TIS inequality implies (1.2.1). There are also several non-asymptotic bounds for the excursion probability of general (only assume continuity and boundedness a.s.) Gaussian fields, see Chapter 4 in Adler and Taylor (2007) for more details. If assume X to be stationary or locally stationary, then there is a famous approximation obtained by the double sum method. This technique was developed by Pickands (1969a, 1969b) for Gaussian processes, extended to Gaussian fields by Qualls and Watanabe (1973), and surveyed and developed in a monograph of Piterbarg (1996a). 
Theorem 1.2.2 Let T be a bounded Jordan measurable set in RN such that dim(T ) = N , and let {X(t) : t ∈ T } be a centered Gaussian field with covariance function C(·, ·) satisfying C(t, s) = 1 − t − s α (1 + o(1)) 3 as t − s → 0. Then as u → ∞, P sup X(t) ≥ u = Hα Vol(T )u2N/α Ψ(u)(1 + o(1)), (1.2.2) t∈T 2 ∞ where Hα is the Pickankds’ constant and Ψ(u) = (2π)−1/2 u e−x /2 dx. This result was developed further by Chan and Lai (2006) for Gaussian fields with a wider class of covariance structures. The coefficient Hα Vol(T ) above was generalized as T Hα (t)dt, where Hα (·) is a function on T . Moreover, the result in Chan and Lai (2006) is applicable to certain asymptotically Gaussian random fields. In Chapter 7, we investigate Gaussian random fields on the sphere and obtain Theorem 7.2.4, which is similar to Theorem 1.2.2. In Chapter 8, we extend the result in Chan and Lai (2006) to anisotropic and asymptotically anisotropic Gaussian random fields, see Theorem 8.1.1 and Theorem 8.2.6. Can we get more accurate approximation to the excursion probability of “nicer” Gaussian random fields? The answer is yes. Sun (1993) used the tube method to find the approximation for Gaussian fields with finite Karhunen-Lo`ve expansion. Also, many authors applied e the Rice method to get accurate approximations for smooth Gaussian fields, see Piterbarg (1996a), Adler (2000) and Aza¨ and Wschebor (2005, 2008, 2009), etc. Later on, these ıs approximations were conjectured by statisticians that they should have close connection to the geometry of the excursion set Au = {t ∈ T : X(t) ≥ u}. Taylor, Takemura and Adler (2005) showed the rigorous proof that the expected Euler characteristic of the excursion set, denoted by E{ϕ(Au )}, can approximate the excursion probability very accurately. Their result is stated as follows. Theorem 1.2.3 Let X = {X(t) : t ∈ T } be a unit-variance smooth Gaussian random field 4 parameterized on a manifold T . Under certain conditions on the regularity of X and topology of T , the following approximation holds: 2 P sup X(t) ≥ u = E{ϕ(Au )}(1 + o e−αu )), as u → ∞, (1.2.3) t∈T where α is some positive constant. Moreover, E{ϕ(Au )} can be computed by the Kac-Rice formula, see Adler and Taylor (2007), dim(T ) E{ϕ(Au )} = C0 Ψ(u) + 2 Cj uj−1 e−u /2 , (1.2.4) j=1 where Cj , j = 0, 1, . . . , dim(T ), are constants depending on X and T . Here is a simple example. Let X be a smooth isotropic Gaussian field with unit variance and T = [0, L]N , then N E{ϕ(Au )} = Ψ(u) + j=1 N j Lj λj/2 (2π)(j+1)/2 2 Hj−1 (u)e−u /2 , where λ = Var( ∂X (t)) and Hj−1 (u) are Hermite polynomials of order j − 1. It is worth ∂ti mentioning here that if X is not centered or not stationary, then E{ϕ(Au )} becomes complicated to compute. In the recent monograph Adler and Taylor (2007), the authors only considered centered Gaussian random fields with constant variance. In Chapter 4 here, we study non-centered stationary Gaussian fields and derive exact formulae for computing E{ϕ(Au )}. Comparing (1.2.3) and (1.2.4) with (1.2.2), we see that the approximation in (1.2.2) only 2 uses one of the terms, which involves uN −1 e−u /2 , in E{ϕ(Au )}. Also, we note that the error term in (1.2.2) is only o(1), and the expected Euler characteristic approximation in (1.2.3) is much more accurate since the error is exponentially smaller than the major term 5 E{ϕ(Au )}. The requirement of “constant variance” on the Gaussian random fields in Theorem 1.2.3 is too restrictive for many applications. 
However, the original proof in Taylor, Takemura and Adler (2005) relies on this requirement heavily. If the constant variance condition is not satisfied, little had been known on whether the approximation (1.2.3) still holds. In a recent paper Aza¨ and Wschebor (2008, Theorem 5), the authors proved (1.2.3) for a special case ıs when the variance of the Gaussian field attains its maximum only in the interior of T . But this special case excludes many important Gaussian fields in which we are interested. As a major contribution in this thesis, we shall use the Rice method to show (1.2.3) for more general smooth Gaussian fields without constant-variance. In Chapter 2, we study smooth Gaussian random fields with stationary increments and obtain the desired results in Theorem 2.3.7 and Theorem 2.3.8. Meanwhile, we provide a specific formula for computing E{ϕ(Au )} in Theorem 2.2.2. To develop the theory further, we show in Chapter 3 that the expected Euler characteristic approximation also holds for a large class of smooth Gaussian random fields with non-constant variances. When computing E{ϕ(Au )}, we also find that it can be simplified in certain sense depending on the variance function of X. As useful applications, we study the excursion probabilities of Gaussian processes over random intervals and infinite intervals in Chapter 5 and Chapter 6. The approximations we derived are also more accurate than the existing ones, since the errors are super-exponentially small. Lastly, Chapter 9 is on a new topic: the excursion probability for vector-valued Gaussian random fields. There has been little research on this. The only exceptions are Piterbarg and Stamatovic (2005) and Debicki et al. (2010) who obtained some logarithmic asymptotics, and Ladneva and Piterbarg (2000) and Anshin (2006) who obtained certain asymptotics for 6 non-smooth vector-valued Gaussian random fields with special covariance functions. Let {(X(t), Y (s)) : t ∈ T, s ∈ S} be an R2 -valued, centered, unit-variance Gaussian random field, where T and S are rectangles in RN . Define the excursion set Au (X, T ) × Au (Y, S) = {(t, s) ∈ T × S : X(t) ≥ u, Y (s) ≥ u}. We show in Theorem 9.1.9 that under certain smoothness and regularity conditions, as u → ∞, P sup X(t) ≥ u, sup Y (s) ≥ u t∈T s∈S = E{ϕ(Au (X, T ) × Au (Y, S))} + o exp − u2 − αu2 1 + ρ(T, S) . where ρ(T, S) = supt∈T,s∈S E{X(t)Y (s)}. Let {(X(t), Y (t)) : t ∈ T } be an R2 -valued, centered, unit-variance Gaussian process, where T = [a, b] is a finite interval in R. Define the excursion set Au (T, X ∧ Y ) = {t ∈ T : (X ∧ Y )(t) ≥ u}. We show in Theorem 9.2.5 that under certain smoothness and regularity conditions, as u → ∞, P{∃t ∈ T such that X(t) ≥ u, Y (t) ≥ u} = P sup(X ∧ Y )(t) ≥ u t∈T = E{ϕ(Au (T, X ∧ Y ))} + o exp where ρ(T ) = supt∈T E{X(t)Y (t)}. 7 − u2 1 + ρ(T ) − αu2 , Chapter 2 Smooth Gaussian Random Fields with Stationary Increments 2.1 Gaussian Fields with Stationary Increments Let X = {X(t) : t ∈ RN } be a real-valued centered Gaussian random field with stationary increments. We assume that X has continuous covariance function C(t, s) = E{X(t)X(s)} and X(0) = 0. Then it is known [cf. Yaglom (1957)] that C(t, s) = RN (ei t,λ − 1)(e−i s,λ − 1) F (dλ) + t, Θs (2.1.1) where x, y is the ordinary inner product in RN , Θ is an N × N non-negative definite (or positive semidefinite) matrix and F is a non-negative symmetric measure on RN \{0} which satisfies λ 2 F (dλ) < ∞. 
2 RN 1 + λ (2.1.2) Similarly to stationary random fields, the measure F and its density (if it exists) f (λ) are called the spectral measure and spectral density of X, respectively. 8 By (2.1.1) we see that X has the following stochastic integral representation X(t) = RN (ei t,λ − 1)W (dλ) + Y, t , (2.1.3) where Y is an N -dimensional Gaussian random vector and W is a complex-valued Gaussian random measure (independent of Y) with F as its control measure. It is known that many probabilistic, analytic and geometric properties of a Gaussian field with stationary increments can be described in terms of its spectral measure F and, on the other hand, various interesting Gaussian random fields can be constructed by choosing their spectral measures appropriately. See Xiao (2009), Xue and Xiao (2011) and the references therein for more information. For simplicity we assume that Y = 0. It follows from (2.1.1) that the variogram ν of X is given by ν(h) := E(X(t + h) − X(t))2 = 2 RN (1 − cos h, λ )F (dλ). (2.1.4) Mean-square directional derivatives and sample path differentiability of Gaussian random fields have been well studied. See, for example, Adler (1981), Adler and Taylor (2007), Potthoff (2010), Xue and Xiao (2011). In particular, general sufficient conditions for a Gaussian random field to have a modification whose sample functions are in C k are given by Adler and Taylor (2007). For a Gaussian random field X = {X(t) : t ∈ RN } with stationary increments, Xue and Xiao (2011) provided conditions for its sample path differentiability in terms of the spectral density function f (λ). Similar arguments can be applied to give the spectral condition for the sample functions of X to be in C k (RN ). Definition 2.1.1 [Adler and Taylor (2007, p.22)]. Let t, v1 , . . . , vk ∈ RN ; v = (v1 , . . . , vk ) ∈ ⊗k RN . 9 We say X has a kth-order L2 partial derivative at t, in the direction v, which we denote by Dv 2 X(t), if the limit L Dv 2 X(t) L := lim k i=1 hi h1 ,...,hk →0 k i=1 hi vi ) exists in L2 , where GX (t, k 1 hi vi i=1 k k (−1)k− i=1 si X t + = i=1 hi vi is the symmetrized difference k GX t, GX t, si hi vi . (2.1.5) i=1 s∈{0,1}k Remark 2.1.2 Recall the fact that a sequence of random variables ξn converges in L2 if and only if E{ξn ξm } converges to a constant as n, m → ∞. It follows immediately that Dv 2 X(t) L exists in L2 if and only if lim ˆ ˆ h1 ,...,hk ,h1 ,...,hk →0 1 k ˆ i=1 hi hi k k ˆ hi vi hi vi GX t, E GX t, i=1 (2.1.6) i=1 exists. Let e1 , e2 , . . . , eN be the standard orthonormal basis of RN . If the direction v consists of ki many ei , 1 ≤ i ≤ N , and k = N i=1 ki , then we write Dv 2 X(t) simply as L ∂ k X(t) kN k1 ∂t1 ···∂tN . Lemma 2.1.3 Let X = {X(t) : t ∈ RN } be a real-valued centered Gaussian random field with stationary increments and let k = if ∂ 2k ν(0) 2k 2k ∂t1 1 ···∂tN N N i=1 ki . Then ∂ k X(t) kN k ∂t11 ···∂tN exists in L2 if and only exists. Proof To simplify the notations, we only show the proof for k = 2 and the proof for general 10 k will be similar. By the definition of the symmetric difference GX in (2.1.5), 1 ˆ ˆ E{GX (t, h1 ei + h2 ej )GX (t, h1 ei + h2 ej )} ˆ 1 h2 ˆ h1 h2 h 1 = E{[X(t + h1 ei + h2 ej ) − X(t + h1 ei ) − X(t + h2 ej ) + X(t)] ˆ ˆ h1 h2 h1 h2 (2.1.7) ˆ ˆ ˆ ˆ × [X(t + h1 ei + h2 ej ) − X(t + h1 ei ) − X(t + h2 ej ) + X(t)]}. 
Expanding the product above and applying the variogram ν defined in (2.1.4), we obtain that (2.1.7) becomes −1 ˆ ˆ ˆ {ν(h1 ei + h2 ej − h1 ei − h2 ej ) − ν(h1 ei + h2 ej − h1 ei ) ˆ ˆ 2h1 h2 h1 h2 ˆ ˆ ˆ − ν(h1 ei + h2 ej − h2 ej ) + ν(h1 ei + h2 ej ) − ν(h1 ei − h1 ei − h2 ej ) ˆ ˆ ˆ ˆ + ν(h1 ei − h1 ei ) + ν(h1 ei − h2 ej ) − ν(h1 ei ) − ν(h2 ej − h1 ei − h2 ej ) (2.1.8) ˆ ˆ ˆ ˆ + ν(h2 ej − h1 ei ) + ν(h2 ej − h2 ej ) − ν(h2 ej ) + ν(−h1 ei − h2 ej ) ˆ ˆ − ν(−h1 ei ) − ν(−h2 ej ) + ν(0)} = −1 ˆ ˆ Gν (0, h1 ei + h2 ej + (−h1 )ei + (−h2 )ej ). ˆ ˆ 2h1 h2 (−h1 )(−h2 ) ˆ ˆ Note that as h1 , h2 , h1 , h2 → 0, the limit (if it exists) of the last term in (2.1.8) is just ∂ 4 ν(0) − 2 2 , together with Remark 2.1.2, we obtain the desired result. ∂ti ∂tj Proposition 2.1.4 Let X = {X(t) : t ∈ RN } be a real-valued centered Gaussian random field with stationary increments and let ki (1 ≤ i ≤ N ) be non-negative integers. If there is a constant ε > 0 such that N λ >1 i=1 |λi |2ki +ε F (dλ) < ∞, 11 (2.1.9) then X has a modification X such that the partial derivative N i=1 ki . RN almost surely, where k = ∂ k X(t) kN k ∂t11 ···∂tN is continuous on Moreover, ∀T > 0 and η ∈ (0, ε ∧ 1), there exists a constant κ such that ∂ k X(t) E Proof kN k ∂t11 · · · ∂tN − 2 ∂ k X(s) kN k ∂s11 · · · ∂sN ≤ κ t − s η, ∀t, s ∈ [−T, T ]N . Applying the dominated convergence theorem, ∂ 2k ν(0) 2k 2k ∂t1 1 · · · ∂tN N = RN 2k 2k λ1 1 · · · λN N F (dλ) = λ ≤1 2k 2k λ1 1 · · · λN N F (dλ) + λ 2 F (dλ) + ≤ λ ≤1 λ >1 λ >1 2k 2k λ1 1 · · · λN N F (dλ) (2.1.10) 2k 2k λ1 1 · · · λN N F (dλ) < ∞, where the last inequality is due to the requirement (2.1.2) and condition (2.1.9). By Lemma 2.1.3, the partial derivative ∂ k X(t) kN k1 ∂t1 ···∂tN exists in L2 . Next, we show that for any η ∈ (0, ε ∧ 1), there exists a constant κ such that ∂ k X(t) E kN k ∂t11 · · · ∂tN − ∂ k X(s) kN k ∂s11 · · · ∂sN 2 ≤ κ t − s η, ∀t, s ∈ [−T, T ]N . (2.1.11) Recall that C(t, s) = = RN RN ei t,λ − 1 e−i s,λ − 1 F (dλ) (2.1.12) (cos t − s, λ − cos t, λ − cos s, λ + 1)F (dλ), 12 taking the derivative gives ∂ 2k C(t, s) = kN k k ∂t11 · · · ∂tN ∂s11 · · · ∂skN 2k 2k λ1 1 · · · λN N cos t − s, λ F (dλ). RN It follows that ∂ k X(t) E k k N ∂t11 · · · ∂tN − 2 N ∂t11 · · · ∂tN =2 2 ∂ k X(s) +E k k k k N ∂s11 · · · ∂sN ∂ k X(t) =E 2 ∂ k X(s) k k N ∂s11 · · · ∂sN − 2E ∂ k X(t) k k ∂ k X(s) k k N N ∂t11 · · · ∂tN ∂s11 · · · ∂sN 2k 2k λ1 1 · · · λN N 1 − cos t − s, λ F (dλ). RN Let s0 = t, s1 = (s1 , t2 , . . . , tN ), s2 = (s1 , s2 , t3 . . . , tN ), ..., sN −1 = (s1 , . . . , sN −1 , tN ) and ˆ ˆ ˆ ˆ sN = s. Let h = s − t := (h1 , . . . , hN ). Then, by Jensen’s inequality, ˆ ∂ k X(t) E k k N ∂t11 · · · ∂tN − E j=1 k k N ∂s11 · · · ∂sN N ≤N 2 ∂ k X(s) ∂ k X(ˆj ) s kj k kj+1 − k kj−1 kj k N ∂s11 · · · ∂sj−1 ∂tj · · · ∂tN N N = 2N k N ∂s11 · · · ∂sj ∂tj+1 · · · ∂tN 2 ∂ k X(ˆj−1 ) s N j=1 R |λi |2ki F (dλ) 1 − cos(hj λj ) i=1 N (2.1.13) N ≤ 2N j=1 λ ≤1 |λi |2ki F (dλ) 1 − cos(hj λj ) i=1 N N + 2N j=1 λ >1 |λi |2ki F (dλ) 1 − cos(hj λj ) i=1 := I1 + I2 . 13 Combining the result in (2.1.10) with the elementary inequality 1 − cos x ≤ x2 yields N |hj |2 I1 ≤ 2N j=1 λ ≤1 λ 2 F (dλ) ≤ c1 t − s 2 (2.1.14) for some positive constant c1 . √ To bound the jth integral in I2 , we note that, when λ > 1, either |λj | > 1/ N or there √ is j0 = j such that λj0 > 1/ N . We break the integral according to these two possibilities. 
N λ >1 |λi |2ki F (dλ) 1 − cos(hj λj ) i=1 N ≤ √ |λj |>1/ N |λi |2ki F (dλ) 1 − cos(hj λj ) i=1 (2.1.15) N + √ j0 =j |λj |≤1,|λj0 |>1/ N |λi |2ki F (dλ) 1 − cos(hj λj ) i=1 := I3 + I4 . Combining condition (2.1.9) with the elementary inequality 1 − cos x ≤ x2 yields N 1 − cos(hj λj ) ε I3 ≤ |λj | |λi |2ki F (dλ) √ |λj |ε 1/ N <|λj |≤1/|hj | i=1 N 1 ε |λi |2ki F (dλ) + ε |λj | |λj |>1/|hj | |λj | i=1 (2.1.16) ≤ c2 |hj |ε for some positive constant c2 . Similarly, it is evident to check that I4 ≤ c3 |hj |2 for some positive constant c3 . Therefore, the H¨der condition for L2 partial derivative in (2.1.11) o holds, and then the desired result follows from Kolmogorov’s continuity theorem. 14 For simplicity we will not distinguish X from its modification X. As a consequence of Proposition 2.1.4, we see that, if X = {X(t) : t ∈ RN } has a spectral density f (λ) which satisfies f (λ) = O 1 λ as λ → ∞, N +2k+H (2.1.17) for some integer k ≥ 1 and H ∈ (0, 1), then the sample functions of X are in C k (RN ) a.s. When X(·) ∈ C 2 (RN ) almost surely, we write Denote by 2 X(t) X(t) and ∂X(t) ∂ti ∂ 2 X(t) = Xi (t) and ∂t ∂t = Xij (t). i j the column vector (X1 (t), . . . , XN (t))T and the N ×N matrix (Xij (t))i,j=1,...,N , respectively. It follows from (2.1.1) that for every t ∈ RN , λij := RN λi λj F (dλ) = ∂ 2 C(t, s) = E{Xi (t)Xj (t)}. ∂ti ∂sj s=t (2.1.18) Define the N × N matrix Λ = (λij )i,j=1,...,N , then (2.1.18) shows that Λ = Cov( X(t)) for all t. In particular, the distribution of λij (t) := RN X(t) is independent of t. Let λi λj cos t, λ F (dλ), Λ(t) := (λij (t))i,j=1,...,N . Then we have λij (t) − λij = RN λi λj (cos t, λ − 1) F (λ) = ∂ 2 C(t, s) = E{X(t)Xij (t)}, ∂ti ∂tj s=t or equivalently, Λ(t) − Λ = E{X(t) 2 X(t)}. Let T = N i=1 [ai , bi ] be a closed rectangle on RN , where ai < bi for all 1 ≤ i ≤ N and 0 ∈ T (the case of 0 ∈ T will be discussed in Remark 2.4.1). In addition to the stationary / increments, we will make use of the following conditions on X: 15 (H1). X(·) ∈ C 2 (T ) almost surely and its second derivatives satisfy the uniform mean-square H¨lder condition: there exist constants L, η > 0 such that o E(Xij (t) − Xij (s))2 ≤ L t − s 2η , ∀t, s ∈ T, i, j = 1, . . . , N. (2.1.19) (H2). For every t ∈ T , the matrix Λ − Λ(t) is non-degenerate. (H3). For every pair (t, s) ∈ T 2 with t = s, the Gaussian random vector (X(t), X(t), Xij (t), X(s), X(s), Xij (s), 1 ≤ i ≤ j ≤ N ) is non-degenerate. (H3 ). For every t ∈ T , (X(t), X(t), Xij (t), 1 ≤ i ≤ j ≤ N ) is non-degenerate. Clearly, by Proposition 2.1.4, condition (H1) is satisfied if (2.1.17) holds for k = 2. Also note that (H3) implies (H3 ). We shall use conditions (H1), (H2) and (H3) to prove Theorems 2.3.7 and 2.3.8. Condition (H3 ) will be used for computing E{ϕ(Au )} in Theorem 2.2.2. The following lemma shows that for Gaussian fields with stationary increments, (H2) is equivalent to Λ − Λ(t) being positive definite. Lemma 2.1.5 For every t = 0, Λ − Λ(t) is non-negative definite. Hence, under (H2), Λ − Λ(t) is positive definite. Proof Let t = 0 be fixed. For any (a1 , . . . , aN ) ∈ RN \{0}, N N ai aj (λij − λij (t)) = i,j=1 RN 2 ai λ i i=1 16 (1 − cos t, λ ) F (λ). (2.1.20) Since ( N 2 i=1 ai λi ) (1 − cos t, λ ) ≥ 0 for all λ ∈ RN , (2.1.20) is always non-negative, which implies Λ − Λ(t) is non-negative definite. If (H2) is satisfied, then all the eigenvalues of Λ − Λ(t) are positive. This completes the proof. 
It follows from (2.1.20) that, if the spectral measure F is carried by a set of positive Lebesgue measure (i.e., there is a set B ⊂ RN with positive Lebesgue measure such that F (B) > 0), then (H2) holds. Hence, (H2) is in fact a very mild condition for smooth Gaussian fields with stationary increments. Lemma 2.1.5 and the following two lemmas indicate some significant properties of Gaussian fields with stationary increments. They will play important roles in later sections. Lemma 2.1.6 For each t, Xi (t) and Xjk (t) are independent for all i, j, k; and E{Xij (t)Xkl (t)} is symmetric in i, j, k, l. Proof By (2.1.1), one can verify that for t, s ∈ RN , ∂ 3 C(t, s) = λi λj λk sin t − s, λ F (dλ), ∂ti ∂sj ∂sk RN ∂ 4 C(t, s) E{Xij (t)Xkl (s)} = λi λj λk λl cos t − s, λ F (dλ). = ∂ti ∂tj ∂sk ∂sl RN E{Xi (t)Xjk (s)} = Letting s = t we obtain the desired results. It follows immediately from Lemma 2.1.6 that the following result holds. Lemma 2.1.7 Let A = (aij )1≤i,j≤N be a symmetric matrix, then St (i, j, k, l) = E{(A 2 X(t)A)ij (A 2 X(t)A)kl } is a symmetric function of i, j, k, l. 17 (2.1.21) 2.2 The Mean Euler Characteristic The rectangle T = N i=1 [ai , bi ] can be decomposed into several faces of lower dimensions. We use the same notations as in Adler and Taylor (2007, p.134). A face J of dimension k, is defined by fixing a subset σ(J) ⊂ {1, . . . , N } of size k and a subset ε(J) = {εj , j ∈ σ(J)} ⊂ {0, 1}N −k of size N − k, so that / J = {t = (t1 , . . . , tN ) ∈ T : aj < tj < bj if j ∈ σ(J), tj = (1 − εj )aj + εj bj if j ∈ σ(J)}. / Denote by ∂k T the collection of all k-dimensional faces in T , then the interior of T is given ◦ by T = ∂N T and the boundary of T is given by ∂T = ∪N −1 ∪J∈∂ T J. For J ∈ ∂k T , denote k=0 k by X|J (t) and 2X |J (t) the column vector (Xi1 (t), . . . , Xik (t))T ,...,i ∈σ(J) and the k × k i1 k matrix (Xmn (t))m,n∈σ(J) , respectively. If X(·) ∈ C 2 (RN ) and it is a Morse function a.s. [cf. Definition 9.3.1 in Adler and Taylor (2007)], then according to Corollary 9.3.5 or page 211-212 in Adler and Taylor (2007), the Euler characteristic of the excursion set Au = {t ∈ T : X(t) ≥ u} is given by N k (−1)k ϕ(Au ) = k=0 J∈∂k T (−1)i µi (J) (2.2.1) i=0 with µi (J) := #{t ∈ J : X(t) ≥ u, X|J (t) = 0, index( 2 X|J (t)) = i, ε∗ Xj (t) j (2.2.2) ≥ 0 for all j ∈ σ(J)}, / where ε∗ = 2εj − 1 and the index of a matrix is defined as the number of its negative j 18 eigenvalues. We also define µi (J) := #{t ∈ J : X(t) ≥ u, X|J (t) = 0, index( 2 X|J (t)) = i}. (2.2.3) 2 2 2 Let σt = Var(X(t)) and let σT = supt∈T σt be the maximum variance. For Gaussian 2 fields with stationary increments, it follows from (2.1.4) that ν(t) = σt . For t ∈ J ∈ ∂k T , where k ≥ 1, let ΛJ = (λij )i,j∈σ(J) , ΛJ (t) = (λij (t))i,j∈σ(J) , 2 θt = Var(X(t)| X|J (t)), 2 γt = Var(X(t)| X(t)), (2.2.4) {J1 , . . . , JN −k } = {1, . . . , N }\σ(J), E(J) = {(tJ1 , . . . , tJ N −k ) ∈ RN −k : tj ε∗ ≥ 0, j = J1 , . . . , JN −k }. j Then for all t ∈ J, ΛJ = Cov( X|J (t)), ΛJ (t) − ΛJ = E{X(t) 2 X|J (t)}. (2.2.5) 2 2 2 2 Note that θt ≥ γt for all t ∈ T and θt = γt if t ∈ ∂N T . For {t} ∈ ∂0 T , then X|J (t) is 2 2 not defined, in this case we set θt as σt by convention. Let Cj (t) be the (1, j + 1) entry of (Cov(X(t), X(t)))−1 , i.e. Cj (t) = M1,j+1 /detCov(X(t), X(t)), where M1,j+1 is the cofactor of the (1, j + 1) entry, E{X(t)Xj (t)}, in the covariance matrix Cov(X(t), X(t)). k 2 2 Denote by Hk (x) the Hermite polynomial of order k, i.e., Hk (x) = (−1)k ex /2 d k (e−x /2 ). 
dx Then the following identity holds [cf. Adler and Taylor (2007, p.289)]: ∞ u 2 2 Hk (x)e−x /2 dx = Hk−1 (u)e−u /2 , 19 (2.2.6) where u > 0 and k ≥ 1. For a matrix A, |A| denotes its determinant. Let R+ = [0, ∞), 2 ∞ R− = (−∞, 0] and Ψ(u) = (2π)−1/2 u e−x /2 dx. The following lemma is an analogue of Lemma 11.7.1 in Adler and Taylor (2007). It provides a key step for computing the mean Euler characteristic in Theorem 2.2.2, meanwhile, it has close connection with Theorem 2.3.7. Lemma 2.2.1 Let X = {X(t) : t ∈ RN } be a centered Gaussian random field with stationary increments satisfying (H1), (H2) and (H3 ). Then for each J ∈ ∂k T with k ≥ 1, k (−1)i µi (J) E = i=0 Proof (−1)k u −u2 /(2θ2 ) |ΛJ − ΛJ (t)| t dt. (2.2.7) Hk−1 e k θt θt (2π)(k+1)/2 |ΛJ |1/2 J Let Di be the collection of all k × k matrices with index i. Recall the definition of µi (J) in (2.2.3), thanks to (H1) and (H3 ), we can apply the Kac-Rice metatheorem [cf. Theorem 11.2.1 or Corollary 11.2.2 in Adler and Taylor (2007)] to get that the left hand side of (2.2.7) becomes k J p X (t) (0)dt |J (−1)i E{|det 2 X|J (t)|1{ 2 X (t)∈D } 1{X(t)≥u} | X|J (t) = 0}. i |J i=0 (2.2.8) Note that on the event Di , the matrix 2X |J (t) has i negative eigenvalues, which implies (−1)i |det 2 X|J (t)| = det 2 X|J (t). Also, ∪k { 2 X|J (t) ∈ Di } = Ω a.s., hence (2.2.8) i=0 20 equals J p X (t) (0)dt E{det 2 X|J (t)1{X(t)≥u} | X|J (t) = 0} |J 2 2 e−x /(2θt ) = J (2π)(k+1)/2 |ΛJ |1/2 θt (2.2.9) ∞ dt u dx E{det 2X |J (t)|X(t) = x, X|J (t) = 0}. Now we turn to computing E{det 2 X|J (t)|X(t) = x, X|J (t) = 0}. By Lemma 2.1.5, under (H2), Λ − Λ(t) and hence ΛJ − ΛJ (t) are positive definite for every t ∈ J. Thus there exists a k × k positive definite matrix Qt such that Qt (ΛJ − ΛJ (t))Qt = Ik , (2.2.10) where Ik is the k × k identity matrix. By (2.2.5), E{X(t)(Qt 2 X|J (t)Qt )ij } = −(Qt (ΛJ − ΛJ (t))Qt )ij = −δij , where δij is the Kronecker delta function. One can write E{det(Qt 2 X|J (t)Qt )|X(t) = x, X|J (t) = 0} = E{det∆(t, x)}, (2.2.11) where ∆(t, x) = (∆ij (t, x))i,j∈σ(J) with all elements ∆ij (t, x) being Gaussian variables. To study ∆(t, x), we only need to find its mean and covariance. Note that 21 X(t) and 2 X(t) are independent by Lemma 2.1.6, then we apply Lemma 2.5.1 to obtain E{∆ij (t, x)} = E{(Qt 2 X|J (t)Qt )ij |X(t) = x, X|J (t) = 0} = (E{X(t)(Qt 2 X|J (t)Qt )ij }, 0, . . . , 0)(Cov(X(t), X|J (t)))−1 (x, 0, . . . , 0)T (2.2.12) x = (−δij , 0, . . . , 0)(Cov(X(t), X|J (t)))−1 (x, 0, . . . , 0)T = − 2 δij , θ t where the last equality comes from the fact that the (1, 1) entry of (Cov(X(t), X|J (t)))−1 2 is detCov( X|J (t))/detCov(X(t), X|J (t)) = 1/θt . For the covariance, applying Lemma 2.5.1 again gives E{(∆ij (t, x) − E{∆ij (t, x)})(∆kl (t, x) − E{∆kl (t, x)})} = E{(Qt 2 X|J (t)Qt )ij (Qt 2 X|J (t)Qt )kl } − (E{X(t)(Qt 2 X|J (t)Qt )ij }, 0, . . . , 0) · (Cov(X(t), X|J (t)))−1 (E{X(t)(Qt 2 X|J (t)Qt )kl }, 0, . . . , 0)T = St (i, j, k, l) − (−δij , 0, . . . , 0)(Cov(X(t), X|J (t)))−1 (−δkl , 0, . . . , 0)T = St (i, j, k, l) − δij δkl 2 θt , where St is a symmetric function of i, j, k, l by applying Lemma 2.1.7 with A replaced by Qt . Therefore (2.2.11) becomes E 1 det(θt Qt ( 2 X|J (t))Qt ) X(t) = x, k θt X|J (t) = 0 1 x = k E det ∆(t) − Ik θt θt where ∆(t) = (∆ij (t))i,j∈σ(J) and all ∆ij (t) are Gaussian variables satisfying E{∆ij (t)} = 0, 2 E{∆ij (t)∆kl (t)} = θt St (i, j, k, l) − δij δkl . 
22 , −k By Corollary 11.6.3 in Adler and Taylor (2007), (2.2.11) is equal to (−1)k θt Hk (x/θt ), hence E{det 2 X|J (t)|X(t) = x, X|J (t) = 0} = E{det(Q−1 Qt 2 X|J (t)Qt Q−1 )|X(t) = x, t t X|J (t) = 0} = |ΛJ − ΛJ (t)|E{det(Qt 2 X|J (t)Qt )|X(t) = x, = X|J (t) = 0} (−1)k x |ΛJ − ΛJ (t)|Hk . k θt θt Plugging this into (2.2.9) and applying (2.2.6), we obtain the desired result. Theorem 2.2.2 Let X = {X(t) : t ∈ RN } be a centered Gaussian random field with stationary increments such that (H1), (H2) and (H3 ) are fulfilled. Then N P(X(t) ≥ u, E{ϕ(Au )} = {t}∈∂0 T k=1 J∈∂k T ∞ × dt J × Hk dx ··· u E(J) (2π)k/2 |ΛJ |1/2 |ΛJ − ΛJ (t)| k N −k γt dyJ1 · · · dyJ (2.2.13) x + γt CJ1 (t)yJ1 + · · · + γt CJ (t)yJ N −k N −k γt × pX(t),X (t),...,X J J 1 Proof 1 X(t) ∈ E({t})) + N −k (t) (x, yJ1 , . . . , yJN −k | X|J (t) = 0). According to Corollary 11.3.2 in Adler and Taylor (2007), (H1) and (H3 ) imply that X is a Morse function a.s. It follows from (2.2.1) that N k (−1)k E E{ϕ(Au )} = (−1)i µi (J) . (2.2.14) i=0 k=0 J∈∂k T If J ∈ ∂0 T , say J = {t}, it turns out E{µ0 (J)} = P(X(t) ≥ u, X(t) ∈ E({t})). If J ∈ ∂k T with k ≥ 1, we apply the Kac-Rice metatheorem to obtain that the expectation on the right 23 hand side of (2.2.14) becomes k J p X (t) (0)dt |J (−1)i E{|det 2 X|J (t)|1{ 2 X (t)∈D } 1{(X (t),...,X J1 JN −k (t))∈E(J)} i |J i=0 × 1{X(t)≥u} | X|J (t) = 0} = ∞ 1 (2π)k/2 |ΛJ |1/2 J dt dx ··· u E(J) dyJ1 · · · dyJ N −k × E{det 2 X|J (t)|X(t) = x, XJ1 (t) = yJ1 , . . . , XJ N −k × pX(t),X (t),...,X J J 1 N −k (t) (x, yJ1 , . . . , yJN −k | (t) = yJ N −k , X|J (t) = 0} X|J (t) = 0). (2.2.15) For fixed t, let Qt be the positive definite matrix in (2.2.10). Then, similarly to the proof in Lemma 2.2.1, we can write E{det(Qt 2 X|J (t)Qt )|X(t) = x, XJ1 (t) = yJ1 , . . . , XJ = yJ , X|J (t) = 0} N −k N −k as E{det∆(t, x)}, where ∆(t, x) is a matrix consisting of Gaussian entries ∆ij (t, x) with mean E{(Qt 2 X|J (t)Qt )ij |X(t) = x, XJ1 (t) = yJ1 , . . . , XJ = yJ , X|J (t) = 0} N −k N −k = (−δij , 0, . . . , 0)(Cov(X(t), XJ1 (t), . . . , XJ · (x, yJ1 , . . . , yJ N −k N −k (t), X|J (t)))−1 , 0, . . . , 0)T δij 2 2 (t)yJ ), = − 2 (x + γt CJ1 (t)yJ1 + · · · + γt CJ N −k N −k γt (2.2.16) 24 and covariance E{(∆ij (t, x) − E{∆ij (t, x)})(∆kl (t, x) − E{∆kl (t, x)})} = St (i, j, k, l) − δij δkl 2 γt . Following the same procedure in the proof of Lemma 2.2.1, we obtain that the last conditional expectation in (2.2.15) is equal to x (−1)k |ΛJ − ΛJ (t)| Hk + γt CJ1 (t)yJ1 + · · · + γt CJ (t)yJ . k N −k N −k γt γt (2.2.17) Plug this into (2.2.15) and (2.2.14), yielding the desired result. Remark 2.2.3 Usually, for nonstationary (including constant-variance) Gaussian field X on RN , its mean Euler characteristic involves at least the third-order derivatives of the covariance function. For Gaussian random fields with stationary increments, as shown in Lemma 2.1.6, E{Xij (t)Xk (t)} = 0 and E{Xij (t)Xkl (t)} is symmetric in i, j, k, l, so the mean Euler characteristic becomes relatively simpler, contains only up to the second-order derivatives of the covariance function. In various practical applications, (2.2.13) could be simplified with only an exponentially smaller difference, see the discussions in Section 2.4. 25 2.3 Excursion Probability N k=0 ∂k T As in Section 3.1, we decompose T into several faces as T = = N k=0 J∈∂k T J. For each J ∈ ∂k T , define the number of extended outward maxima above level u as E Mu (J) := #{t ∈ J : X(t) ≥ u, X|J (t) = 0, index( 2 X|J (t)) = k, / ε∗ Xj (t) ≥ 0 for all j ∈ σ(J)}. 
j E In fact, Mu (J) is the same as µk (J) defined in (2.2.2) with i = k. We will make use of the following lemma. Lemma 2.3.1 Let X = {X(t) : t ∈ RN } be a Gaussian random field satisfying (H1) and (H3 ), then for any u > 0, N E {Mu (J) ≥ 1} a.s. sup X(t) ≥ u = t∈T Proof k=0 J∈∂k T E By the definition of Mu (J), it is clear that N E {Mu (J) ≥ 1} a.s. sup X(t) ≥ u ⊃ t∈T k=0 J∈∂k T Suppose supt∈T X(t) ≥ u, since X(t) ∈ C 2 (RN ) a.s., there exists t0 ∈ T such that X(t0 ) = supt∈T X(t). Without loss of generality, assume t0 ∈ J ∈ ∂k T . Note that t0 is a local maximum restricted on J, thus X|J (t0 ) = 0 and 2X |J (t0 ) is non-positive definite. Due to (H1) and (H3 ), we apply Lemma 11.2.11 in Adler and Taylor (2007) to obtain that almost surely, det( 2 X|J (t0 )) = 0 and hence index( 2 X|J (t0 )) = k. If ε∗ Xj (t0 ) < 0 for j some j ∈ σ(J), then we can find t1 ∈ T such that X(t1 ) > X(t0 ), which contradicts / 26 E X(t0 ) = supt∈T X(t). Hence ε∗ Xj (t0 ) ≥ 0 for all j ∈ σ(J). These indicate Mu (J) ≥ 1, / j therefore N E {Mu (J) ≥ 1} a.s., sup X(t) ≥ u ⊂ t∈T k=0 J∈∂k T completing the proof. It follows from Lemma 2.3.1 that N N E P{Mu (J) P sup X(t) ≥ u ≤ t∈T E E{Mu (J)}. ≥ 1} ≤ k=0 J∈∂k T (2.3.1) k=0 J∈∂k T On the other hand, by the Bonferroni inequality, N E P{Mu (J) ≥ 1} − P sup X(t) ≥ u ≥ t∈T k=0 J∈∂k T E E P{Mu (J) ≥ 1, Mu (J ) ≥ 1}. J=J E E Let pi = P{Mu (J) = i}, then P{Mu (J) ≥ 1} = ∞ i=1 pi E and E{Mu (J)} = ∞ i=1 ipi , follows that ∞ E E E{Mu (J)} − P{Mu (J) ≥ 1} = (i − 1)pi i=2 ∞ ≤ i=2 1 i(i − 1) E E pi = E{Mu (J)(Mu (J) − 1)}. 2 2 27 it E E E E Together with the obvious bound P{Mu (J) ≥ 1, Mu (J ) ≥ 1} ≤ E{Mu (J)Mu (J )}, we obtain the following lower bound for the excursion probability, N P sup X(t) ≥ u ≥ t∈T k=0 J∈∂k T 1 E E E E{Mu (J)} − E{Mu (J)(Mu (J) − 1)} 2 (2.3.2) E E E{Mu (J)Mu (J )}. − J=J Define the number of local maxima above level u as Mu (J) := #{t ∈ J : X(t) ≥ u, X|J (t) = 0, index( 2 X|J (t)) = k}, E then obviously Mu (J) ≥ Mu (J), and Mu (J) is the same as µk (J) defined in (2.2.3) with i = k. It follows similarly that N E{Mu (J)} ≥ P sup X(t) ≥ u t∈T k=0 J∈∂k T N ≥ k=0 J∈∂k T (2.3.3) 1 E{Mu (J)} − E{Mu (J)(Mu (J) − 1)} − 2 E{Mu (J)Mu (J )}. J=J We will use (2.3.1) and (2.3.2) to estimate the excursion probability for the general case, see Theorem 2.3.8. Inequalities in (2.3.3) provide another method to approximate the excursion probability in some special cases, see Theorem 2.3.7. The advantage of (2.3.3) is that the principal term induced by compared with the one induced by N k=0 N k=0 J∈∂k T J∈∂k T E{Mu (J)} is much easier to compute E E{Mu (J)}. The following two lemmas provide the estimations for the principal terms in approximating the excursion probability. 28 Lemma 2.3.2 Let X be a Gaussian field as in Theorem 2.2.2. Then for each J ∈ ∂k T with k ≥ 1, there exists some constant α > 0 such that E{Mu (J)} = 2 u −u2 /(2θ2 ) |ΛJ − ΛJ (t)| t dt(1 + o(e−αu )). Hk−1 e k (k+1)/2 |Λ |1/2 J θt θt (2π) J 1 (2.3.4) Proof Following the notations in the proof of Lemma 2.2.1, we obtain similarly that E{Mu (J)} = p X (t) (0)dt E{|det 2 X|J (t)|1{ 2 X (t)∈D } 1{X(t)≥u} | X|J (t) = 0} |J k |J J ∞ dx dt = J u 2 2 (−1)k e−x /(2θt ) (2π)(k+1)/2 |ΛJ |1/2 θt × E{det 2 X|J (t)1{ 2 X (t)∈D } |X(t) = x, k |J X|J (t) = 0}. (2.3.5) Recall 2X |J (t) = Q−1 Qt 2 X|J (t)Qt Q−1 and we can write (2.2.12) as t t E{Qt 2 X|J (t)Qt |X(t) = x, x X|J (t) = 0} = − 2 Ik . θt Make change of variables x V (t) = Qt 2 X|J (t)Qt + 2 Ik , θt where V (t) = (Vij (t))1≤i,j≤k . 
Then (V (t)|X(t) = x, X|J (t) = 0) is a Gaussian matrix whose mean is 0 and covariance is the same as that of (Qt 2 X|J (t)Qt |X(t) = x, 0). Denote the density of Gaussian vectors ((Vij (t))1≤i≤j≤k |X(t) = x, 29 X|J (t) = X|J (t) = 0) by ht (v), v = (vij )1≤i≤j≤k ∈ Rk(k+1)/2 , then E{det(Qt 2 X|J (t)Qt )1{ 2 X (t)∈D } |X(t) = x, k |J X|J (t) = 0} = E{det(Qt 2 X|J (t)Qt )1{Q 2 X (t)Q ∈D } |X(t) = x, t t k |J x = det (vij ) − 2 Ik ht (v) dv, θt v:(vij )− x Ik ∈Dk X|J (t) = 0} (2.3.6) 2 θt 2 where (vij ) is the abbreviation of matrix (vij )1≤i,j≤k . Since {θt : t ∈ T } is bounded, there exists a constant c > 0 such that x (vij ) − 2 Ik ∈ Dk , θt k 2 vij ∀ (vij ) := 1/2 < i,j=1 x . c Thus we can write (2.3.6) as Rk(k+1)/2 x det (vij ) − 2 Ik ht (v)dv − det (vij ) − θt v:(vij )− x Ik ∈Dk 2 / θ x I ht (v) dv 2 k θt t = E{det(Qt 2 X|J (t)Qt )|X(t) = x, X|J (t) = 0} + Z(t, x), (2.3.7) where Z(t, x) is the second integral in the first line of (2.3.7) and it satisfies |Z(t, x)| ≤ x det (vij ) − 2 Ik θt (vij ) ≥ x c ht (v)dv. Denote by G(t) the covariance matrix of ((Vij (t))1≤i≤j≤k |X(t) = x, X|J (t) = 0), then by Lemma 2.5.2 in the Appendix, the eigenvalues of G(t) and hence those of (G(t))−1 are bounded for all t ∈ T . It follows that there exists some constant α > 0 such that 30 ht (v) = o(e −α (vij ) 2 ) 2 and hence |Z(t, x)| = o(e−αx ) for some constant α > 0 uniformly for all t ∈ T . Combine this with (2.3.5), (2.3.6), (2.3.7) and the proof of Lemma 2.2.1, yielding the result. Lemma 2.3.3 Let X be a Gaussian field as in Theorem 2.2.2. Then for each J ∈ ∂k T with k ≥ 1, there exists some constant α > 0 such that E E{Mu (J)} = ∞ 1 dx · · · dyJ1 · · · dyJ N −k (2π)k/2 |ΛJ |1/2 J u E(J) x |Λ − Λ (t)| Hk + γt CJ1 (t)yJ1 + · · · + γt CJ (t)yJ × J kJ N −k N −k γt γt dt × pX(t),X (t),...,X J J 1 N −k (t) (x, yJ1 , . . . , yJN −k | 2 X|J (t) = 0)(1 + o(e−αu )). (2.3.8) Proof Under the notations in the proof of Theorem 2.2.2, applying the Kac-Rice formula, E we see that E{Mu (J)} equals p X (t) (0)dt E{|det 2 X|J (t)|1{ 2 X (t)∈D } 1{X(t)≥u} |J k |J J × 1{(X (t),··· ,X J J 1 = N −k (−1)k (2π)k/2 |ΛJ |1/2 J (t))∈E(J)} | X|J (t) = 0} ∞ dt ··· dx u E(J) dyJ1 · · · dyJ N −k E{det 2 X|J (t)1{ 2 X (t)∈D } |X(t) = x, XJ1 (t) = yJ1 , · · · , XJ (t) = yJ , N −k N −k k |J X|J (t) = 0}pX(t),X (t),··· ,X J J 1 N −k (t) (x, yJ1 , · · · 31 , yJ N −k | X|J (t) = 0). Recall 2X |J (t) = Q−1 Qt 2 X|J (t)Qt Q−1 and we can write (2.2.16) as t t E{Qt 2 X|J (t)Qt |X(t) = x, XJ1 (t) = yJ1 , · · · , XJ (t) = yJ , X|J (t) = 0} N −k N −k =− x Ik . (t)yJ + CJ1 (t)yJ1 + · · · + CJ 2 N −k N −k γt Make change of variables x W (t) = Qt 2 X|J (t)Qt + 2 Ik , γt where W (t) = (Wij (t))1≤i,j≤k . Denote the density of ((Wij (t))1≤i≤j≤k |X(t) = x, XJ1 (t) = yJ1 , · · · , XJ (t) = yJ , X|J (t) = 0) N −k N −k (w), w = (wij )1≤i≤j≤k ∈ Rk(k+1)/2 . Similarly to the proof in Lemma by ft,yJ ,··· ,yJ 1 N −k 2.3.2, to estimate X(t)=x, X|J (t)=0, E det 2 X|J (t)1{ 2 X (t)∈D } X (t)=y ,··· ,X , J1 J1 JN −k (t)=yJN −k k |J 32 we will get an expression similar to (2.3.7) with Z(t, x) replaced by Z(t, x, yJ1 , · · · , yJ ). 
N −k Then, similarly, we have I(t, x) := ··· E(J) dyJ1 · · · dyJ p (x, yJ1 , · · · N −k X(t),XJ1 (t),··· ,XJN −k (t) , yJ | , yJ | N −k | X|J (t) = 0)|Z(t, x, yJ1 , · · · , yJ )| N −k ≤ ··· E(J) dyJ1 · · · dyJ | X|J (t) = 0)| p (x, yJ1 , · · · N −k X(t),XJ1 (t),··· ,XJN −k (t) x det (wij ) − 2 Ik γt (wij ) ≥ x c ≤ pX(t) (x| X|J (t) = 0) N −k (w)dw ft,yJ ,··· ,yJ 1 N −k x det (wij ) − 2 Ik γt (wij ) ≥ x c ft (w)dw, where the last inequality comes from replacing the integral region E(J) by RN −k , and ft (w) is the density of ((Wij (t))1≤i≤j≤k |X(t) = x, X|J (t) = 0). Hence by the same discussions 2 −αu2 −u2 /(2σT ) in the proof of Lemma 2.3.2, I(t, x) = o(e ) uniformly for all t ∈ T and some constant α > 0. Combining the proofs of Lemma 2.3.2 and Theorem 2.2.2, we obtain the result. We call a function h(u) super-exponentially small (when compared with P(supt∈T X(t) ≥ u)), if there exists a constant α > 0 such that h(u) = o(e 2 −αu2 −u2 /(2σT ) ) as u → ∞. The following lemma is Lemma 4 in Piterbarg (1996b). It shows that the factorial moments are usually super-exponentially small. Lemma 2.3.4 Let {X(t) : t ∈ RN } be a centered Gaussian field satisfying (H1) and (H3). Then for any ε > 0, there exists ε1 > 0 such that for any J ∈ ∂k T and u large enouth, E{Mu (J)(Mu (J) − 1)} ≤ e 2 −u2 /(2βJ +ε) 33 +e 2 −u2 /(2σJ −ε1 ) , 2 2 where βJ = supt∈J supe∈Sk−1 Var(X(t)| X|J (t), 2 X|J (t)e) and σJ = supt∈J Var(X(t)). Here Sk−1 is the (k − 1)-dimensional unit sphere. Corollary 2.3.5 Let X = {X(t) : t ∈ RN } be a centered Gaussian random field with stationary increments satisfying (H1), (H2) and (H3). Then for all J ∈ ∂k T , E{Mu (J)(Mu (J)− E E 1)} and E{Mu (J)(Mu (J) − 1)} are super-exponentially small. E Proof Since Mu (J) ≤ Mu (J), we only need to show that E{Mu (J)(Mu (J) − 1)} is super- exponentially small. If k = 0, then Mu (J) is either 0 or 1 and hence E{Mu (J)(Mu (J)−1)} = 2 2 0. If k ≥ 1, then, thanks to Lemma 2.3.4, it suffices to show that βJ is strictly less than σT . 2 Clearly, Var(X(t)| X|J (t), 2 X|J (t)e) ≤ σT . Applying Lemma 2.5.1 yields that 2 Var(X(t)| X|J (t), 2 X|J (t)e) = σT ⇒ E{X(t)( 2 X|J (t)e)} = 0. Note that the right hand side above is equivalent to (ΛJ (t) − ΛJ )e = 0. By (H2), ΛJ (t) − ΛJ is negative definite, which implies (ΛJ (t) − ΛJ )e = 0 for all e ∈ Sk−1 , so that 2 sup Var(X(t)| |J X(t), 2 X(t)e) < σT . |J e∈Sk−1 2 2 Therefore βJ < σT by continuity. The following lemma shows that the cross terms in (2.3.2) and (2.3.3) are super-exponentially small if the two faces are not adjacent. For the case when the faces are adjacent, the proof is more technical, see the proofs in Theorems 2.3.7 and 2.3.8. Lemma 2.3.6 Let X = {X(t) : t ∈ RN } be a centered Gaussian random field with stationary increments satisfying (H1) and (H3). Let J and J be two faces of T such that their 34 distance is positive, i.e., inf t∈J,s∈J s − t > δ0 for some δ0 > 0, then E{Mu (J)Mu (J )} is super-exponentially small. Proof We first consider the case when dim(J) = k ≥ 1 and dim(J ) = k ≥ 1. By the Kac-Rice metatheorem for higher moments (the proof is the same as that of Theorem 11.5.1 in Adler and Taylor (2007)), E{Mu (J)Mu (J )} = dt J J ds E{|det 2 X|J (t)||det 2 X|J (s)|1{X(t)≥u,X(s)≥u} × 1{ 2 X (t)∈D , 2 X (s)∈D } |X(t) = x, X(s) = y, X|J (t) = 0, k |J |J k X|J (s) = 0}pX(t),X(s), X (t), X (s) (x, y, 0, 0) |J |J ∞ ∞ ≤ J J dx ds dt u u dy E{|det 2 X|J (t)||det 2 X|J (s)| |X(t) = x, X(s) = y, X|J (t) = 0, X|J (s) = 0}pX(t),X(s) (x, y) × p X (t), X (s) (0, 0|X(t) = x, X(s) = y). 
|J |J (2.3.9) Note that the following two inequalities hold: for constants ai and bj , k k |ai | i=1 |bj | ≤ j=1 1 k+k k k |ai |k+k + i=1 |bj |k+k ; j=1 and for any Gaussian variable ξ and positive integer l, E|ξ|l ≤ E(|Eξ| + |ξ − Eξ|)l ≤ 2l (|Eξ|l + E|ξ − Eξ|l ) ≤ 2l (|Eξ|l + Cl (Var(ξ))l/2 ), 35 where the constant Cl depends only on l. Combining these two inequalities with Lemma 2.5.1, we get that there exist some positive constants C1 and N1 such that for large x and y, sup t∈J,s∈J E{|det 2 X|J (t)||det 2 X|J (s)||X(t) = x, X(s) = y, X|J (t) = 0, X|J (s) = 0} ≤ C1 xN1 y N1 . (2.3.10) Also, there exists a positive constant C2 such that sup p X (t), X (s) (0, 0|X(t) = x, X(s) = y) |J |J t∈J,s∈J ≤ sup (2π)−(k+k )/2 [detCov( X|J (t), X|J (s)|X(t) = x, X(s) = y)]−1/2 ≤ C2 . t∈J,s∈J (2.3.11) Let ρ(δ0 ) = sup s−t >δ 0 E{X(t)X(s)} σt σs which is strictly less than 1 due to (H3), then ∀ε > 0, there exists a positive constant C3 such that for all t ∈ J, s ∈ J and u large enough, ∞ u ∞ u ≤ xN1 y N1 pX(t),X(s) (x, y)dxdy = E{[X(t)X(s)]N1 1{X(t)≥u,X(s)≥u} } E{[X(t) + X(s)]2N1 1{X(t)+X(s)≥2u} } ≤ C3 exp εu2 u2 . − 2 (1 + ρ(δ0 ))σT (2.3.12) E E Combine (2.3.9) with (2.3.10), (2.3.11) and (2.3.12), yielding that E{Mu (J)Mu (J )} is super-exponentially small. 36 When only one of the faces, say J, is a singleton, then let J = {t0 } and we have ∞ E{Mu (J)Mu (J )} ≤ ds J ∞ dx u × E{|det u 2X |J dy pX(t ),X(s), X (s) (x, y, 0) 0 |J (2.3.13) (s)||X(t0 ) = x, X(s) = y, X|J (s) = 0}. Following the previous discussions yields that E{Mu (J)Mu (J )} is super-exponentially small. Finally, if both J and J are singletons, then E{Mu (J)Mu (J )} becomes the joint probability of two Gaussian variables exceeding level u and hence is trivial. Theorem 2.3.7 Let X = {X(t) : t ∈ RN } be a centered Gaussian random field with stationary increments such that (H1), (H2) and (H3) are fulfilled. Suppose that for any face J, 2 {t ∈ J : ν(t) = σT , νj (t) = 0 for some j ∈ σ(J)} = ∅. / (2.3.14) Then there exists some constant α > 0 such that N 2 −αu2 −u2 /(2σT ) P sup X(t) ≥ u = t∈T E{Mu (J)} + o(e ) k=0 J∈∂k T = {t}∈∂0 T × u Ψ + σt N 1 k=1 J∈∂k T (2π)(k+1)/2 |Λ 1/2 J| 2 2 2 |ΛJ − ΛJ (t)| u −u2 /(2θ2 ) t dt + o(e−αu −u /(2σT ) ). Hk−1 e k θt θt J (2.3.15) Proof Since the second equality in (2.3.15) follows from Lemma 2.3.2 directly, we only need to prove the first one. By (2.3.3) and Corollary 2.3.5, it suffices to show that the last term in (2.3.3) is super-exponentially small. Thanks to Lemma 2.3.6, we only need to consider 37 ¯ ¯ the case when the distance of J and J is 0, or I := J ∩ J = ∅. Without loss of generality, assume σ(J) = {1, . . . , m, m + 1, . . . , k}, σ(J ) = {1, . . . , m, k + 1, . . . , k + k − m}, (2.3.16) where 0 ≤ m ≤ k ≤ k ≤ N and k ≥ 1. If k = 0, we consider σ(J) = ∅ by convention. Under such assumption, J ∈ ∂k T , J ∈ ∂k T and dim(I) = m. 2 Case 1: k = 0, i.e. J is a singleton, say J = {t0 }. If ν(t0 ) < σT , then by (2.3.13), it is trivial to show that E{Mu (J)Mu (J )} is super-exponentially small. Now we consider 2 the case ν(t0 ) = σT . Due to (2.3.14), E{X(t0 )X1 (t0 )} = 0 and hence by continuity, there exists δ > 0 such that E{X(s)X1 (s)} = 0 for all s − t0 ≤ δ. It follows from (2.3.13) that E{Mu (J)Mu (J )} is bounded from above by ∞ ∞ dx ds u u s∈J : s−t0 >δ dy E{|det 2 X|J (s)||X(t0 ) = x, X(s) = y, X|J (s) = 0} × pX(t ),X(s), X (s) (x, y, 0) 0 |J ∞ ds + s∈J : s−t0 ≤δ u dy E{|det 2 X|J (s)||X(s) = y, X|J (s) = 0}pX(s), X (s) (y, 0) |J := I1 + I2 . 
Following the proof of Lemma 2.3.6 yields that I1 is super-exponentially small. We apply Lemma 2.5.1 to obtain that there exists ε0 > 0 such that sup s∈J : s−t0 ≤δ Var(X(s)| X|J (s)) ≤ sup s∈J : s−t0 ≤δ 2 Var(X(s)|X1 (s)) ≤ σT − ε0 . Then I2 and hence E{Mu (J)Mu (J )} are super-exponentially small. 38 2 Case 2: k ≥ 1. For all t ∈ I with ν(t) = σT , by assumption (2.3.14), E{X(t)Xi (t)} = 0, ∀ i = m + 1, . . . , k + k − m. Note that I is a compact set, by Lemma 2.5.1 and the uniform continuity of conditional variance, there exist ε1 , δ1 > 0 such that sup t∈B,s∈B 2 Var(X(t)|Xm+1 (t), . . . , Xk (t), Xk+1 (s), . . . , Xk+k −m (s)) ≤ σT − ε1 , (2.3.17) where B = {t ∈ J : dist(t, I) ≤ δ1 } and B = {s ∈ J : dist(s, I) ≤ δ1 }. It follows from (2.3.9) that E{Mu (J)Mu (J )} is bounded by ∞ dtds (J×J )\(B×B ) ∞ dx u u dy pX(t),X(s), X (t), X (s) (x, y, 0, 0) |J |J × E{|det 2 X|J (t)||det 2 X|J (s)||X(t) = x, X(s) = y, X|J (t) = 0, X|J (s) = 0} ∞ dtds + B×B u dx pX(t) (x| X|J (t) = 0, X|J (s) = 0)p X (t), X (s) (0, 0) |J |J × E{|det 2 X|J (t)||det 2 X|J (s)||X(t) = x, X|J (t) = 0, X|J (s) = 0} := I3 + I4 . Note that (J × J )\(B × B ) = (J\B) × B B × (J\B) (J\B) × (J\B) . (2.3.18) Since each product set on the right hand side of (2.3.18) consists of two sets with positive distance, following the proof of Lemma 2.3.6 yields that I3 is super-exponentially small. For I4 , taking into account (2.3.17), one has sup t∈B,s∈B 2 Var X(t)| X|J (t), X|J (s) ≤ σT − ε1 . 39 (2.3.19) To estimate p X (t), X (s) (0, 0) = (2π)−(k+k )/2 (detCov( X|J (t), X|J (s)))−1/2 , |J |J (2.3.20) we write the determinant on the right hand side of (2.3.20) as detCov(Xm+1 (t), . . . , Xk (t), Xk+1 (s), . . . , Xk+k −1 (s)|X1 (t), . . . , Xm (t), X1 (s), . . . , Xm (s)) × detCov(X1 (t), . . . , Xm (t), X1 (s), . . . , Xm (s)), (2.3.21) where the first determinant in (2.3.21) is bounded away from zero due to (H3). By (H1), as shown in Piterbarg (1996b), applying Taylor’s formula, we can write X(s) = X(t) + 2 X(t)(s − t)T + s − t 1+η Yt,s , (2.3.22) 1 N where Yt,s = (Yt,s , . . . , Yt,s )T is a Gaussian vector field with bounded variance uniformly for all t ∈ J, s ∈ J . Hence as s − t → 0, the second determinant in (2.3.21) becomes detCov(X1 (t), . . . , Xm (t), X1 (t) + Xm (t) + 1 X1 (t), s − t + s − t 1+η Yt,s , . . . , m Xm (t), s − t + s − t 1+η Yt,s ) = detCov(X1 (t), . . . , Xm (t), 1 X1 (t), s − t + s − t 1+η Yt,s , . . . , m Xm (t), s − t + s − t 1+η Yt,s ) = s − t 2m detCov(X1 (t), . . . , Xm (t), X1 (t), et,s , . . . , Xm (t), et,s )(1 + o(1)), (2.3.23) 40 where et,s = (s − t)T / s − t and due to (H3), the last determinant in (2.3.23) is bounded away from zero uniformly for all t ∈ J and s ∈ J . It then follows from (2.3.21) and (2.3.23) that detCov( X|J (t), X|J (s)) ≥ C1 s − t 2m (2.3.24) for some constant C1 > 0. Similarly to (2.3.10), there exist constants C2 , N1 > 0 such that sup t∈J,s∈J E{|det 2 X|J (t)||det 2 X|J (s)||X(t) = x, X|J (t) = 0, X|J (s) = 0} (2.3.25) ≤ C2 (1 + xN1 ). Combining (2.3.19) with (2.3.20), (2.3.24) and (2.3.25), and noting that m < k implies 1/ s − t m is integrable on J × J , we conclude that I4 and hence E{Mu (J)Mu (J )} are finite and super-exponentially small. Theorem 2.3.8 Let X = {X(t) : t ∈ RN } be a centered Gaussian random field with stationary increments such that (H1), (H2) and (H3) are fulfilled. 
Then there exists some constant α > 0 such that N E E{Mu (J)} + o(e P sup X(t) ≥ u = t∈T 2 −αu2 −u2 /(2σT ) k=0 J∈∂k T ) (2.3.26) 2 −αu2 −u2 /(2σT ) = E{ϕ(Au )} + o(e ), where E{ϕ(Au )} is formulated in Theorem 2.2.2. It is worth mentioning here that the main idea for the proof of Theorem 2.3.8 comes from Aza¨ and Delmas (2002) (especially Theorem 4). Before showing the proof, we list the ıs 41 following two lemmas. Lemma 2.3.9 Under (H2), there exists a constant α0 > 0 such that e, (Λ − Λ(t))e ≥ α0 , ∀ t ∈ T, e ∈ SN −1 . Proof Let MN ×N be the set of all N ×N matrices. Define a mapping φ : RN ×MN ×N → R by (ξ, A) → ξ, Aξ , then φ is continuous. Since Λ−Λ(t) is positive definite, φ(e, Λ−Λ(t)) > 0 for each t ∈ T and e ∈ SN −1 . On the other hand, {(e, Λ − Λ(t)) : t ∈ T, e ∈ SN −1 } is a compact subset of RN × MN ×N and φ is continuous, completing the proof. Lemma 2.3.10 Let {ξ1 (t) : t ∈ T1 } and {ξ2 (t) : t ∈ T2 } be two Gaussian random fields. Let 2 σi (t) = Var(ξi (t)), ρ(t, s) = E{ξ1 (t)ξ2 (s)} , σ1 (t)σ2 (s) σ i = sup σi (t), t∈Ti ρ= sup t∈T1 ,s∈T2 σ i = inf σi (t), ρ(t, s), t∈Ti ρ= inf t∈T1 ,s∈T2 ρ(t, s), and assume 0 < σ i ≤ σ i < ∞, where i = 1, 2. If 0 < ρ ≤ ρ < 1, then for any N1 , N2 > 0, there exists some α > 0 such that as u → ∞, sup t∈T1 ,s∈T2 2 2 2 E{(1 + |ξ1 (t)|N1 + |ξ2 (s)|N2 )1{ξ (t)≥u,ξ (s)<0} } = o(e−αu −u /(2σ1 ) ). 1 2 Similarly, if −1 < ρ ≤ ρ < 0, then sup t∈T1 ,s∈T2 2 2 2 E{(1 + |ξ1 (t)|N1 + |ξ2 (s)|N2 )1{ξ (t)≥u,ξ (s)>0} } = o(e−αu −u /(2σ1 ) ). 1 2 42 Proof We only prove the first case, since the second case follows from the first one. By elementary computation on the joint density of ξ1 (t) and ξ2 (s), we obtain sup t∈T1 ,s∈T2 ≤ E{(1 + |ξ1 (t)|N1 + |ξ2 (s)|N2 )1{ξ (t)≥u,ξ (s)<0} } 1 2 ∞ 1 2πσ 1 σ 2 (1 − ρ2 )1/2 u 0 −∞ (1 + |x1 |N1 = o exp − + |x2 exp |N2 ) exp x2 − 12 2σ 1 dx1 σ 2 ρx1 2 1 x2 − − 2 dx2 σ1 2σ 2 (1 − ρ2 ) σ 2 ρ 2 u2 u2 − 2 2 + εu2 2 2 )σ 2 2σ 1 2σ 2 (1 − ρ 1 , as u → ∞, for any ε > 0. Proof of Theorem 2.3.8 Note that the second equality in (2.3.26) follows from Theorem 2.2.2 and Lemma 2.3.3, and similarly to the proof in Theorem 2.3.7, we only need to show E E that E{Mu (J)Mu (J )} is super-exponentially small when J and J are neighboring. Let ¯ ¯ I := J ∩ J = ∅. We follow the assumptions in (2.3.16) and assume also that all elements in ε(J) and ε(J ) are 1, which implies E(J) = RN −k and E(J ) = RN −k . + + 43 We first consider the case k ≥ 1. By the Kac-Rice metatheorem, E{Mu (J)Mu (J )} is bounded from above by ∞ dt J ds J ∞ dx u ∞ dy u 0 ∞ dzk+1 · · · 0 ∞ dzk+k −m 0 ∞ dwm+1 · · · 0 dwk E{ det 2 X|J (t)||det 2 X|J (s)||X(t) = x, X(s) = y, X|J (t) = 0, Xk+1 (t) = zk+1 , . . . , Xk+k −m (t) = zk+k −m , X|J (s) = 0, Xm+1 (s) = wm+1 , . . . , Xk (s) = wk } × pt,s (x, y, 0, zk+1 , . . . , zk+k −m , 0, wm+1 , . . . , wk ) := A(t, s) dtds, J×J (2.3.27) where pt,s (x, y, 0, zk+1 , . . . , zk+k −m , 0, wm+1 , . . . , wk ) is the density of (X(t), X(s), X|J (t), Xk+1 (t), . . . , Xk+k −m (t), X|J (s), Xm+1 (s), . . . , Xk (s)) evaluated at (x, y, 0, zk+1 , . . . , zk+k −m , 0, wm+1 , . . . , wk ). Let {e1 , e2 , . . . , eN } be the standard orthonormal basis of RN . For t ∈ J and s ∈ J , let et,s = (s − t)T / s − t and let αi (t, s) = ei , (Λ − Λ(t))et,s , then N (Λ − Λ(t))et,s = N ei , (Λ − Λ(t))et,s ei = i=1 αi (t, s)ei . (2.3.28) i=1 By Lemma 2.3.9, there exists some α0 > 0 such that et,s , (Λ − Λ(t))et,s ≥ α0 44 (2.3.29) for all t and s. 
Under the assumptions (2.3.16) and that all elements in ε(J) and ε(J ) are 1, we have the following representation, t = (t1 , . . . , tm , tm+1 , . . . , tk , bk+1 , . . . , bk+k −m , 0, . . . , 0), s = (s1 , . . . , sm , bm+1 , . . . , bk , sk+1 , . . . , sk+k −m , 0, . . . , 0), where ti ∈ (ai , bi ) for all i ∈ σ(J) and sj ∈ (aj , bj ) for all j ∈ σ(J ). Therefore, ei , et,s ≥ 0, ∀ m + 1 ≤ i ≤ k, ei , et,s ≤ 0, ∀ k + 1 ≤ i ≤ k + k − m, ei , et,s = 0, ∀ k + k − m < i ≤ N. (2.3.30) Let Di = {(t, s) ∈ J × J : αi (t, s) ≥ βi }, Di = {(t, s) ∈ J × J : αi (t, s) ≤ −βi }, if m + 1 ≤ i ≤ k, if k + 1 ≤ i ≤ k + k − m, (2.3.31) m D0 = αi (t, s) ei , et,s ≥ β0 , (t, s) ∈ J × J : i=1 where β0 , β1 , . . . , βk+k −m are positive constants such that β0 + k+k −m i=m+1 βi < α0 . It follows from (2.3.30) and (2.3.31) that, if (t, s) does not belong to any of D0 , Dm , . . . , Dk+k −m , then by (2.3.28), k+k −m N (Λ − Λ(t))et,s , et,s = αi (t, s) ei , et,s ≤ β0 + i=1 βi < α0 , i=m+1 45 which contradicts (2.3.29). Thus D0 ∪ ∪k+k −m Di is a covering of J × J , by (2.3.27), i=m+1 k+k −m E E E{Mu (J)Mu (J We first show that )} ≤ A(t, s) dtds + D0 D0 A(t, s) dtds A(t, s) dtds. i=m+1 Di is super-exponentially small. Similarly to the proof of Theorem 2.3.7, applying (2.3.20), (2.3.24) and (2.3.25), we obtain A(t, s) dtds D0 ∞ ≤ dtds u D0 dx p X (t), X (s) (0, 0)pX(t) (x| X|J (t) = 0, X|J (s) = 0) |J |J × E{|det 2 X|J (t)||det 2 X|J (s)||X(t) = x, ∞ ≤ C1 dtds D0 u X|J (t) = 0, X|J (s) = 0} dx(1 + xN1 ) s − t −m pX(t) (x| X|J (t) = 0, X|J (s) = 0), (2.3.32) for some positive constants C1 and N1 . Due to Lemma 2.3.6, we only need to consider the case when s − t is small. It follows from Taylor’s formula (2.3.22) that as s − t → 0, Var(X(t)| X|J (t), X|J (s)) ≤ Var(X(t)|X1 (t), . . . , Xm (t), X1 (s), . . . , Xm (s)) 1 X1 (t), s − t + s − t 1+η Yt,s , . . . , = Var(X(t)|X1 (t), . . . , Xm (t), X1 (t) + Xm (t) + m Xm (t), s − t + s − t 1+η Yt,s ) = Var(X(t)|X1 (t), . . . , Xm (t), 1 X1 (t), et,s + s − t η Yt,s , . . . , m Xm (t), et,s + s − t η Yt,s ) ≤ Var(X(t)| 1 X1 (t), et,s + s − t η Yt,s , . . . , = Var(X(t)| X1 (t), et,s , . . . , m Xm (t), et,s + s − t η Yt,s ) Xm (t), et,s ) + o(1). 46 (2.3.33) By Lemma 2.5.2, the eigenvalues of [Cov( X1 (t), et,s , . . . , Xm (t), et,s )]−1 are bounded Xi (t), et,s } = −αi (t, s). Applying these facts and uniformly in t and s. Note that E{X(t) Lemma 2.5.1 to the last line of (2.3.33), we see that there exist constants C2 > 0 and ε0 > 0 such that for s − t sufficiently small, m Var(X(t)| X|J (t), X|J (s)) ≤ 2 σT 2 2 αi (t, s) + o(1) < σT − ε0 , − C2 (2.3.34) i=1 where the last inequality is due to the fact that (t, s) ∈ D0 implies m m 2 αi (t, s) 2 αi (t, s)| ≥ ei , et,s i=1 i=1 |2 1 ≥ m m 2 αi (t, s) et,s , ei i=1 2 β0 ≥ . m Plugging (2.3.34) into (2.3.32) and noting that 1/ s−t m is integrable on J ×J , we conclude that D0 A(t, s) dtds is finite and super-exponentially small. Next we show that Di follows from (2.3.27) that Di dx u Di A(t, s) dtds is bounded by ∞ ∞ dtds A(t, s) dtds is super-exponentially small for i = m + 1, . . . , k. It 0 dwi pX(t), X (t),X (s), X (s) (x, 0, wi , 0) i |J |J × E{|det 2 X|J (t)||det 2 X|J (s)||X(t) = x, X|J (t) = 0, Xi (s) = wi , X|J (s) = 0}. 
(2.3.35) We can write pX(t),X (s) (x, wi |Xi (t) = 0) = i 1 2πσ1 (t)σ2 (t, s)(1 − ρ2 (t, s))1/2 × exp − w2 1 x2 2ρ(t, s)xwi + 2 i − 2 (t, s)) σ 2 (t) 2(1 − ρ σ2 (t, s) σ1 (t)σ2 (t, s) 1 47 , where 2 σ1 (t) = Var(X(t)|Xi (t) = 0), ρ(t, s) = 2 σ2 (t, s) = Var(Xi (s)|Xi (t) = 0) = E{X(t)Xi (s)|Xi (t) = 0} , σ1 (t)σ2 (t, s) detCov(Xi (s), Xi (t)) , λii and ρ2 (t, s) < 1 due to (H3). Therefore, pX(t), X (t),X (s), X (s) (x, 0, wi , 0) i |J |J = p X (s),X (t),...,X (t),X (t),...,X (t) (0|X(t) = x, Xi (s) = wi , Xi (t) = 0) 1 i−1 i+1 k |J × pX(t),X (s) (x, wi |Xi (t) = 0)pX (t) (0) i i ≤ C3 exp (2.3.36) 2 wi 1 x2 2ρ(t, s)xwi − + 2 − 2 (t, s)) σ 2 (t) 2(1 − ρ σ2 (t, s) σ1 (t)σ2 (t, s) 1 × (detCov(X(t), X|J (t), Xi (s), X|J (s)))−1/2 for some positive constant C3 . Also, by similar arguments in the proof of Theorem 2.3.7, there exist positive constants C4 , C5 , C6 , C7 , N2 and N3 such that detCov( X|J (t), Xi (s), X|J (s)) ≥ C4 s − t 2(m+1) , (2.3.37) 2 C5 s − t 2 ≤ σ2 (t, s) ≤ C6 s − t 2 , (2.3.38) 48 and E{|det 2 X|J (t)||det 2 X|J (s)||X(t) = x, X|J (t) = 0, Xi (s) = wi , X|J (s) = 0} = E{|det 2 X|J (t)||det 2 X|J (s)||X(t) = x, X|J (t) = 0, Xi (t), et,s = wi / s − t + o(1), X|J (s) = 0} ≤ C7 (xN2 + (wi / s − t )N3 + 1). (2.3.39) Combining (2.3.35) with (2.3.36), (2.3.37) and (2.3.39), and making change of variable w = wi / s − t , we obtain that for some positive constant C8 , A(t, s) dtds Di dx Di × exp − − dwi (xN2 + (wi / s − t )N3 + 1) w2 1 2ρ(t, s)xwi + 2 i − 2 2(1 − ρ2 (t, s)) σ1 (t) σ2 (t, s) σ1 (t)σ2 (t, s) −m Di × exp 0 u x2 dtds s − t = C8 ∞ ∞ dtds s − t −m−1 ≤ C8 ∞ ∞ dx u (2.3.40) dw(xN2 + wN3 + 1) 0 1 x2 w2 2ρ(t, s)xw + 2 − 2 (t, s)) σ 2 (t) 2(1 − ρ σ2 (t, s) σ1 (t)σ2 (t, s) 1 , where σ2 (t, s) = σ2 (t, s)/ s − t is bounded by (2.3.38). Applying Taylor’s formula (2.3.22) to Xi (s) and noting that E{X(t) ρ(t, s) = = Xi (t), et,s } = −αi (t, s), we obtain 1 1 E{X(t)Xi (s)} − E{X(t)Xi (t)}E{Xi (s)Xi (t)} σ1 (t)σ2 (t, s) λii s−t σ1 (t)σ2 (t, s) i − αi (t, s) + s − t η E{X(t)Yt,s } − s−t η i E{X(t)Xi (t)}E{Xi (t)Yt,s } . λii 49 (2.3.41) By (2.3.38) and the fact that (t, s) ∈ Di implies αi (t, s) ≥ βi > 0 for i = m + 1, . . . , k, we conclude that ρ(t, s) ≤ −δ0 for some δ0 > 0 uniformly for t ∈ J, s ∈ J with s − t sufficiently small. Then applying Lemma 2.3.10 to (2.3.40) yields that Di A(t, s) dtds is super-exponentially small. It is similar to prove that Di A(t, s) dtds is super-exponentially small for i = k + 1, . . . , k + k − m. In fact, in such case, ∞ dtds Di Di A(t, s) dtds is bounded by ∞ dx u 0 dzi pX(t), X (t),X (t), X (s) (x, 0, zi , 0) i |J |J × E{|det 2 X|J (t)||det 2 X|J (s)||X(t) = x, X|J (t) = 0, Xi (t) = zi , X|J (s) = 0}. We can follow the proof in the previous stage by exchanging the positions of Xi (s) and Xi (t) and replacing wi with zi . The details are omitted since the procedure is very similar. If k = 0, then m = 0 and σ(J ) = {1, . . . , k }. Since J becomes a singleton, we may let J = {t0 }. By the Kac-Rice metatheorem, E{Mu (J)Mu (J )} is bounded by ∞ ds J ∞ dx u ∞ dy u 0 ∞ dz1 · · · 0 dzk pt0 ,s (x, y, z1 , . . . , zk , 0) × E{|det 2 X|J (s)||X(t0 ) = x, X(s) = y, X1 (t0 ) = z1 , . . . , Xk (t0 ) = zk , X|J (s) = 0} := J A(t0 , s) ds, where pt0 ,s (x, y, z1 , . . . , zk , 0) is the density of (X(t0 ), X(s), X1 (t0 ), . . . , Xk (t0 ), X|J (s)) evaluated at (x, y, z1 , . . . , zk , 0). Similarly, J could be covered by ∪k Di with Di = {s ∈ i=1 50 J : αi (t0 , s) ≤ −βi } for some positive constants βi , 1 ≤ i ≤ k . 
On the other hand, ∞ Di A(t0 , s) ds ≤ ds Di ∞ dx u 0 dzi pX(t ),X (t ), X (s) (x, zi , 0) 0 i 0 |J × E{|det 2 X|J (s)||X(t0 ) = x, Xi (t0 ) = zi , X|J (s) = 0}. E E By similar discussions, we obtain that E{Mu (J)Mu (J )} is super-exponentially small and hence complete the proof. 2.4 Further Remarks and Examples Remark 2.4.1 (The case when T contains the origin). We now show that Theorem 2.3.7 and Theorem 2.3.8 still hold when T contains the origin. In such case, (H3) is actually not satisfied since X(0) = 0 is degenerate. However, we may construct a small open cube 2 T0 containing 0 such that supt∈T0 σt is sufficiently small, then according to the Borell-TIS inequality, P{supt∈T0 X(t) ≥ u} is super-exponentially small. Let T = T \T0 , then P sup X(t) ≥ u ≤ P sup X(t) ≥ u ≤ P sup X(t) ≥ u + P sup X(t) ≥ u . (2.4.1) t∈T t∈T t∈T0 t∈T To estimate P{supt∈T X(t) ≥ u}, similarly to the rectangle T , we decompose T into several faces by lower dimensions such that T = ∪N ∂k T = ∪N ∪L∈∂ T L. Then we can get the k=0 k=0 k bounds similar to (2.3.3) with T replaced with T and J replaced with L. Following the proof of Theorem 2.3.7 yields N P sup X(t) ≥ u = t∈T 2 −αu2 −u2 /(2σT ) E{Mu (L)} + o(e k=0 L∈∂ T k 51 ). 2 Due to the fact that supt∈T0 σt is sufficiently small, E{Mu (L)} are super-exponentially small ¯ ¯ for all faces L such that L ⊂ ∂k T0 with 0 ≤ k ≤ N − 1 (note that T0 is a closed rectangle). The same reason yields that for 1 ≤ k ≤ N , L ∈ ∂k T , J ∈ ∂k T such that L ⊂ J, the difference between E{Mu (L)} and E{Mu (J)} is super-exponentially small. Hence we obtain P sup X(t) ≥ u = t∈T Ψ {t}∈∂0 T × u + σt N 1 k=1 J∈∂k T (2π)(k+1)/2 |ΛJ |1/2 2 2 2 |ΛJ − ΛJ (t)| u −u2 /(2θ2 ) t dt + o(e−αu −u /(2σT ) ). Hk−1 e k θt θt J (2.4.2) 2 2 Here, by convention, if θt = 0, we regard e−u /(2θt ) as 0. Combining (2.4.1) with (2.4.2), we conclude that Theorem 2.3.7 still holds when T contains the origin. The arguments for Theorem 2.3.8 are similar. Example 2.4.2 (Refinements of Theorem 2.3.7). Let Gaussian field X be as in Theorem 2 2 2.3.7. Suppose that ν(t0 ) = σT for some t0 ∈ J ∈ ∂k T (k ≥ 0) and ν(t) < σT for all t = t0 . 2 2 (i). If k = 0, then, due to (2.3.14), supt∈T \{t } θt ≤ σT −ε0 for some ε0 > 0. This implies 0 that E{Mu (J )} are super-exponentially small for all faces J other than {t0 }. Therefore, P sup X(t) ≥ u = Ψ t∈T u σT + o(e 2 −u2 /(2σT )+αu2 ), as u → ∞. (2.4.3) 2 For example, let Y be a stationary Gaussian field with covariance ρ(t) = e− t and define X(t) = Y (t) − Y (0), then X is a smooth Gaussian field with stationary increments satisfying conditions (H1)-(H3). Let T = [0, 1]N , then we can apply (2.4.3) to approximate the excursion probability of X with t0 = (1, . . . , 1). 52 (ii). If k ≥ 1, then similarly, E{Mu (J )} are super-exponentially small for all faces J = J. It follows from Theorem 2.3.7 that P sup X(t) ≥ u = t∈T uk−1 |ΛJ − ΛJ (t)| −u2 /(2θ2 ) t dt(1 + o(1)). e 2k−1 (2π)(k+1)/2 |ΛJ |1/2 J θt 2 Let τ (t) = θt , then ∀i ∈ σ(J), τi (t0 ) = 0, since t0 is a local maximum point of τ restricted on J. Assume additionally that the Hessian matrix ΘJ (t0 ) := (τij (t0 ))i,j∈σ(J) (2.4.4) 2 is negative definite, then the Hessian matrix of 1/(2θt ) at t0 restricted on J, ΘJ (t0 ) = − 1 2τ 2 (t 0) (τij (t0 ))i,j∈σ(J) = − 1 Θ (t ), 4 J 0 2σT 2k−1 and h(t) = 1/(2θ 2 ), applying Lemma is positive definite. 
Let g(t) = |ΛJ − ΛJ (t)|/θt t 2.5.3 with T replaced with J gives us that as u → ∞, P sup X(t) ≥ u = t∈T uk−1 |ΛJ − ΛJ (t0 )| (2π)k/2 2k−1 uk |Θ (t )|1/2 (2π)(k+1)/2 |ΛJ |1/2 θt J 0 e 2 −u2 /(2θt ) 0 (1 + o(1)) 0 2k/2 |Λ (2.4.5) u J − ΛJ (t0 )| = Ψ (1 + o(1)). 1/2 | − Θ (t )|1/2 σT |ΛJ | J 0 Example 2.4.2 (Continued: the cosine field). We consider the cosine random field on R2 : 2 1 Z(t) = √ (ξi cos ti + ξi sin ti ), 2 i=1 53 t = (t1 , t2 ) ∈ R2 , where ξ1 , ξ1 , ξ2 , ξ2 are independent, standard Gaussian variables. Z is a well-known centered, unit-variance and smooth stationary Gaussian field [cf. Adler and Taylor (2007, p.382)]. Note that Z is periodic and Z(t) = −Z11 (t) − Z22 (t). To avoid such degeneracy, let X(t) = ξ0 + Z(t) − Z(0), where t ∈ T ⊂ [0, 2π)2 and ξ0 is a standard Gaussian variable independent of Z. Then X is a centered and smooth Gaussian field with stationary increments. The variance and covariance of X are given respectively by 2 ν(t) = σt = 3 − cos t1 − cos t2 , 1 C(t, s) = 2 + 2 (2.4.6) 2 [cos(ti − si ) − cos ti − cos si ]. i=1 Therefore, X satisfies conditions (H1), (H2) and (H3) [though X12 (t) ≡ 0, it can be shown that this does not affect the validity of Theorems 2.3.7 and 2.3.8]. Taking the partial derivatives of C gives us that 1 E{X(t) X(t)} = (sin t1 , sin t2 )T , 2 1 Λ = Cov( X(t)) = I2 , 2 1 Λ − Λ(t) = −E{X(t) 2 X(t)} = [I2 − diag(cos t1 , cos t2 )], 2 (2.4.7) where I2 is the 2 × 2 unit matrix and diag denotes the diagonal matrix. (i). Let T = [0, π/2]2 . Then by (2.4.6), ν attains its maximum 3 only at the cor- ner (π/2, π/2), where both partial derivatives of ν are positive. Applying the result (i) in √ 2 Example 2.4.2, we obtain P{supt∈T X(t) ≥ u} = Ψ(u/ 3)(1 + o(e−αu )). (ii). Let T = [0, 3π/2] × [0, π/2]. Then ν attains its maximum 4 only at the boundary point t∗ = (π, π/2), where ν2 (t∗ ) > 0 so that the condition (2.3.14) is satisfied. In this case, 1 t∗ ∈ J = (0, 3π/2) × {π/2}. By (2.4.7), we obtain ΛJ = 2 and ΛJ − ΛJ (t∗ ) = 1 (1 − cos t∗ ) = 1 2 54 1. On the other hand, for t ∈ J, by Lemma 2.5.1 and (2.4.7), 2 τ (t) = θt = Var(X(t)|X1 (t)) = 3 − cos t1 − cos t2 − 1 2 sin t1 , 2 (2.4.8) therefore ΘJ (t∗ ) = τ11 (t∗ ) = −2. Plugging these into (2.4.5) with k = 1 gives us that √ P{supt∈T X(t) ≥ u} = 2Ψ(u/2)(1 + o(1)). (iii). Let T = [0, 3π/2]2 . Then ν attains its maximum 5 only at the interior point t∗ = (π, π). In this case, t∗ ∈ J = (0, 3π/2)2 . By (2.4.7), we obtain ΛJ = 1 I2 and 2 ΛJ − ΛJ (t∗ ) = I2 . On the other hand, for t ∈ J, by Lemma 2.5.1 and (2.4.7), 2 τ (t) = θt = Var(X(t)|X1 (t), X2 (t)) = 3 − cos t1 − cos t2 − 1 2 1 sin t1 − sin2 t2 , 2 2 (2.4.9) therefore ΘJ (t∗ ) = (τij (t∗ ))i,j=1,2 = −2I2 . Plugging these into (2.4.5) with k = 2 gives us √ that P{supt∈T X(t) ≥ u} = 2Ψ(u/ 5)(1 + o(1)). Example 2.4.3 (Refinements of Theorem 2.3.8). Let X be a Gaussian field as in Theorem 2 2.3.8. Suppose t0 ∈ J ∈ ∂k T is the only point in T such that ν(t0 ) = σT . Assume σ(J) = {1, . . . , k}, all elements in ε(J) are 1, νk (t0 ) = 0 for all k + 1 ≤ k ≤ N . Then by Theorem 2.3.8, N P sup X(t) ≥ u = t∈T 2 −αu2 −u2 /(2σT ) E E{Mu (J)} + E E{Mu (J )} + o(e ). 
¯ ¯ k =k+1 J ∈∂ T,J ∩J=∅ k (2.4.10) 55 E Lemma 2.3.3 indicates E{Mu (J)} = (−1)k E{ E E{Mu (J)} = (−1)k k i −αx2 )), i=0 (−1) µi (J)}(1 + o(e therefore p X (t) (0)dt E{det 2 X|J (t)1 {(X |J N −k } k+1 (t),...,XN (t))∈R+ J 2 × 1{X(t)≥u} | X|J (t) = 0}(1 + o(e−αx )) 2 2 (−1)k e−x /(2θt ) ∞ = dx u dt J (2π)(k+1)/2 |Λ 1/2 θ t J| E{det 2 X|J (t)1 {(X |X(t) = x, ∞ N −k } k+1 (t),...,XN (t))∈R+ 2 X|J (t) = 0}(1 + o(e−αu )) 2 AJ (x)dx(1 + o(e−αu )), := u (2.4.11) and similarly, E E{Mu (J 2 2 (−1)k e−x /(2θt ) ∞ dt dx )} = J u (2π)(k +1)/2 |Λ ×1 {(X (t),...,X J1 J N −k 1/2 θ t J | E{det 2 X|J (t) |X(t) (t))∈RN −k } + = x, 2 X|J (t) = 0}(1 + o(e−αu )). (i). First we consider the case k ≥ 1. We shall follow the notations τ (t), ΘJ (t) and 2 ΘJ (t) in Example 2.4.2. Let h(t) = 1/(2θt ) and gx (t) = (−1)k (2π)(k+1)/2 |ΛJ |1/2 θt E{det 2 X|J (t)1 {(X N −k } k+1 (t),...,XN (t))∈R+ |X(t) = x, X|J (t) = 0}. Note that supt∈T |gx (t)| = o(xN1 ) for some N1 > 0 as x → ∞, which implies that the growth 2 of gx (t) can be dominated by the exponential decay e−x h(t) , hence both Lemma 2.5.3 and 2.5.4 are still applicable. Applying Lemma 2.5.3 with T replaced by J and u replaced by x2 , 56 we obtain that as x → ∞, AJ (x) = (2π)k/2 xk (detΘJ (t0 ))1/2 2 −x2 /(2σT ) gx (t0 )e (1 + o(1)). (2.4.12) On the other hand, it follows from (2.2.17) that gx (t) = 1 (2π)(k+1)/2 |Λ × 1/2 θ t J| ··· RN −k + dyk+1 · · · dyN |ΛJ − ΛJ (t)| x Hk + γt Ck+1 (t)yk+1 + · · · + γt CN (t)yN k γt γt × pX k+1 (t),...,XN (t) Note that X(t0 ) and gx (t0 ) = (yk+1 , . . . , yN |X(t) = x, X|J (t) = 0). X(t0 ) are independent, and Cj (t0 ) = 0 for all 1 ≤ j ≤ N . Therefore, |ΛJ − ΛJ (t0 )| H (k+1)/2 |Λ |1/2 σ k+1 k (2π) J T x σT × P{(Xk+1 (t0 ), . . . , XN (t0 )) ∈ RN −k | X|J (t0 ) = 0}. + Plugging this and (2.4.12) into (2.4.11), we obtain E E{Mu (J)} = u 2k/2 |ΛJ − ΛJ (t0 )| Ψ σT |ΛJ |1/2 | − ΘJ (t0 )|1/2 (2.4.13) × P{(Xk+1 (t0 ), . . . , XN (t0 )) ∈ RN −k | X|J (t0 ) = 0}(1 + o(1)). + ¯ ¯ For J ∈ ∂k T with J ∩ J = ∅, similarly, applying Lemma 2.5.4 with T replaced by J , we 57 obtain that E E{Mu (J )} = 2k /2 |ΛJ − ΛJ (t0 )| u Ψ P{ZJ (t0 ) ∈ Rk −k } − 1/2 | − Θ (t )|1/2 σT |ΛJ | J 0 × P{(XJ (t0 ), . . . , XJ 1 N −k (t0 )) ∈ RN −k | X|J (t0 ) = 0}(1 + o(1)), + (2.4.14) where ZJ (t0 ) is a centered (k − k)-dimensional Gaussian vector with covariance matrix −(τij )i,j∈σ(J )\σ(J) . Plugging (2.4.13) and (2.4.14) into (2.4.10), we obtain the asymptotic result. (ii). k = 0, say J = {t0 }. Note that X(t0 ) and E E{Mu (J)} = Ψ X(t0 ) are independent, therefore u P{ X(t0 ) ∈ RN }. + σT (2.4.15) E ¯ ¯ For J ∈ ∂k T with J ∩ J = ∅, then E{Mu (J )} is given by (2.4.14) with k = 0. Plug- ging (2.4.15) and (2.4.14) into (2.4.10), we obtain the asymptotic formula for the excursion probability. Example 2.4.3 (Continued: the cosine field). We consider the Gaussian field X defined in the continued part of Example 2.4.2. (i). Let T = [0, π]2 . Then ν attains its maximum 5 only at the corner t∗ = (π, π), where ν(t∗ ) = 0 so that the condition (2.3.14) is not satisfied. Instead, we will use the result (ii) in Example 2.4.3 with J = {t∗ } and k = 0. Let J = (0, π) × {π}, J = {π} × (0, π). Combining the results in the continued part of Example 2.4.2 with (2.4.15) and (2.4.14), and 58 noting that Λ = 1 I2 implies X1 (t) and X2 (t) are independent for all t, we obtain 2 √ 1 E E{Mu (J)} = Ψ(u/ 5), 4 √ 1 E E{Mu (∂2 T )} = Ψ(u/ 5)(1 + o(1)), 2 √ √ 2 E E E{Mu (J )} = E{Mu (J )} = Ψ(u/ 5)(1 + o(1)). 
4 √ √ Summing these up, we have P{supt∈T X(t) ≥ u} = [(3 + 2 2)/4]Ψ(u/ 5)(1 + o(1)). (ii). Let T = [0, 3π/2] × [0, π]. Then ν attains its maximum 5 only at the boundary point t∗ = (π, π), where ν2 (t∗ ) = 0. Applying the result (i) in Example 2.4.3 with J = (0, 3π/2) × {π} and k = 1, we obtain √ E E{Mu (J)} = √ 2 Ψ(u/ 5), 2 which implies P{supt∈T X(t) ≥ u} = [(2 + √ E E{Mu (∂2 T )} = Ψ(u/ 5)(1 + o(1)), √ √ 2)/2]Ψ(u/ 5)(1 + o(1)). Remark 2.4.4 Note that we only provide the first-order approximation for the examples in this section. However, as shown in the theory of the approximations of integrals (see e.g. Wong (2001)), the integrals in (2.3.15) and (2.3.26) can be expanded with more terms once the covariance function of the Gaussian field is smooth enough. Hence for the examples above, higher-order approximation is available. Since the procedure is similar and the computation is tedious, we omit such arguments here. 2.5 Some Auxiliary Facts The following lemma is well-known and is quoted here for reader’s convenience. Lemma 2.5.1 Let Y and Z be two Gaussian random vectors of dimension p and q, respectively. Then Y |Z = z is still a p-dimensional Gaussian random vector having the following 59 mean and covariance: E{Y |Z = z} = EY + E{(Y − EY )(Z − EZ)T }[Cov(Z)]−1 (z − EZ), Cov(Y |Z = z) = Cov(Y ) − E{(Y − EY )(Z − EZ)T }[Cov(Z)]−1 E{(Z − EZ)(Y − EY )T }. In particular, if p = q = 1 and EY = EZ = 0, then E{Y |Z = z} = zE(Y Z) , Var(Z) Var(Y |Z = z) = Var(Y ) − (E(Y Z))2 . Var(Z) Using similar arguments in the proof of Lemma 2.3.9, we can obtain the following result. Lemma 2.5.2 Let {A(t) = (aij (t))1≤i,j≤N : t ∈ T } be a family of positive definite matrices such that all elements aij (·) are continuous. Denote by x and x the infimum and supremum of the eigenvalues of A(t) over t ∈ T respectively, then 0 < x ≤ x < ∞. The following two formulas state the results on the Laplace method approximation. Lemma 2.5.3 can be found in many books on the approximations of integrals, here we refer to Wong (2001). Lemma 2.5.4 can be derived by following similar arguments in the proof of Laplace method for the case of boundary point in Wong (2001). Lemma 2.5.3 (Laplace method for interior point). Let t0 be an interior point of T . Suppose the following conditions hold: (i) g(t) ∈ C(T ) and g(t0 ) = 0; (ii) h(t) ∈ C 2 (T ) and attains its unique minimum at t0 ; and (iii) g(t)e−uh(t) dt = T 2 h(t ) 0 is positive definite. Then as u → ∞, (2π)N/2 uN/2 (det 2 h(t 0 ))1/2 g(t0 )e−uh(t0 ) (1 + o(1)). Lemma 2.5.4 (Laplace method for boundary point). Let t0 ∈ J ∈ ∂k T with 0 ≤ k ≤ N −1. Suppose that conditions (i), (ii) and (iii) in Lemma 2.5.3 hold, and additionally 60 h(t0 ) = 0. Then as u → ∞, g(t)e−uh(t) dt T = (2π)N/2 P{ZJ (t0 ) ∈ (−E(J))} uN/2 (det 2 h(t 0 ))1/2 g(t0 )e−uh(t0 ) (1 + o(1)), where ZJ (t0 ) is a centered (N − k)-dimensional Gaussian vector with covariance matrix (hij (t0 ))J1 ≤i,j≤J N −k , −E(J) = {x ∈ RN : −x ∈ E(J)}, and the definitions of J1 , . . . , JN −k and E(J) are in (2.2.4). 61 Chapter 3 Smooth Gaussian Random Fields with Non-constant Variances 3.1 Gaussian Fields on Rectangles Let {X(t) : t ∈ RN } be a smooth centered Gaussian random field with non-constant variance and let T = N i=1 [ai , bi ] be a closed rectangle in RN . In this Chapter, we study the excursion probability of X over T . 2 Let ν(t) := σt = Var(X(t)) and assume supt∈T ν(t) = 1. A matrix is called negative semidefinite if all of its eigenvalues are nonpositive. 
In addition to conditions (H1) and (H3) in the previous chapter, we will make use of the following condition. (H4). ∀t ∈ J ∈ ∂k T such that ν(t) = 1 and 0 ≤ k ≤ N − 2, (E{X(t)Xij (t)})i,j∈ζ(t) is negative definite, where ζ(t) = {n : νn (t) = 0, 1 ≤ n ≤ N }. Proposition 3.1.1 Let X(·) ∈ C 2 (RN ) a.s. If (νij (t))i,j∈ζ(t) is negative semidefinite for each t ∈ J ∈ ∂k T such that ν(t) = 1 and 0 ≤ k ≤ N − 2, then (H4) holds. Proof Since 1 νij (t) = E{X(t)Xij (t)} + E{Xi (t)Xj (t)}, 2 1 (E{X(t)Xij (t)})i,j∈ζ(t) = (νij (t))i,j∈ζ(t) − (E{Xi (t)Xj (t)})i,j∈ζ(t) . 2 62 (3.1.1) But (νij (t))i,j∈ζ(t) is negative semidefinite and (E{Xi (t)Xj (t)})i,j∈ζ(t) is positive definite, it follows that (E{X(t)Xij (t)})i,j∈ζ(t) is negative definite and hence (H4) holds. Remark 3.1.2 In (H4), ν(t) = 1 implies νn (t) = 0 for all n ∈ σ(J) and thus ζ(t) ⊃ σ(J). Additionally, we only consider t ∈ J ∈ ∂k T with 0 ≤ k ≤ N − 2, this is because for N − 1 ≤ k ≤ N , (E{X(t)Xij (t)})i,j∈ζ(t) is automatically negative definite due to ν(t) = 1, as shown below. (i). If k = N , then t becomes a maximum point of ν in the interior of T , and ζ(t) = σ(J) = {1, · · · , N }, hence (νij (t))i,j∈ζ(t) is always negative semidefinite. By (3.1.1), we see that (E{X(t)Xij (t)})i,j∈ζ(t) is negative definite. (ii). If k = N − 1, we distinguish two cases. If ζ(t) = σ(J), then (E{X(t)Xij (t)})i,j∈ζ(t) is negative by the same arguments in the previous step. If ζ(t) = {1, · · · , N }, it follows from Taylor’s formula that ν(s) = ν(t) + ν(t)(s − t)T + (s − t) 2 ν(t)(s − t)T + o( s − t 2 ) (3.1.2) = ν(t) + (s − t) 2 ν(t)(s − t)T + o( s − t 2 ), for all s ∈ T such that s − t is small enough. Since t ∈ J ∈ ∂N −1 T , {± s−t : s ∈ s−t T } contains all the directions e ∈ SN −1 . Note that ν(t) = 1, any positive eigenvalue and hence (νij (t))i,j∈ζ(t) = 2 ν(t) 2 ν(t) does not have is negative semidefinite, then (E{X(t)Xij (t)})i,j∈ζ(t) is negative by (3.1.1). If D is a subset (not necessary open) of J ∈ ∂k T , we define E Mu (D) = #{t ∈ D\∂D : X(t) ≥ u, X|J (t) = 0, index( 2 X|J (t)) = k, ε∗ Xj (t) > 0 for all j ∈ σ(J)}. / j 63 For t ∈ J ∈ ∂k T , let Λ(t) = (E{Xi (t)Xj (t)})1≤i,j≤N , ΛJ (t) = (E{Xi (t)Xj (t)})i,j∈σ(J) , Σ(t) = (E{X(t)Xij (t)})1≤i,j≤N , ΣJ (t) = (E{X(t)Xij (t)})i,j∈σ(J) , (3.1.3) {1, · · · , N }\σ(J) = {J1 , · · · , JN −k }, E(J) = {(tJ1 , · · · , tJ N −k ) ∈ RN −k : tj ε∗ ≥ 0, j = J1 , · · · , JN −k }. j M 1,j+1 Let Cj (t) be the (1, j + 1) entry of (Cov(X(t), X(t)))−1 , i.e. Cj (t) = detCov(X(t), X(t)) , where M1,j+1 is the cofactor of the (1, j +1) element, E{X(t)Xj (t)}, in the covariance matrix Cov(X(t), X(t)). Note that the notations Λ(t) and ΛJ (t) are different from those defined in Chapter 2. The result Lemma 3.1.3 below follows immediately from similar argumentss in the proof of Proposition 3.1.1. Lemma 3.1.3 If t0 ∈ J ∈ ∂k T satisfies ν(t0 ) = 1, where k ≥ 1, then E{X(t0 )Xi (t0 )} = 0 for all i ∈ σ(J) and ΣJ (t0 ) is negative definite. Corollary 3.1.4 Let {X(t) : t ∈ RN } be a Gaussian random field satisfying (H1), (H3) and (H4), then there exists some constant α > 0 such that N 2 2 E E E{Mu (J)(Mu (J) − 1)} = o(e−αu −u /2 ). k=0 J∈∂k T Proof 2 Due to Lemma 2.3.4, it suffices to show that βJ < 1 for each J ∈ ∂k T , which is ¯ equivalent to Var(X(t)| X|J (t), 2 X|J (t)e) < 1 for all t ∈ J = J ∪ ∂J and e ∈ Sk−1 by continuity. 64 (i). Suppose Var(X(t0 )| X|J (t0 ), 2 X|J (t0 )e) = 1 for some t0 ∈ J, then 1 = Var(X(t0 )| X|J (t0 ), 2 X|J (t0 )e) ≤ Var(X(t0 )| 2 X|J (t0 )e) ≤ Var(X(t0 )) ≤ 1. 
Note that Var(X(t0 )| 2 X|J (t0 )e) = Var(X(t0 )) ⇔ E{X(t0 )( 2 X|J (t0 )e)} = 0 ⇔ ΣJ (t0 )e = 0. Since t0 is a maximum point, by Lemma 3.1.3, ΣJ (t0 ) is negative definite and hence ΣJ (t0 )e = 0 for all e ∈ Sk−1 , which is a contradiction. (ii). Suppose Var(X(t1 )| X|J (t1 ), 2 X|J (t1 )e) = 1 for some t1 ∈ ∂J. It then follows from similar arguments in step (i) that Var(X(t1 )| X|J (t1 )) = 1 and hence νi (t1 ) = 0 for all i ∈ σ(J), which implies (E{X(t1 )Xij (t1 )})i,j∈σ(J) is negative definite by (H4). Thus there will be a contradiction as in step (i), completing the proof. Lemma 3.1.5 Let {X(t) : t ∈ RN } be a Gaussian random field satisfying (H1), (H3) and (H4). Then there exists some constant α > 0 such that as u → ∞, N 2 2 E E{Mu (J)} = E{ϕ(Au )} + o(e−αu −u /2 ). (3.1.4) k=0 J∈∂k T Proof Due to (2.2.1), we only need to show that for each k ∈ {0, 1, . . . , N } and J ∈ ∂k T , k E E{Mu (J)} = (−1)k 2 2 (−1)i E{µi (J)} + o(e−αu −u /2 ). i=0 65 (3.1.5) Without loss of generality, let σ(J) = {1, · · · , k} and assume all elements in ε(J) are 1. Let ¯ O(J) = {t ∈ J : ν(t) = 1} ∪ {t ∈ ∂J : ν(t) = 1, νi (t) = 0, ∀1 ≤ i ≤ k}. ¯ Our aim is to find an open neighborhood of O(J) restricted on J, say Uδ (J) = {t ∈ J : ¯ d(t, O(J)) < δ}, such that as u → ∞, 2 2 E E E{Mu (J)} = E{Mu (Uδ (J))} + o(e−αu −u /2 ) k = 2 2 (−1)i E{µi (Uδ (J))} + o(e−αu −u /2 ) (−1)k i=0 k = (−1)k (3.1.6) 2 2 (−1)i E{µi (J)} + o(e−αu −u /2 ). i=0 For n = k + 1, . . . , N , let ¯ On (J) = {t ∈ J : ν(t) = 1, νn (t) = 0} ∪ {t ∈ ∂J : ν(t) = 1, νn (t) = 0, νi (t) = 0, ∀1 ≤ i ≤ k}. ¯ ¯ Firstly, we consider the subset U 1 (J) = ∩N n=k+1 On (J) and define its open neighborhood in 1 ¯ J, Uδ (J) = {t ∈ J : d(t, U 1 (J)) < δ1 }, where δ1 is a small positive number to be specified. 1 E 1 Then, by the Kac-Rice metatheorem, E{Mu (Uδ (J))} becomes 1 ∞ (−1)k U 1 (J) δ1 (2π)k/2 (detΛJ (t))1/2 |X(t) = x, × pX(t),X dt u RN −k + E{det 2 X|J (t)1{ 2 X (t)∈D } | k |J X|J (t) = 0, Xk+1 (t) = yk+1 , . . . , XN (t) = yN } k+1 (t),...,XN (t) (x, yk+1 , . . . , yN | X|J (t) = 0) dxdyk+1 · · · dyN , where Dk is the collection of all k × k matrices with k negative eigenvalues. 66 (3.1.7) Due to (H4) and continuity, we can choose δ1 small enough such that ΣJ (t) are neg1 ative definite for all t ∈ Uδ (J). Write 1 2X |J (t) = Q−1 Qt,J 2 X|J (t)Qt,J Q−1 , where t,J t,J Qt,J (−ΣJ (t))Qt,J = Ik . Let al (t) = E{Xl (t)(Qt,J 2 X|J (t)Qt,J )ij } for l = 1, · · · , N , then ij E{(Qt,J 2 X|J (t)Qt,J )ij |X(t) = x, X|J (t) = 0, Xk+1 (t) = yk+1 , · · · , XN (t) = yN } = (E{X(t)(Qt,J 2 X|J (t)Qt,J )ij }, a1 (t), · · · , aN (t))(Cov(X(t), X(t)))−1 ij ij · (x, 0, · · · , 0, yk+1 , · · · , yN )T = (−δij , a1 (t), · · · , aN (t))(Cov(X(t), X(t)))−1 (x, 0, · · · , 0, yk+1 , · · · , yN )T . ij ij Make change of variables W (t) = (Wij (t))1≤i,j≤k , where x − 2 δij + γt Wij (t) = (Qt,J 2 X|J (t)Qt,J )ij − N al (t)Cl (t)x , ij l=1 i.e., Qt,J 2X x |J (t)Qt,J = W (t) − 2 Ik + x γt N al (t)Cl (t) ij l=1 1≤i,j≤k = W (t) − xB(t), where B(t) = 12 Ik − ( γt N l l=1 aij (t)Cl (t))1≤i,j≤k . ((Wij (t))1≤i≤j≤k |X(t) = x, Denote the density of X|J (t) = 0, Xk+1 (t) = yk+1 , · · · , XN (t) = yN ) by gt (w), w = (wij : 1 ≤ i, j ≤ k) ∈ Rk(k+1)/2 , then gt (w) is independent of x. 
Let (wij ) be 67 the abbreviation of the k × k symmetric matrix (wij )1≤i≤j≤k , then E{det 2 X|J (t)1D ( 2 X|J (t))|X(t) = x, k X|J (t) = 0, Xk+1 (t) = yk+1 , · · · , XN (t) = yN } = det(−ΣJ (t))E{det(Qt,J 2 X|J (t)Qt,J )1{Q 2 X (t)Q ∈D } | t,J t,J k |J |X(t) = x, = det(−ΣJ (t)) X|J (t) = 0, Xk+1 (t) = yk+1 , · · · , XN (t) = yN } (wij ):(wij )−xB(t)∈Dk det((wij ) − xB(t))gt (w) dw. ¯ Since νn (t) = 0 for all t ∈ U 1 (J) and n = k +1, · · · , N , we can find δ1 small enough such that 2 1 Cl (t) are close to 0 for all l = 1, · · · , N and t ∈ Uδ (J). Together with the fact {γt : t ∈ J} 1 is bounded, there exists a constant c1 > 0 such that (wij ) − xB(t) ∈ Dk , x ∀ (wij ) < . c1 It then follows from similar arguments in Lemma 2.3.2 and Lemma 2.3.3 that k E 1 E{Mu (Uδ (J))} = (−1)k 1 2 2 1 (−1)i E{µi (Uδ (J))} + o(e−αu −u /2 ). 1 i=0 1 ¯ ¯ Next, we consider the subset U 2 (J) = (∩N −1 On (J)) \ Uδ (J), and define its neighborn=k+1 1 2 1 ¯ hood Uδ (J) = {t ∈ J : d(t, U 2 (J)) < δ2 } \ Uδ (J), where δ2 is a small positive number to 2 1 68 E 2 be specified. Then we can write E{Mu (Uδ (J))} as 2 U 2 (J) δ2 p X (t) (0) dt E{|det 2 X|J (t)|1{ 2 X (t)∈D } 1{X(t)≥u} |J k |J × 1{X k+1 (t)>0,...,XN −1 (t)>0} − U 2 (J) δ2 p X (t) (0) dt E{|det |J 2X | X|J (t) = 0} (3.1.8) |J (t)|1{ 2 X (t)∈D } 1{X(t)≥u} k |J × 1{X k+1 (t)>0,...,XN −1 (t)>0,XN (t)≤0} | X|J (t) = 0}. The second term in (3.1.8) is bounded by ∞ U 2 (J) δ2 0 dx dt u −∞ E{|det 2 X|J (t)||X(t) = x, XN (t) = yN , X|J (t) = 0} (3.1.9) × pX(t),X (t), X (t) (x, yN , 0, · · · , 0)dyN . N |J Note that the conditional expectation in (3.1.9) can be bounded by c2 (|x|N1 + |yN |N2 ) when u is large, for some positive constants c2 , N1 and N2 ; and pX(t),X (t), X (t) (x, yN , 0, · · · , 0) N |J = p X (t) (0, · · · , 0|X(t) = x, XN (t) = yN )pX(t),X (t) (x, yN ) N |J ≤ c3 pX(t),X (t (x, yN ) N ¯ for some positive constant c3 . On the other hand, U 2 (J) is a compact set and for all ¯ t ∈ U 2 (J), νN (t) = 0 which implies νN (t) > 0 due to ν(t) = 1, thus we can choose δ2 2 sufficiently small such that E{X(t)XN (t)} > δ0 for all t ∈ Uδ (J) and some δ0 > 0. Hence 2 69 (3.1.9) is super-exponentially small by Lemma 2.3.10. Similar arguments give k 2 (−1)i E{µi (Uδ (J))} (−1)k 2 i=0 = (−1)k U 2 (J) δ2 p X (t) (0) dt E{det 2 X|J (t)1{X(t)≥u} |J 2 2 X|J (t) = 0} + o(e−αu −u /2 ). × 1{X | k+1 (t)>0,...,XN −1 (t)>0} E 1 Combining this with (3.1.8), and following the same arguments to simplify E{Mu (Uδ (J))}, 1 we obtain k E 2 E{Mu (Uδ (J))} 2 = (−1)k 2 2 2 (−1)i E{µi (Uδ (J))} + o(e−αu −u /2 ). 2 i=0 Continue this procedure at most finite many times, and take the union of those disjoint 1 2 ¯ neighborhoods (Uδ (J), Uδ (J), . . . ), we can find Uδ (J) = {t ∈ J : d(t, O(J)) < δ} inside 1 2 the union for some δ > 0 such that the second equality in (3.1.6) holds. On the other hand, By the Kac-Rice metatheorem, E E E E{Mu (J)} − E{Mu (Uδ (J))} = E{Mu (J \ Uδ (J))} ≤ ∞ 1 1 (2π)k/2 J\Uδ (J) (detΛJ (t))1/2 dt × E{|det 2 X|J (t)||X(t) = x, u pX(t) (x| X|J (t) = 0) X|J (t) = 0} dx. But, by the definition of Uδ (J) and continuity, sup t∈J\Uδ (J) Var(X(t)| X|J (t) = 0) < 1 − ε0 70 (3.1.10) for some ε0 > 0, hence the first equality in (3.1.6) holds and the third equality in (3.1.6) follows similarly. We finish the proof. Following the same proof in Lemma 2.3.6, we obtain the following result. Lemma 3.1.6 Let X = {X(t) : t ∈ RN } be a centered Gaussian random field satisfying (H1) and (H3). 
Let J and J be two faces of T such that their distance is positive, i.e., inf t∈J,s∈J E E s − t > δ0 for some δ0 > 0, then E{Mu (J)Mu (J )} is super-exponentially small. The next result is an extension of Theorem 2.3.7. Lemma 3.1.7 Let X = {X(t) : t ∈ RN } be a centered Gaussian random field satisfying ¯ ¯ (H1) and (H3). Let J and J be two neighboring faces, that is J ∩ J = ∅. Suppose ¯ ¯ {t ∈ J ∩ J : ν(t) = 1, νj (t) = 0, ∀j ∈ σ(J) ∪ σ(J )} = ∅, (3.1.11) E E then E{Mu (J)Mu (J )} is super-exponentially small. Proof Condition (3.1.11) implies that there exists ε0 > 0 such that sup Var(X(t)| X|J (t), X|J (t)) ≤ 1 − ε0 , t∈U (δ) ¯ ¯ where U (δ) = {t ∈ J ∪ J : d(t, J ∩ J ) ≤ δ} and δ is a sufficiently small positive number. Following the same proof in Theorem 2.3.7 yields our desired result. The next result follows from similar arguments in the proof of Theorem 2.3.8. 71 Lemma 3.1.8 Let X = {X(t) : t ∈ RN } be a centered Gaussian random field satisfying ¯ ¯ (H1), (H3) and (H4). Let J and J be two neighboring faces, that is J ∩ J = ∅. Then E E E{Mu (J)Mu (J )} is super-exponentially small. Proof ¯ ¯ Let I = J ∩ J . We follow the assumptions in (2.3.16) and assume also that all elements in ε(J) and ε(J ) are 1, which implies E(J) = RN −k and E(J ) = RN −k . + + E E If condition (3.1.11) is satisfied, then E{Mu (J)Mu (J )} is super-exponentially small by Lemma 3.1.7. So we will focus on the alternative case, which is I0 := {t ∈ I : ν(t) = 1, νj (t) = 0, ∀1 ≤ j ≤ k + k − m} = ∅. Let B(I0 , δ) = {t ∈ J ∪ J : d(t, I0 ) < δ}, where δ is a small number to be specified. E E Note that E{Mu (J)Mu (J )} can be written as E E E E E{[Mu (J ∩ B(I0 , δ)) + Mu (J ∩ B c (I0 , δ))][Mu (J ∩ B(I0 , δ)) + Mu (J ∩ B c (I0 , δ))]} 2 2 E E = E{Mu (J ∩ B(I0 , δ))Mu (J ∩ B(I0 , δ))} + o(e−αu −u /2 ), since the rest terms are super-exponentially small by the same arguments in Lemma 3.1.7. E E Therefore, to prove the result, we may estimate E{Mu (J ∩ B(I0 , δ))Mu (J ∩ B(I0 , δ))} E E instead of E{Mu (J)Mu (J )} itself, with only a super-exponentially small difference. By (H4), ΣJ∪J (t) := E{X(t)Xij (t)}i,j=1,...,k+k −m are negative definite for all t ∈ I0 , so that by continuity (similar to Lemma 2.3.9), we can 72 choose δ small enough such that e, −ΣJ∪J (t)e ≥ α0 , ∀t ∈ B(I0 , δ), e ∈ Sk+k −m−1 (3.1.12) for some constant α0 > 0. We first consider the case k ≥ 1. By the Kac-Rice metatheorem, E E E{Mu (J ∩ B(I0 , δ))Mu (J ∩ B(I0 , δ))} ∞ ≤ dt J∩B(I0 ,δ) ∞ 0 dzk+1 · · · ds J ∩B(I0 ,δ) ∞ 0 dzk+k −m ∞ dx u dy u ∞ ∞ 0 dwm+1 · · · 0 dwk E{|det 2 X|J (t)||det 2 X|J (s)||X(t) = x, X(s) = y, X|J (t) = 0, Xk+1 (t) = zk+1 , . . . , Xk+k −m (t) = zk+k −m , X|J (s) = 0, Xm+1 (s) = wm+1 , . . . , Xk (s) = wk } × pt,s (x, y, 0, zk+1 , . . . , zk+k −m , 0, wm+1 , . . . , wk ) A(t, s) dtds, := (J∩B(I0 ,δ))×(J ∩B(I0 ,δ)) (3.1.13) where pt,s (x, y, 0, zk+1 , . . . , zk+k −m , 0, wm+1 , . . . , wk ) is the density of (X(t), X(s), X|J (t), Xk+1 (t), . . . , Xk+k −m (t), X|J (s), Xm+1 (s), . . . , Xk (s)) evaluated at (x, y, 0, zk+1 , . . . , zk+k −m , 0, wm+1 , . . . , wk ). Let {e1 , e2 , . . . , ek+k −m } be the standard orthonormal basis of Rk+k −m . For t ∈ J and s ∈ J , let et,s be the projection of (s − t)T / s − t on span{J, J }, and let αi (t, s) = 73 ei , −ΣJ∪J (t)et,s , then k+k −m k+k −m ei , −ΣJ∪J (t)et,s ei = −ΣJ∪J (t)et,s = αi (t, s)ei . i=1 (3.1.14) i=1 It follows from (3.1.12) that et,s , −ΣJ∪J (t)et,s ≥ α0 (3.1.15) for all t ∈ J ∩ B(I0 , δ) and s ∈ J ∩ B(I0 , δ). 
Let Di = {(t, s) ∈ (J ∩ B(I0 , δ)) × (J ∩ B(I0 , δ)) : αi (t, s) ≥ βi }, if m + 1 ≤ i ≤ k, Di = {(t, s) ∈ (J ∩ B(I0 , δ)) × (J ∩ B(I0 , δ)) : αi (t, s) ≤ −βi }, if k + 1 ≤ i ≤ k + k − m, m D0 = αi (t, s) ei , et,s ≥ β0 , (t, s) ∈ (J ∩ B(I0 , δ)) × (J ∩ B(I0 , δ)) : i=1 (3.1.16) where β0 , β1 , . . . , βk+k −m are positive constants such that β0 + k+k −m i=m+1 βi < α0 . As in the proof of Theorem 2.3.8, we see that D0 ∪ ∪k+k −m Di is a covering of (J ∩ B(I0 , δ)) × i=m+1 (J ∩ B(I0 , δ)). By (3.1.13), E E E{Mu (J ∩ B(I0 , δ))Mu (J ∩ B(I0 , δ))} k+k −m ≤ A(t, s) dtds + D0 A(t, s) dtds. i=m+1 Di Following the same arguments in the proof of Theorem 2.3.8, we obtain that and Di D0 A(t, s) dtds A(t, s) dtds are all super-exponentially small for i = m + 1, . . . , k + k − m, com- pleting the proof. 74 Theorem 3.1.9 Let X = {X(t) : t ∈ RN } be a centered Gaussian random field satisfying (H1), (H3) and (H4). Then there exists some constant α > 0 such that as u → ∞, 2 2 P sup X(t) ≥ u = E{ϕ(Au )} + o(e−αu −u /2 ). t∈T Proof The result follows from the combination of (2.3.1), (2.3.2), Corollary 3.1.4, Lemma 3.1.5, Lemma 3.1.6, Lemma 3.1.7 and Lemma 3.1.8. 3.2 Applications for Gaussian Fields with a Unique Maximum Point of the Variance In this section, we consider the case when ν(t) attains its maximum 1 at a unique point t0 ∈ J ∈ ∂k T such that νj (t0 ) = 0 for all j ∈ σ(J). / Lemma 3.2.1 Let X be as in Theorem 3.1.9. Suppose ν(t0 ) = 1, where t0 ∈ J ∈ ∂k T and k ≥ 1, then as x → ∞, E{det 2 X|J (t0 )|X(t0 ) = x, Proof X|J (t0 ) = 0} = |ΣJ (t0 )|xk (1 + o(1)). Since ν(t0 ) = 1, ΣJ (t0 ) is negative definite. Let Qt0 ,J be the k × k positive definite matrix such that Qt0 ,J (−ΣJ (t0 ))Qt0 ,J = Ik . Then we can write 2X |J (t0 ) = Q−1 Qt0 ,J 2 X|J (t0 )Qt0 ,J Q−1 , t0 ,J t0 ,J 75 and therefore, E{det 2 X|J (t0 )|X(t0 ) = x, = | − ΣJ (t0 )|E{det(Qt0 ,J Since X(t0 ) and X|J (t0 ) = 0} 2X (3.2.1) |J (t0 )Qt0 ,J )|X(t0 ) = x, X|J (t0 ) = 0}. X|J (t0 ) are independent, E{Qt0 ,J 2 X|J (t0 )Qt0 ,J |X(t0 ) = x, X|J (t0 ) = 0} = −xIk . It follows that E{det(Qt0 ,J 2 X|J (t0 )Qt0 ,J )|X(t0 ) = x, X|J (t0 ) = 0} = E{det(∆(t0 ) − xIk )}, (3.2.2) where ∆(t0 ) = (∆ij (t0 ))i,j∈σ(J) is a k × k Gaussian random matrix such that E{∆(t0 )} = 0 and its covariance matrix is independent of x. By the Laplace expansion of the determinant, det(∆(t0 ) − xIk ) = (−1)k [xk − S1 (∆(t0 ))xk−1 + S2 (∆(t0 ))xk−2 + · · · + (−1)k Sk (∆(t0 ))], where Si (∆(t0 )) is the sum of the k principle minors of order i in ∆(t0 ). Taking the i expectation above, we see that as x → ∞, E{det(∆(t0 ) − xIk )} = (−1)k xk (1 + o(1)). Combining this with (3.2.1) and (3.2.2) completes the proof. 76 2 Let τ (t) = θt , then ∀i ∈ σ(J), τi (t0 ) = 0, since t0 is a local maximum point of τ restricted on J. Assume additionally that the Hessian matrix ΘJ (t0 ) := (τij (t0 ))i,j∈σ(J) 2 is negative definite, then the Hessian matrix of 1/(2θt ) at t0 restricted on J, ΘJ (t0 ) = − 1 2τ 2 (t 0) (τij (t0 ))i,j∈σ(J) = − 1 Θ (t ), 4 J 0 2σT (3.2.3) 2k−1 and h(t) = 1/(2θ 2 ), applying Lemma is positive definite. Let g(t) = |ΛJ − ΛJ (t)|/θt t 2.5.3 with T replaced with J gives us that as u → ∞, Proposition 3.2.2 Let X be as in Theorem 3.1.9. Suppose that ν(t) attains its maximum 1 at a unique point t0 ∈ J ∈ ∂k T such that νj (t0 ) = 0 for all j ∈ σ(J). If ΘJ (t0 ) is negative / definite, then as u → ∞, P sup X(t) ≥ u = t∈T Proof 2k/2 | − ΣJ (t0 )| |ΛJ (t0 )|1/2 | − ΘJ (t0 )|1/2 Ψ(u)(1 + o(1)). 
Since t0 is the only point attaining the maximum variance, and also, νj (t0 ) = 0 for all j ∈ σ(J), similarly to Example 2.4.2, we obtain that as u → ∞, / k P sup X(t) ≥ u = t∈T 2 2 (−1)i E{µi (J)} + o(e−αu −u /2 ) (−1)k i=0 77 for some α > 0. Note that k (−1)i E{µi (J)} (−1)k i=0 = (−1)k J = p X (t) (0)E{det 2 X|J (t)1{X(t)≥u} | X|J (t) = 0}dt |J (−1)k ∞ 1 (2π)k/2 T |ΛJ (t)|1/2 E{det 2 X|J (t)|X(t) = x, dt u X|J (t) = 0} × pX(t) (x| 2 X|J (t) = 0)dx = ∞ (−1)k (2π)(k+1)/2 u 1 dx T θt |ΛJ (t)|1/2 E{det 2 X|J (t)|X(t) = x, 2 2 X|J (t) = 0}e−x /2θt dt. Now we apply the Laplace method in Lemma 2.5.3 with g(t) = 1 θt |ΛJ 1 h(t) = 2 , 2θt (t)|1/2 u= E{det 2 X|J (t)|X(t) = x, X|J (t) = 0 , (3.2.4) x2 , and obtain k (−1)k (−1)i E{µi (J)} i=0 = (−1)k (2π)k/2 (2π)(k+1)/2 |ΛJ (t0 )|1/2 |ΘJ (t0 )|1/2 ∞ × u = = E{det 2 X|J (t0 )|X(t0 ) = x, (−1)k (2π)k/2 |ΣJ (t0 )| (2π)k/2 |ΛJ (t0 )|1/2 |ΘJ (t0 )|1/2 2k/2 | − ΣJ (t0 )| |ΛJ (t0 )|1/2 | − ΘJ (t0 )|1/2 2 X|J (t0 ) = 0}x−k e−x /2 (1 + o(1))dx Ψ(u)(1 + o(1)) Ψ(u)(1 + o(1)), where the second equality is due to Lemma 3.2.1 and the last line is due to (3.2.3). 78 (3.2.5) If dim(T ) = 1, then the result in Proposition 3.2.2 becomes much simpler as stated in the following. Corollary 3.2.3 Let T ⊂ R and let X be as in Theorem 3.1.9. Suppose ν(t) attains its maximum 1 at a unique interior point t0 , and additionally, Var(X (t0 )) + E{X(t0 )X (t0 )} = 0. Then as u → ∞, P sup X(t) > u = t∈T Proof −1/2 Var(X (t0 )) Ψ(u)(1 + o(1)). +1 E{X(t0 )X (t0 )} Note that, under our assumptions, k = 1, Σ(t0 ) = E{X(t0 )X (t0 )}, Λ(t0 ) = Var(X (t0 )). 2 Also, τ (t) = θt = Var(X(t)|X (t)) implies Θ(t0 ) = τ (t0 ) = − 2E{X(t0 )X (t0 )}(Var(X (t0 )) + E{X(t0 )X (t0 )}) . Var(X (t0 )) Applying Proposition 3.2.2 gives P sup X(t) ≥ u = t∈T = 21/2 | − Σ(t0 )| |Λ(t0 )|1/2 | − Θ(t0 )|1/2 Ψ(u)(1 + o(1)) 1/2 E{X(t0 )X (t0 )} Ψ(u)(1 + o(1)), Var(X (t0 )) + E{X(t0 )X (t0 )} completing the proof. 79 (3.2.6) 3.3 Gaussian Fields on Manifolds without Boundary In this section, we assume that T is an N -dimensional smooth manifold without boundary (SN for example). Let {∂/∂xi }1≤i≤N be a natural coordinate vector field and let X be a smooth function on T . Define X= ∂X ∂X ,..., N , 1 ∂x ∂x 2 2 and let ( ∂i X j (t)) be the abbreviation of the N × N matrix ( ∂i X j (t))i,j=1,...,N . ∂x ∂x ∂x ∂x If X is a Morse function, then according to Corollary 9.3.5 or page 211-212 in Adler and Taylor (2007), the Euler characteristic of the excursion set Au = {t ∈ T : X(t) ≥ u} is given by N ϕ(Au ) = (−1)N (−1)k µk (T ), k=0 where µk (T ) := # t ∈ T : X(t) ≥ u, X(t) = 0, index ∂ 2X (t) = k . ∂xi ∂xj We also define the number of local maxima above level u as Mu (T ) := # t ∈ T : X(t) ≥ u, X(t) = 0, index ∂ 2X (t) = N . ∂xi ∂xj Since T has no boundary, we have the following much simpler bounds for the excursion probability. 1 E{Mu (T )} ≥ P sup X(t) ≥ u ≥ E{Mu (T )} − E{Mu (T )(Mu (T ) − 1)}. 2 t∈T 80 (3.3.1) We assume again supt∈T Var(X(t)) = 1. Lemma 3.3.1 Let T be an oriented, compact C 3 manifold without boundary. Let {X(t) : t ∈ T } be a Gaussian random field such that X ∈ C 3 (T ) a.s. and (H3) is fulfilled. Then there exists some constant α > 0 such that as u → ∞, 2 2 E{Mu (T )(Mu (T ) − 1)} = o(e−αu −u /2 ). Proof Since T is compact it has a finite atlas. Let (U, ϕ) be one of its charts and consider X := X ◦ ϕ−1 : ϕ(U ) ⊂ RN → R. Then it follows immediately from the definition of Mu that Mu (X, U ) ≡ Mu (X, ϕ(U )). Since X ∈ C 3 (M ), the condition (H1) holds for X. 
Applying Lemma 2.3.4 yields 2 2 E{Mu (X, ϕ(U ))(Mu (X, ϕ(U )) − 1)} = o(e−αu −u /2 ). This verifies the desired result. Lemma 3.3.2 Let T be an oriented, compact C 3 manifold without boundary. Let {X(t) : t ∈ T } be a Gaussian random field such that X ∈ C 3 (T ) a.s. and (H3) is fulfilled. Then 81 there exists some constant α > 0 such that as u → ∞, 2 2 E{Mu (T )} = E{ϕ(Au )} + o(e−αu −u /2 ). Proof The result follows from Lemma 3.1.5 and the arguments in the proof of Lemma 3.3.1. Theorem 3.3.3 Let T be an oriented, compact C 3 manifold without boundary. Let {X(t) : t ∈ T } be a Gaussian random field such that X ∈ C 3 (T ) a.s. and (H3) is fulfilled. Then there exists some constant α > 0 such that as u → ∞, 2 2 P sup X(t) ≥ u = E{ϕ(Au )} + o(e−αu −u /2 ), t∈T where E{ϕ(Au )} is formulated by (−1)N T Proof E det ∂ 2X (t) 1{X(t)≥u} ∂xi ∂xj X(t) = 0 p X(t) (0)∂x1 ∧ · · · ∧ ∂xN . The result follows immediately from the combination of (3.3.1), Lemma 3.3.1, Lemma 3.3.2 and the Kac-Rice metatheorem on manifolds [cf. Theorem 12.1.1 in Adler and Taylor (2007)]. 82 3.4 Gaussian Fields on Convex Sets with Smooth Boundary Let T be a compact, convex, N -dimensional subset of RN with smooth boundary ∂T . Morse theorem gives N ϕ(Au ) = (−1)N (−1)k µ ◦ k (T N −1 ) + (−1)N −1 k=0 (−1)k µk (∂T ), k=0 where ◦ ◦ µk (T ) = #{t ∈T : X(t) ≥ u, X(t) = 0, index( 2 X(t)) = k}, µk (∂T ) = #{t ∈ ∂T : X(t) ≥ u, X|∂T (t) = 0, X(t), n(t) ≥ 0, index( 2 X|∂T (t)) = k}, and n(t) is the unit normal vector pointing outwards. We also define the number of extended outward local maxima above level u as ◦ E ◦ Mu (T ) = #{t ∈T : X(t) ≥ u, X(t) = 0, index( 2 X(t)) = N }, E Mu (∂T ) = #{t ∈ ∂T : X(t) ≥ u, X|∂T (t) = 0, X(t), n(t) ≥ 0, index( 2 X|∂T (t)) = N − 1}. ◦ Note that T can be stratified into T ∪∂T . We have the following bounds for the excursion 83 probability. ◦ E E E{Mu (T )} + E{Mu (∂T )} ≥ P sup X(t) ≥ u t∈T ≥ E E E ◦ E ◦ E{Mu (T )} + E{Mu (∂T )} − E{Mu (T )Mu (∂T )} (3.4.1) 1 1 E ◦ E ◦ E E − E{Mu (T )(Mu (T ) − 1)} − E{Mu (∂T )(Mu (∂T ) − 1)}. 2 2 Since ∂T is a hypersurface. We may consider ∂T as an (N − 1)-dimensional submanifold embedded on RN . Similarly, we have the following result. Lemma 3.4.1 Let T be a compact, convex, N -dimensional subset of RN with smooth boundary ∂T . Let {X(t) : t ∈ T } be a Gaussian random field such that (H1) and (H3) are fulfilled. Then there exists some constant α > 0 such that as u → ∞, 2 2 E ◦ E ◦ E{Mu (T )(Mu (T ) − 1)} = o(e−αu −u /2 ), 2 2 E E E{Mu (∂T )(Mu (∂T ) − 1)} = o(e−αu −u /2 ), 2 2 E ◦ E E{Mu (T )} + E{Mu (∂T )} = E{ϕ(Au )} + o(e−αu −u /2 ). The next lemma shows that the crossing term is also super-exponentially small. Lemma 3.4.2 Let T be a compact, convex, N -dimensional subset of RN with smooth boundary ∂T . Let {X(t) : t ∈ T } be a Gaussian random field such that (H1) and (H3) are fulfilled. Then there exists some constant α > 0 such that as u → ∞, 2 2 E ◦ E E{Mu (T )Mu (∂T )} = o(e−αu −u /2 ). 84 Proof By the Kac-Rice metatheorem, ◦ E E E{Mu (T )Mu (∂T )} ∞ ∞ ≤ ◦ dt T dx ds ∂T u u dy E{|det 2 X(t)||det 2 X|∂T (s)||X(t) = x, X(s) = y, X(t) = 0, X|∂T (s) = 0}pX(t),X(s), X(t), X (s) (x, y, 0, 0). |∂T By similar arguments in Theorem 2.3.7, if X(s) = 0 for all s ∈ ∂T such that ν(s) = 1, then ◦ E E E{Mu (T )Mu (∂T )} is super-exponentially small. Hence we will consider the alternative case when I0 := {s ∈ ∂T : ν(s) = 1, X(s) = 0} = ∅. Let B(I0 , δ) = {t ∈ T : d(t, I0 ) < δ}, where δ is a small positive number to be specified. 
◦ ◦ E E E As discussed in Lemma 3.1.8, E{Mu (T )Mu (∂T )} can be reduced to E{Mu (T E ∩B(s0 , δ))Mu (∂T ∩ B(I0 , δ))} with only a super-exponentially small difference. Due to ◦ E E the compactness of T , it suffice to show that E{Mu (T ∩B(s0 , δ))Mu (∂T ∩ B(s0 , δ))} is super-exponentially small for some s0 ∈ I0 and δ > 0, where B(s0 , δ) = {t ∈ T : d(t, s0 ) < δ}. Notice another fact that for all s ∈ ∂T such that ν(s) = 1, 2 ν(s) are negative semidefinite and hence Σ(s) = E{X(s) 2 X(s)} are negative definite. Therefore by continuity, we may choose δ small enough such that − Σ(t)et,s , et,s ≥ α0 , ◦ ∀t ∈T ∩B(s0 , δ), s ∈ ∂T ∩ B(s0 , δ), 85 (3.4.2) for some positive constant α0 , where et,s = (s − t)/ s − t . For s ∈ ∂T ∩ B(s0 , δ), let Z(s) = the tangent space of ∂T at s, so that X(s), n(s) , and denote by Πs the projection onto X|∂T (s) = Πs X(s). Then ◦ E E E{Mu (T ∩B(s0 , δ)))Mu (∂T ∩ B(s0 , δ))} dt T ∩B(s0 ,δ)) ∂T ∩B(s0 ,δ) dy dx ds ∞ ∞ ∞ = ◦ 0 u u dz E{|det 2 X(t)||det 2 X|∂T (s)| |X(t) = x, X(s) = y, X(t) = 0, X|∂T (s) = 0, Z(s) = z} × pX(t),X(s), X(t), X (s),Z(s) (x, y, 0, 0, z) |∂T ∞ ∞ ≤ ◦ T ∩B(s0 ,δ)) dx ds dt ∂T ∩B(s0 ,δ) u 0 dz E{|det 2 X(t)||det 2 X|∂T (s)||X(t) = x, X(t) = 0, X|∂T (s) = 0, Z(s) = z}pX(t), X(t), X (s),Z(s) (x, 0, 0, z) |∂T A(t, s)dtds. ◦ (T ∩B(s0 ,δ))×(∂T ∩B(s0 ,δ)) := (3.4.3) We can bound the integral in (3.4.3) as the following. A(t, s)dtds ◦ (T ∩B(s0 ,δ))×(∂T ∩B(s0 ,δ)) ≤ A(t, s)dtds + D1 A(t, s)dtds, (3.4.4) D2 where ◦ D1 = {t ∈T ∩B(s0 , δ), s ∈ ∂T ∩ B(s0 , δ) : − Σ(t)et,s , n(s) ≥ b1 }, ◦ N −1 D2 = {t ∈T ∩B(s0 , δ), s ∈ ∂T ∩ B(s0 , δ) : − Σ(t)et,s , Ei (s) et,s , Ei (s) ≥ b2 }, i=1 b1 and b2 are positive numbers such that b1 + b2 < α0 , {E1 (s), · · · , EN −1 (s)} is the orthonormal basis of the tangent space of ∂T at s. This is because, if (t, s) does not belong to 86 D1 nor D2 , then − Σ(t)et,s , et,s N −1 = − Σ(t)et,s , n(s) et,s , n(s) + − Σ(t)et,s , Ei (s) et,s , Ei (s) i=1 < b1 + b2 < α0 , where we use the fact that the convexity of T implies et,s , n(s) ≥ 0. But this conflicts ◦ (3.4.2), hence D1 ∪ D2 is a covering of (T ∩B(s0 , δ)) × (∂T ∩ B(s0 , δ)). We first show that D2 A(t, s)dtds is super-exponentially small. By similar arguments in the proof for Gaussian fields on rectangle, we see that there exists positive constants C1 , C2 and N1 such that E{|det 2 X(t)||det 2 X|∂T (s)||X(t) = x, X(t) = 0, X|∂T (s) = 0} ≤ C1 (xN1 + 1), detCov( X(t), X|∂T (s)) ≥ C2 s − t 2(N −1) . Therefore, ∞ A(t, s) ≤ u E{|det 2 X(t)||det 2 X|∂T (s)||X(t) = x, X(t) = 0, X|∂T (s) = 0} × pX(t) (x| X(t) = 0, X|∂T (s) = 0)p X(t), X (s) (0, 0)dx |∂T ≤ C3 s − t 1−N ∞ u (1 + xN1 )pX(t) (x| X(t) = 0, X|∂T (s) = 0)dx for some positive constant C3 . 87 (3.4.5) On the other hand, as s − t → 0, Var(X(t)| X(t), X|∂T (s)) = Var(X(t)| X(t), Πs X(s)) = Var(X(t)| X(t), Πs ( X(s) − X(t))/ s − t ) = Var(X(t)| X(t), Πs ( 2 X(t)et,s )) + o(1) ≤ Var(X(t)|Πs ( 2 X(t)et,s )) + o(1) ≤ 1 − (Πs (Σ(t)et,s ))[Cov(Πs ( 2 X(t)et,s ))]−1 (Πs (Σ(t)et,s ))T + o(1), where the third equality is due to Taylor’s formula. Note that Cov(Πs ( 2 X(t)et,s )) is bounded away from 0 because of the regularity condition (H3). Also, by the definition of D2 , the vectors Πs (Σ(t)et,s ) are not vanishing for all (t, s) ∈ D2 , thus there exists a constant ε1 > 0 such that Var(X(t)| X(t), X|∂T (s)) ≤ 1 − ε1 , ∀(t, s) ∈ D2 . ◦ Combining this with (3.4.5), and noting that s − t 1−N is integrable on (T ∩B(s0 , δ))) × (∂T ∩ B(s0 , δ)), we conclude that Now we turn to estimating D2 A(t, s)dtds D1 A(t, s)dtds. 
is super-exponentially small. For (t, s) ∈ D1 , we have pX(t), X(t), X (s),Z(s) (x, 0, 0, z) |∂T = pΠs ( X(t)), X (s) (0, 0|X(t) = x, Z(s) = z, |∂T × pX(t),Z(s) (x, z| X(t), n(s) = 0)p X(t),n(s) X(t), n(s) = 0) (0) ≤ C4 (detCov(X(t), X(t), X|∂T (s), Z(s)))−1/2 × exp − x2 z2 2ρ(t, s)xz 1 + 2 − 2 2(1 − ρ(t, s)2 ) σ1 (t, s) σ2 (t, s) σ1 (t, s)σ2 (t, s) 88 (3.4.6) for some positive constant C4 , where 2 σ1 (t, s) = Var(X(t)| X(t), n(s) ) = detCov(X(t), X(t), n(s) ) , Var( X(t), n(s) ) 2 σ2 (t, s) = Var(Z(s)| X(t), n(s) ) = detCov(Z(s), X(t), n(s) ) , Var( X(t), n(s) ) ρ(t, s) = Recall Z(s) = E{X(t)Z(s)| X(t), n(s) = 0} . σ1 (t, s)σ2 (t, s) X(s), n(s) , similarly to the rectangle case, one can check that there exits positive constants C5 and C6 such that 2 C5 s − t 2 ≤ σ2 (t, s) ≤ C6 s − t 2 . (3.4.7) Applying Taylor formula, we obtain ρ(t, s) = = 1 E{X(t) E{X(t)Z(s)} − σ1 (t, s)σ2 (t, s) 1 E{X(t) σ1 (t, s)σ2 (t, s) − 2 X(t)(s − t) + X(t) + E{X(t) X(t), n(s) } E{ Var( X(t), n(s) ) X(t), n(s) }E{Z(s) X(t), n(s) } Var( X(t), n(s) ) X(t), n(s) s − t 1+η Yt,s , n(s) } X(t) + 2 X(t)(s − t) + s − t 1+η Yt,s , n(s) } = s−t E{X(t) σ1 (t, s)σ2 (t, s) − 2 X(t)e E{X(t) X(t), n(s) } E{ Var( X(t), n(s) ) t,s + s − t η Yt,s , n(s) } X(t), n(s) 2 X(t)e t,s + s − t η Yt,s , n(s) } . By our assumption, E{X(s0 ) X(s0 )}=0, therefore if δ is sufficiently small, E{X(t) X(t)} 89 ◦ gets close to 0 for t ∈T ∩B(s0 , δ). Thus, as s − t → 0, s−t ( Σ(t)et,s , n(s) − o(1)) σ1 (t, s)σ2 (t, s) ρ(t, s) = s−t ≤ σ1 (t, s)(C5 s − t 2 )1/2 (−b1 − o(1)) (3.4.8) < −ε2 for some positive constant ε2 , where the second inequality comes from (3.4.7) and the definition of D1 . By similar arguments in the proof for Gaussian fields on rectangle, we see that there exists positive constants C7 , C8 , N2 and N3 such that E{|det 2 X(t)||det 2 X|∂T (s)||X(t) = x, X(t) = 0, X|∂T (s) = 0, Z(s) = z} ≤ C7 (xN2 + (z/ s − t )N3 + 1) and detCov(X(t), X(t), X|∂T (s), Z(s)) ≥ C8 s − t 2N . Combining this with (3.4.6), and making change of variable z = z/ s − t and σ2 (t, s) = ˜ ˜ σ2 (t, s)/ s − t , we obtain A(t, s) ≤ C9 s − t −N × exp − ∞ dx u dz (xN2 + (z/ s − t )N3 + 1) 0 1 x2 z2 2ρ(t, s)xz + 2 − 2 ) σ 2 (t, s) 2(1 − ρ(t, s) σ2 (t, s) σ1 (t, s)σ2 (t, s) 1 ≤ C9 s − t 1−N × exp ∞ ∞ ∞ dx u d˜ (xN2 + z N3 + 1) z ˜ 0 1 x2 z2 ˜ 2ρ(t, s)x˜ z − + 2 − 2 σ 2(1 − ρ(t, s)2 ) σ1 (t, s) σ2 (t, s) σ1 (t, s)˜2 (t, s) ˜ 90 (3.4.9) for some positive constant C9 . Applying Lemma 2.3.10 yields that D1 A(t, s)dtds is super- exponentially small. Theorem 3.4.3 Let T be a compact, convex, N -dimensional subset of RN with smooth boundary ∂T . Let {X(t) : t ∈ T } be a Gaussian random field such that (H1) and (H3) are fulfilled. Then there exists some constant α > 0 such that as u → ∞, 2 2 P sup X(t) ≥ u = E{ϕ(Au )} + o(e−αu −u /2 ). t∈T Proof The result follows immediately from applying (3.4.1), Lemma 3.4.1 and Lemma 3.4.2. 3.5 Gaussian Fields on Convex Sets with Piecewise Smooth Boundary Let T be an N -dimensional compact and convex set in RN . Suppose it has piecewise smooth boundary and can be stratified as T = ∪N ∂i T , where ∂i T is the i-dimensional boundary i=0 of T made up of the disjoint union of a finite number of i-dimensional manifolds without boundary. Define the support cone [cf. Adler and Taylor (2007, p.188)] of T at t as St T := {ξ ∈ RN : ∃δ > 0, C 1 curve γ : (−δ, δ) → Rd such that γ(0) = t, γ(0) = ξ, γ(s) ∈ T for all s ∈ [0, δ)}. It is easy to check that St T contains the tangent space of t. 91 Define the normal cone [cf. 
Adler and Taylor (2007, p.189)] of T at t as Nt T := {z ∈ RN : z, ξ ≤ 0 for all ξ ∈ St T }. So t ∈ T is called extended outward critical point if X(t) ∈ Nt T . Morse theorem gives N k (−1)i µi (J), (−1)k ϕ(Au ) = i=0 k=0 J∈∂k T where µi (J) := #{t ∈ J : X(t) ≥ u, X|J (t) = 0, index( 2 X|J (t)) = i, X(t) ∈ Nt T } = #{t ∈ J : X(t) ≥ u, X(t) ∈ Nt T, index( 2 X|J (t)) = i}, and the last line above is due to the fact that X(t) ∈ Nt T implies X|J (t) = 0 for all t ∈ J. We will need a modified version of (H4), say (H4 ), as the following. (H4 ). ∀t ∈ J ∈ ∂k T such that ν(t) = 1 and 0 ≤ k ≤ N − 2, (E{X(t) 2 X(t)})|L is negative definite, where L is the largest subspace of RN such that ( ν(t))|L = 0. Here, (E{X(t) 2 X(t)})|L and ( ν(t))|L are the projections of E{X(t) 2 X(t)} and ν(t) onto the subspace L, respectively. Similar to Proposition 3.1.1, we have the following result. Proposition 3.5.1 Let X(t) ∈ C 2 (RN ) a.s. and let L be the largest subspace of RN such that ( ν(t))|L = 0. If ( 2 ν(t))|L is negative semidefinite for each t ∈ J ∈ ∂k T such that ν(t) = 1 and 0 ≤ k ≤ N − 2, then (H4 ) holds. 92 For J ∈ ∂k T , we also define the number of extended outward maxima above level u as E Mu (J) := #{t ∈ J : X(t) ≥ u, X|J (t) = 0, index( 2 X|J (t)) = k, X(t) ∈ Nt T } = #{t ∈ J : X(t) ≥ u, X(t) ∈ Nt T, index( 2 X|J (t)) = k}. Then similarly to Lemma 2.3.1, one has N E {Mu (J) ≥ 1} a.s. sup X(t) ≥ u = t∈T k=0 J∈∂k T Thus by similar discussions, we get N E E{Mu (J)} ≥ P sup X(t) ≥ u t∈T k=0 J∈∂k T N ≥ k=0 J∈∂k T 1 E E E E{Mu (J)} − E{Mu (J)(Mu (J) − 1)} 2 (3.5.1) E E E{Mu (J)Mu (J )}. − J=J Theorem 3.5.2 Let T be a compact, convex, N -dimensional subset of RN with piecewise smooth boundary. Let {X(t) : t ∈ T } be a Gaussian random field such that (H1), (H3) and (H4 ) are fulfilled. Then there exists some constant α > 0 such that as u → ∞, 2 2 P sup X(t) ≥ u = E{ϕ(Au )} + o(e−αu −u /2 ). t∈T Proof Similar to the arguments in proving the smooth boundary case, we only need to E E show that E{Mu (J)Mu (J )} is super-exponentially small for neighboring faces J ∈ ∂k T and E E J ∈ ∂k T . Moreover, similarly, it suffices to show that E{Mu (J ∩B(s0 , δ))Mu (J ∩B(s0 , δ)) 93 ¯ ¯ is super-exponentially small for s0 ∈ I = J ∩ J such that ν(s0 ) = 1 and ( ν(s0 ))|L = 0, ¯ ¯ where L = span{Ss0 J ∪ Ss0 J }, B(s0 , δ) = {t ∈ T : d(t, s0 ) < δ} and δ is a small positive number to be specified. Without loss of generality, we assume k ≥ k and k ≥ 1. Denote by {E1 , · · · , Em } the normal basis on I. Since J is a k-dimensional face, there are (k −m) many (m+1)-dimensional faces which belong to the closure of J and are adjacent to I as well. Denote these (m + 1)-dimensional faces by {Jm+1 , · · · , Jk } ⊂ ∂m+1 T . Now, for each i = m + 1, . . . , k, we may view I as an m-dimensional boundary of Ji , and let Ei (s0 ) ¯ be the unit normal vector pointing outwards at s0 , i.e., Ei (s0 ) ∈ Ns0 Ji . In such way, we have a smooth frame (not necessary orthogonal) {E1 (t), · · · , Em (t), Em+1 (t), · · · , Ek (t)} on ¯ J ∩ B(s0 , δ). Similarly, since J is a k -dimensional face, there are (k − m) many (m + 1)-dimensional faces which belong to the closure of J and are adjacent to I as well. Denote these (m + 1)dimensional faces by {Jm+1 , · · · , J } ⊂ ∂m+1 T . Now, for each i = m + 1, . . . , k , we may k view I as an m-dimensional boundary of Ji , and let Ei (s0 ) be the unit normal vector pointing ¯ outwards at s0 , i.e., Ei (s0 ) ∈ Ns0 Ji . 
In such way, we have a smooth frame (not necessary ¯ orthogonal) {E1 (s), · · · , Em (s), Em+1 (s), · · · , E (s)} on J ∩ B(s0 , δ). k By (H4 ), ΣL (t) := (E{X(t) 2 X(t)})L is negative definite at t = s0 . If δ is small enough, by continuity, − ΣL (t)et,s , et,s ≥ α0 , ∀t ∈ J ∩ B(s0 , δ), s ∈ J ∩ B(s0 , δ), (3.5.2) for some positive constant α0 , where et,s is the projection of (s − t)/ s − t on L (in fact, the projection only removes the vanishing components of (s − t)/ s − t such that the number 94 of components in et,s is the same as dim(L)). For (t, s) ∈ (J ∩ B(s0 , δ)) × (J ∩ B(s0 , δ)), let αi (t, s) = − ΣL (t)et,s , Ei (t) , i = 1, . . . , m, m + 1, . . . , k, αj (t, s) = − ΣL (t)et,s , Ej (s) , j = m + 1, . . . , k . We claim that there exist positive constants b0 , bm+1 , · · · , bk , bm+1 , · · · , b , whose propk k erty needs to be specified later, such that D0 ∪ (∪k i=m+1 Di ) ∪ (∪j=m+1 Dj ) is a covering of (J ∩ B(s0 , δ)) × (J ∩ B(s0 , δ)), where Di = {(t, s) ∈ (J ∩ B(s0 , δ)) × (J ∩ B(s0 , δ)) : αi (t, s) ≥ bi } if m + 1 ≤ i ≤ k; Dj = {(t, s) ∈ (J ∩ B(s0 , δ)) × (J ∩ B(s0 , δ)) : αj (t, s) ≤ −bj } if m + 1 ≤ j ≤ k ; m D0 = |αi (t, s)|2 ≥ b0 . (t, s) ∈ (J ∩ B(s0 , δ)) × (J ∩ B(s0 , δ)) : i=1 (3.5.3) Note that as δ gets smaller, (J ∩ B(s0 , δ)) ∪ (J ∩ B(s0 , δ)) becomes more similar to two intersecting flat faces, and et,s will be around the convex cone created by vectors {±E1 (t), . . . , ±Em (t), Em+1 (t), . . . , Ek (t), −Em+1 (s), . . . , −Ek (s)}. Hence due to the convexity of T , there exists ε0 > 0, whose property needs to be specified later, such that for all (t, s) ∈ (J ∩ B(s0 , δ)) × (J ∩ B(s0 , δ)) and δ sufficiently small, the following representation holds: k et,s = k et,s , Ei (t) Ei (t) + i=1 et,s , Ej (s) Ej (s), j=m+1 95 (3.5.4) where et,s , Ei (t) ∈ R for i = 1, . . . , m, and et,s , Ei (t) ≥ −ε0 , et,s , Ej (s) ≤ ε0 , ∀i = m + 1, . . . , k, ∀j = m + 1, . . . , k . By continuity, there exists a universal positive constant r such that max{|αi (t, s)|, |αj (t, s)|} ≤ r, sup (t,s)∈J×J for all i = m + 1, . . . , k and j = m + 1, . . . , k . If (t, s) ∈ (J ∩ B(s0 , δ)) × (J ∩ B(s0 , δ)) does not belong to any of sets in (3.5.3), then by (3.5.4), k m − ΣL (t)et,s , et,s = αi (t, s) et,s , Ei (t) αi (t, s) et,s , Ei (t) + i=m+1 i=1 k αj (t, s) et,s , Ej (s) + (3.5.5) j=m+1 k k max{bj , rε0 }, max{bi , rε0 } + ≤ b0 + j=m+1 i=m+1 where the last inequality we use the facts that for i = m + 1, . . . , k, if (t, s) ∈ Di and / et,s , Ei (t) ≥ 0, then αi (t, s) et,s , Ei (t) ≤ bi ; and for j = m + 1, . . . , k , if (t, s) ∈ Dj and / et,s , Ej (s) ≤ 0, then αj (t, s) et,s , Ej (s) ≤ bj . Now, we choose the positive constants b0 , bm+1 , · · · , bk , bm+1 , · · · , b and ε0 such that k the last line of (3.5.5) is strictly less than α0 . Then − ΣL (t)et,s , et,s < α0 conflicts (3.5.2). k This verifies our claim that D0 ∪ (∪k i=m+1 Di ) ∪ (∪j=m+1 Dj ) is a covering of (J ∩ B(s0 , δ)) × (J ∩ B(s0 , δ)). 96 Due to the convexity of T , if δ is small enough, then for each (t, s) ∈ (J ∩ B(s0 , δ)) × (J ∩ B(s0 , δ)), X(t) ∈ Nt T ⇔ X|J (t) = 0 and X(t), Ej (s) ≥ 0, ∀j = m + 1, . . . , k ; and for each s ∈ J ∩ B(s0 , δ), X(s) ∈ Ns T ⇔ X(s), Ei (t) ≥ 0, X|J (s) = 0 and ∀i = m + 1, . . . , k. 
By the Kac-Rice metatheorem, E E E{Mu (J ∩ B(s0 , δ))Mu (J ∩ B(s0 , δ)) ∞ ∞ ≤ dx ds dt dy u u J ∩B(s0 ,δ) J∩B(s0 ,δ) ∞ ∞ ∞ ∞ dwk dzk −m dwm+1 · · · dzm+1 · · · 0 0 0 0 E{|det 2 X|J (t) det 2 X|J (s)||X(t) = x, X(s) = y, X|J (t) = 0, X|J (s) = 0, X(t), Em+1 (s) = zm+1 , · · · , X(s), Em+1 (t) = wm+1 , · · · , X(t), Ek (s) = zk , X(s), Ek (t) = wk } × pt,s (x, y, 0, zm+1 , · · · , zk , 0, wm+1 , · · · , wk ) := A(t, s) dtds, (J∩B(s0 ,δ))×(J ∩B(s0 ,δ)) where pt,s is the density of (X(t), X(s), X|J (t), X|J (s), X(t), Em+1 (s) , · · · , X(s), Em+1 (t) , · · · , 97 X(t), Ek (s) , X(s), Ek (t) ). Moreover, due to the covering discussed before, this integral can be bounded as A(t, s) dtds (J∩B(s0 ,δ))×(J ∩B(s0 ,δ)) k ≤ k A(t, s) dtds + Di i=m+1 We first show that A(t, s) dtds + Dj j=m+1 D0 A(t, s) dtds A(t, s) dtds. D0 is super-exponentially small. By similar arguments in the proof for Gaussian fields on rectangle, we see that there exist positive constants C1 , C2 and N1 such that E{|det 2 X|J (t)||det 2 X|J (s)||X(t) = x, X|J (t) = 0, X|J (s) = 0} ≤ C1 (xN1 + 1), detCov( X|J (t), X|J (s)) ≥ C2 s − t 2m . Therefore, ∞ A(t, s) ≤ u E{|det 2 X|J (t)||det 2 X|J (s)||X(t) = x, X|J (t) = 0, X|J (s) = 0} × pX(t) (x| X|J (t) = 0, X|J (s) = 0)p X (t), X (s) (0, 0)dx |J |J ≤ C3 s − t m ∞ u (1 + xN1 )pX(t) (x| X|J (t) = 0, X|J (s) = 0)dx (3.5.6) for some positive constant C3 . 98 On the other hand, let Πt be the projection onto span{E1 (t), · · · , Em (t)}, then as δ → 0, Var(X(t)| X|J (t), X|J (s)) ≤ Var(X(t)|Πt X|J (t), Πt X|J (s)) = Var(X(t)|Πt X(t), Πt X(s)) + o(1) = Var(X(t)|Πt X(t), Πt ( X(s) − X(t))/ s − t ) + o(1) = Var(X(t)|Πt X(t), Πt ( 2 X(t)et,s )) + o(1) ≤ Var(X(t)|Πt ( 2 X(t)et,s )) + o(1) ≤ 1 − (Πt (ΣL (t)et,s ))[Cov(Πt ( 2 X(t)et,s ))]−1 (Πt (ΣL (t)et,s ))T + o(1), where the third equality is due to Taylor’s formula. Note that Cov(Πt ( 2 X(t)et,s )) is bounded away from 0 because of the regularity condition (H3). Also, by the definition of D0 , the vectors Πt (ΣL (t)et,s ) are not vanishing for all (t, s) ∈ D0 , thus there exists a constant ε1 > 0 such that Var(X(t)| X|J (t), X|J (s)) ≤ 1 − ε1 , ∀(t, s) ∈ D0 . Combining this with (3.5.6), and noting that s − t m is integrable on (J ∩ B(s0 , δ))) × (J ∩ B(s0 , δ)), we conclude that Now we turn to estimating D0 A(t, s) dtds Di is super-exponentially small. A(t, s)dtds, i = m + 1, . . . , k. Let Πt be the projection onto span{E1 (t), . . . , Em (t), Em+1 (t), . . . , Ei−1 (t), Ei+1 (t), . . . , Ek (t)}, then for (t, s) ∈ Di , 99 we have pX(t), X (t), X (s), |J |J X(s),Ei (t) (x, 0, 0, w) = pΠ ( X(t)), X (s) (0, 0|X(t) = x, t |J × pX(t), X(s),Ei (t) (x, w| X(t), Ei (t) = 0)p ≤ C4 [detCov(X(t), X|J (t), X|J (s), × exp − X(s), Ei (t) = w, X(t), Ei (t) = 0) X(t),Ei (t) (0) (3.5.7) X(s), Ei (t) )]−1/2 1 x2 w2 2ρ(t, s)xw + 2 − 2 ) σ 2 (t, s) 2(1 − ρ(t, s) σ2 (t, s) σ1 (t, s)σ2 (t, s) 1 for some positive constant C4 , where 2 σ1 (t, s) = Var(X(t)| 2 σ2 (t, s) = Var( ρ(t, s) = X(t), Ei (t) ) = X(s), Ei (t) | E{X(t) detCov(X(t), X(t), Ei (t) ) , Var( X(t), Ei (t) ) X(t), Ei (t) ) = detCov( X(s), Ei (t) , X(t), Ei (t) ) , Var( X(t), Ei (t) ) X(s), Ei (t) | X(t), Ei (t) = 0} . σ1 (t, s)σ2 (t, s) Similarly to the rectangle case, one can check that there exits positive constants C5 and C6 such that 2 C5 s − t 2 ≤ σ2 (t, s) ≤ C6 s − t 2 . 
(3.5.8)

Applying Taylor's formula, we obtain
\[
\begin{aligned}
\rho(t,s)={}&\frac{1}{\sigma_1(t,s)\sigma_2(t,s)}\Big[E\big\{X(t)\big\langle \nabla X(t)+\nabla^2X(t)(s-t)+\|s-t\|^{1+\eta}Y_{t,s},\,E_i(t)\big\rangle\big\}\\
&\quad-\frac{E\{X(t)\langle\nabla X(t),E_i(t)\rangle\}}{\operatorname{Var}(\langle\nabla X(t),E_i(t)\rangle)}\,E\big\{\langle\nabla X(t),E_i(t)\rangle\big\langle \nabla X(t)+\nabla^2X(t)(s-t)+\|s-t\|^{1+\eta}Y_{t,s},\,E_i(t)\big\rangle\big\}\Big]\\
={}&\frac{\|s-t\|}{\sigma_1(t,s)\sigma_2(t,s)}\Big[E\big\{X(t)\big\langle \nabla^2X(t)e_{t,s}+\|s-t\|^{\eta}Y_{t,s},\,E_i(t)\big\rangle\big\}\\
&\quad-\frac{E\{X(t)\langle\nabla X(t),E_i(t)\rangle\}}{\operatorname{Var}(\langle\nabla X(t),E_i(t)\rangle)}\,E\big\{\langle\nabla X(t),E_i(t)\rangle\big\langle \nabla^2X(t)e_{t,s}+\|s-t\|^{\eta}Y_{t,s},\,E_i(t)\big\rangle\big\}\Big].
\end{aligned}
\]
By our assumption, $(E\{X(s_0)\nabla X(s_0)\})|_L=0$, so $E\{X(t)\langle\nabla X(t),E_i(t)\rangle\}$ is close to $0$ for $t\in J\cap B(s_0,\delta)$ and $\delta$ small enough. Thus, as $\|s-t\|\to 0$,
\[
\rho(t,s)=\frac{\|s-t\|}{\sigma_1(t,s)\sigma_2(t,s)}\big(\langle\Sigma_L(t)e_{t,s},E_i(t)\rangle+o(1)\big)
\le \frac{\|s-t\|}{\sigma_1(t,s)(C_5\|s-t\|^2)^{1/2}}\big(-b_i+o(1)\big)<-\varepsilon_2 \qquad (3.5.9)
\]
for some positive constant $\varepsilon_2$, where the second inequality comes from (3.5.8) and the definition of $D_i$.

By arguments similar to those in the proof for Gaussian fields on rectangles, there exist positive constants $C_7$, $C_8$, $N_2$ and $N_3$ such that
\[
E\big\{|\det\nabla^2X|_J(t)|\,|\det\nabla^2X|_{J'}(s)|\,\big|\,X(t)=x,\ \nabla X|_J(t)=0,\ \nabla X|_{J'}(s)=0,\ \langle\nabla X(s),E_i(t)\rangle=w\big\}\le C_7\big(x^{N_2}+(w/\|s-t\|)^{N_3}+1\big)
\]
and
\[
\det\operatorname{Cov}\big(X(t),\nabla X|_J(t),\nabla X|_{J'}(s),\langle\nabla X(s),E_i(t)\rangle\big)\ge C_8\|s-t\|^{2(m+1)}.
\]
Combining this with (3.5.7), and making the change of variables $\widetilde w=w/\|s-t\|$ and $\widetilde\sigma_2(t,s)=\sigma_2(t,s)/\|s-t\|$, we obtain
\[
\begin{aligned}
A(t,s)\le{}& C_9\|s-t\|^{-(m+1)}\int_u^\infty dx\int_0^\infty dw\,\big(x^{N_2}+(w/\|s-t\|)^{N_3}+1\big)\\
&\times\exp\Big\{-\frac{1}{2(1-\rho^2(t,s))}\Big(\frac{x^2}{\sigma_1^2(t,s)}+\frac{w^2}{\sigma_2^2(t,s)}-\frac{2\rho(t,s)xw}{\sigma_1(t,s)\sigma_2(t,s)}\Big)\Big\}\\
\le{}& C_9\|s-t\|^{-m}\int_u^\infty dx\int_0^\infty d\widetilde w\,\big(x^{N_2}+\widetilde w^{N_3}+1\big)\exp\Big\{-\frac{1}{2(1-\rho^2(t,s))}\Big(\frac{x^2}{\sigma_1^2(t,s)}+\frac{\widetilde w^2}{\widetilde\sigma_2^2(t,s)}-\frac{2\rho(t,s)x\widetilde w}{\sigma_1(t,s)\widetilde\sigma_2(t,s)}\Big)\Big\} \qquad (3.5.10)
\end{aligned}
\]
for some positive constant $C_9$. Applying Lemma 2.3.10 yields that $\int_{D_i}A(t,s)\,dtds$ is super-exponentially small.

Estimating $\int_{D_j'}A(t,s)\,dtds$ for $j=m+1,\ldots,k'$ is similar, as is the proof for the case $k'=0$. We thus obtain that $E\{M_u^E(J)M_u^E(J')\}$ is super-exponentially small, completing the proof.

Remark 3.5.3 Our proof of Theorem 3.5.2 only focuses on the neighborhood of $s_0$; therefore the proof remains valid when $T$ is locally convex [cf. Adler and Taylor (2007, p.189, Definition 8.2.1)].

Chapter 4

The Expected Euler Characteristic of Non-centered Stationary Gaussian Fields

It has been shown that the expected Euler characteristic of the excursion set, denoted by $E\{\varphi(A_u)\}$, approximates the excursion probability very accurately. We now turn to the computation of $E\{\varphi(A_u)\}$ itself. In the monograph Adler and Taylor (2007), the authors considered centered Gaussian fields with constant variance and obtained very general formulae [cf. Theorems 12.4.1 and 12.4.2 therein] for $E\{\varphi(A_u)\}$ involving the so-called Lipschitz-Killing curvatures. These curvatures are usually very hard to simplify; as a consequence, for general centered smooth Gaussian fields with constant variance, $E\{\varphi(A_u)\}$ is difficult to compute, and it becomes even more complicated for general smooth Gaussian fields with non-constant variances. However, for some relatively simple models and nice parameter spaces $T$ (for example, centered stationary Gaussian fields on rectangles), $E\{\varphi(A_u)\}$ simplifies considerably [cf. Theorem 11.7.2 and Corollary 11.7.3 in Adler and Taylor (2007)]. The results there rely heavily on the mean function being zero. If the Gaussian field is stationary but the mean function varies, the computation of $E\{\varphi(A_u)\}$ becomes more involved.
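As a quick numerical illustration of the one-dimensional case of such formulae, the following sketch compares a Monte Carlo estimate of $E\{\varphi(A_u)\}$ with the Rice-type integral it should match. The covariance $e^{-t^2/2}$ (so $\lambda^2=1$) and the mean $m(t)=0.5\sin t$ are arbitrary choices made for this illustration only, and we use the classical identity $\varphi(A_u)=\#\{\text{upcrossings of } u\}+\mathbb{1}_{\{X(a)\ge u\}}$ on an interval:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
a, b, n = 0.0, 5.0, 400
t = np.linspace(a, b, n)
m = 0.5 * np.sin(t)          # smooth drift mean (arbitrary choice)
mp = 0.5 * np.cos(t)         # its derivative
C = np.exp(-0.5 * (t[:, None] - t[None, :])**2)   # covariance e^{-t^2/2}

# Square root of C via eigendecomposition (robust to near-singularity)
w, V = np.linalg.eigh(C)
S = V * np.sqrt(np.clip(w, 0.0, None))

u, reps, ec = 2.0, 4000, 0
for _ in range(reps):
    x = m + S @ rng.standard_normal(n)
    above = x >= u
    ec += np.count_nonzero(~above[:-1] & above[1:]) + int(above[0])
print("Monte Carlo E{EC} ~", ec / reps)

# Rice-type formula: X'(t) ~ N(m'(t), 1) is independent of X(t) ~ N(m(t), 1), so
# E{EC} = Psi(u - m(a)) + int_a^b phi(u - m(t)) [phi(m'(t)) + m'(t) Phi(m'(t))] dt.
g = norm.pdf(u - m) * (norm.pdf(mp) + mp * norm.cdf(mp))
integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t))   # trapezoid rule
print("Rice formula     ~", norm.sf(u - m[0]) + integral)
```

The two printed numbers should agree up to Monte Carlo error; this is the $N=1$ shadow of the rectangle formula developed below.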
In this 103 chapter, we will show the formulae of E{ϕ(Au )} for stationary Gaussian fields with varying mean functions, and also for isotropic Gaussian fields on the sphere with varying mean functions. 4.1 Preliminary Gaussian Computations The following result is Lemma 11.6.1 in Adler and Taylor (2007). Lemma 4.1.1 (Wick formula). Let Z1 , Z2 , ..., ZN be a set of real-valued random variables having a joint Gaussian distribution and zero means. Then for any integer k, E{Z1 Z2 · · · Z2k+1 } = 0, E{Z1 Z2 · · · Z2k } = (4.1.1) E{Zi1 Zi2 } · · · E{Zi2k−1 Zi2k }, where the sum is taken over the (2k)!/(k!2k ) different ways of grouping Z1 , ..., Z2k into k pairs. Let ∆N be a symmetric N × N matrix with elements ∆ij such that each ∆ij is a zeromean normal variable with arbitrary variance but such that the following relationship holds: E{∆ij ∆kl } = E(i, j, k, l) − δij δkl , (4.1.2) where E is a symmetric function of i, j, k, l, and δij is the Kronecker delta function. Write |∆N | for the determinant of ∆N . 104 Let ∆N be a symmetric N × N matrix with elements ∆ij such that each ∆ij is a zeromean normal variable with arbitrary variance but such that the following relationship holds: E{∆ij ∆kl } = E(i, j, k, l) + δij δkl , (4.1.3) where E is a symmetric function of i, j, k, l, and δij is the Kronecker delta function. Let ∆N be a symmetric N × N matrix with elements ∆ij such that each ∆ij is a zeromean normal variable with arbitrary variance but such that the following relationship holds: E{∆ij ∆kl } = E (i, j, k, l), (4.1.4) where E is a symmetric function of i, j, k, l. Lemma 4.1.2 Let BN = (Bij )1≤i,j≤N be a real symmetric N × N matrix and let n be a positive integer. Then for ∆n satisfying (4.1.2), n/2 E{|∆n + Bn |} = G2k Sn−2k (Bn ), (4.1.5) k=0 l where G2j = (−1)j (2j)!/(j!2j ), Sk (Bl ) is the sum of the k principle minors of order k in Bl , and G0 = S0 (Bn ) = 1 in convention. Similarly, for ∆n satisfying (4.1.3) and ∆n satisfying (4.1.4), n/2 E{|∆n + Bn |} = G2k Sn−2k (Bn ), k=0 E{|∆n + Bn |} = |Bn |, 105 (4.1.6) where G2j = (2j)!/(j!2j ). Proof We first consider the case when n is even, say n = 2l, then |∆2l + B2l | = η(p)(∆1i1 + B1i1 ) · · · (∆2l,i + B2l,i ), 2l 2l (4.1.7) P where p = (i1 , i2 · · · , i2l ) is a permutation of (1, 2, · · · , 2l), P is the set of the (2l)! such permutations, and η(p) equals +1 or −1 depending on the order of the permutation p. Then E{|∆2l + B2l |} = η(p)E{(∆1i1 + B1i1 ) · · · (∆2l,i + B2l,i )}. 2l 2l (4.1.8) P It follows from Lemma 4.1.1 that for k ≤ l, E{∆1i1 · · · ∆2k+1,i } = 0 and 2k+1 {E(1, i1 , 2, i2 ) − δ1i1 δ2i2 } × · · · E{∆1i1 · · · ∆2k,i } = 2k Q2k × {E(2k − 1, i2k−1 , 2k, i2k ) − δ2k−1,i δ }, 2k−1 2k,i2k where Q2k is the set of the (2k)!/(k!2k ) ways of grouping (i1 , i2 , · · · , i2k ) into pairs without regard to order, keeping them paired with the first index. Let P be the set of all the 106 permutations of (2k + 1, . . . , 2l), then η(p)E{∆1i1 · · · ∆2k,i }B2k+1,i 2k 2k+1 · · · B2l,i 2l P = {E(1, i1 , 2, i2 ) − δ1i1 δ2i2 } × · · · η(p) P Q2k × {E(2k − 1, i2k−1 , 2k, i2k ) − δ2k−1,i = = = (−1)k (δ1i1 δ2i2 ) · · · (δ2k−1,i η(p) P δ } 2k−1 2k,i2k 2k+1 · · · B2l,i )B δ 2k−1 2k,i2k 2k+1,i2k+1 · · · B2l,i B2k+1,i 2l 2l Q2k (−1)k (2k)! k!2k η(p)B2k+1,i 2k+1 · · · B2l,i 2l P (−1)k (2k)! 
|(Bij )2k+1≤i,j≤2l |, k!2k where the second equality is due to the fact that all products involving at least one E term will cancel out because of their symmetry property, and the third equality comes from noting that for only one permutation in P is the product of the delta functions nonzero. Thus E{|∆2l + B2l |} = |B2l | + G2 S2l−2 (B2l ) + · · · + G2l−2 S2 (B2l ) + G2l . Similarly, we obtain that E{|∆2l+1 + B2l+1 |} = |B2l+1 | + G2 S2l−1 (B2l+1 ) + · · · + G2l S1 (B2l+1 ). Then we obtain (4.1.5). The proof for (4.1.6) follows similarly. Corollary 4.1.3 Let ∆N , ∆N , ∆N , BN , G2j , G2j and Sk (Bl ) be as in Lemma 4.1.2. Let 107 IN be the N × N unit matrix, and x ∈ R. Then n/2 N E{|∆N + BN − xIN |} = (−1)N G2k Sn−2k (BN ) xN −n , (−1)n n=0 k=0 N (4.1.9) n/2 and similarly, E{|∆N + BN − xIN |} = G2k Sn−2k (BN ) xN −n , (−1)n (−1)N n=0 N k=0 (4.1.10) (−1)n Sn (BN )xN −n . E{|∆N + BN − xIN |} = (−1)N n=0 Proof It follows from the usual Laplace expansion of the determinant that N (−1)n Sn (∆N + BN )xN −n . |∆N + BN − xIN | = (−1)N (4.1.11) n=0 It follows from Lemma 4.1.2 that n/2 E{Sn (∆N + BN )} = G2k Sn−2k (BN ), (4.1.12) k=0 and hence we obtain (4.1.9). (4.1.10) follows similarly. 4.2 Stationary Gaussian Fields on Rectangles Consider a centered stationary Gaussian random field Z = {Z(t), t ∈ RN }. It has representation Z(t) = RN ei t,λ W (dλ) 108 and covariance C(t) = RN ei t,λ ν(dλ), where W is a complex-valued Gaussian random measure and ν is the spectral measure satisfying ν(RN ) = C(0) = σ 2 . We introduce the second-order spectral moments λij = RN λi λj ν(dλ), and denote Λ = (λij )1≤i,j≤N . Denoting also differentiation via subscripts, so that Zi = ∂Z/∂ti , Zij = ∂ 2 Z/∂ti ∂tj , etc., we have E{Zi (t)Zj (t)} = λij = −Cij (0) = −E{Z(t)Zij (t)}. The covariances among the second-order derivatives can be similarly defined. However, all we shall need is that E0 (i, j, k, l) := E{Zij (t)Zkl (t)} = RN λi λj λk λl ν(dλ) (4.2.1) is a symmetric function of i, j, k, l. Also note that for any fixed t, Zi (t) is independent of both Z(t) and Zkl (t). Let X(t) = Z(t) + m(t), where m(·) ∈ C 2 (RN ) is a real-valued deterministic function. Let T = N i=1 [ai , bi ], −∞ < ai < bi < ∞. Theorem 4.2.1 Let X(t) = Z(t) + m(t), where Z(·) ∈ C 2 (RN ) a.s. is a stationary Gaussian field and m(·) ∈ C 2 (RN ) is a real-valued deterministic function. Suppose also that Z 109 satisfies (H3 ). Then we have E{ϕ(Au (X, T ))} = {t}∈∂0 T u − m(t) P{ X(t) ∈ E({t})}Ψ σ ∞ × dx exp dt u J N |ΛJ |1/2 + k=1 J∈∂k T (2π)(k+1)/2 σ k+1 1 1 − ( m|J (t))T Λ−1 m|J (t) − 2 (x − m(t))2 J 2 2σ (4.2.2) × P{(XJ1 (t), · · · , XJ (t)) ∈ E(J)| X|J (t) = 0} N −k j/2 k × (−1)j j=0 x k−j , σ 2 m (t)Q ) J J G2i Sj−2i (σQJ i=0 −1/2 where G and S are defined in Lemma 4.1.2 and QJ = ΛJ . If J = {t} ∈ ∂0 T , then Proof E{µ0 (J)} = P{X(t) ≥ u, ε∗ Xj (t) ≥ 0 for all 1 ≤ j ≤ N } j (4.2.3) u − m(t) = P{ X(t) ∈ E({t})}Ψ , σ where the last equality is due to the independence of X(t) and X(t) for each fixed t. Now we consider J ∈ ∂k T for some k ≥ 1. Let Di be the collection of all k × k matrices with index i. Applying Kac-Rice formula [cf. Theorem 11.2.1 or Corollary 11.3.2 in Adler and Taylor (2007)], together with the definition (2.2.2), we obtain k k (−1)i µi (J) E i=0 = J p X (t) (0)dt |J i=0 (−1)i E{|det 2 X|J (t)|1{ 2 X (t)∈D } i |J × 1{X(t)≥u} 1{(X (t),··· ,X J J 1 Note that on the event Di , the matrix 2X |J (t) 110 N −k (t))∈E(J)} | (4.2.4) X|J (t) = 0}. has i negative eigenvalues which implies (−1)i |det 2 X|J (t)| = det 2 X|J (t). 
Also, ∪k { 2 X|J (t) ∈ Di } = { 2 X|J (t) ∈ Rk }, and i=0 X(t) is independent of both X(t) and J = 2 X(t) for each fixed t, thus (4.2.4) becomes p X (t) (0)dt E{det 2 X|J (t)1{X(t)≥u} 1{(X (t),··· ,X J J |J 1 (2π)(k+1)/2 |ΛJ |1/2 σ J dt (t))∈E(J)} | X|J (t) 1 N −k ∞ − 1 ( m|J (t))T Λ−1 m|J (t) − 12 (x−m(t))2 J e 2σ dx e 2 u = 0} × P{(XJ1 (t), · · · , XJ (t)) ∈ E(J)| X|J (t) = 0}E{det 2 X|J (t)|X(t) = x}. N −k (4.2.5) Now we turn to computing E{det 2 X|J (t)|X(t) = x}. Since ΛJ is positive definite, there exists a unique k × k positive definite matrix QJ (called principal square root of Λ−1 , J −1/2 also denoted as ΛJ ) such that QJ ΛJ QJ = Ik , where Ik is the k × k identity matrix. Hence E{Z(t)(QJ 2 Z|J (t)QJ )ij } = −(QJ ΛJ QJ )ij = −δij , (4.2.6) where δij is the Kronecker delta function. One can write E{det(QJ 2 X|J (t)QJ )|X(t) = x} = E{det(QJ 2 Z|J (t)QJ + QJ 2 m|J (t)QJ )|X(t) = x} = E{det(∆(x) + QJ 2 mJ (t)QJ )}, where ∆(x) = (∆ij (x))i,j∈σ(J) with all elements ∆ij (x) being Gaussian variables. To study ∆(x), we only need to find its mean and covariance. Applying Lemma 2.5.1 and (4.2.6), we 111 obtain x E{∆ij (x)} = E{(QJ 2 Z|J (t)QJ )ij |X(t) = x} = − 2 δij σ and E{(∆ij (x) − E{∆ij (x)})(∆kl (x) − E{∆kl (x)})} δij δkl = E{(QJ 2 Z|J (t)QJ )ij (QJ 2 Z|J (t)QJ )kl } − σ2 δij δkl , = E(i, j, k, l) − σ2 where E is a symmetric function of i, j, k, l by Lemma 2.1.7 with A replaced by QJ . Then we have E{det(QJ 2 X|J (t)QJ )|X(t) = x} 1 det(σQJ ( 2 X|J (t))QJ ) X(t) = x σk 1 x = k E det ∆ + σQJ 2 mJ (t)QJ − Ik σ σ =E , where ∆ = (∆ij )i,j∈σ(J) and all ∆ij are Gaussian variables satisfying E{∆ij ∆kl } = σ 2 E(i, j, k, l) − δij δkl . E{∆ij } = 0, Applying Corollary 4.1.3, we get E{det(QJ 2 X|J (t)QJ )|X(t) = x} (−1)k = σk j/2 k (−1)j j=0 G2i Sj−2i (σQJ i=0 112 2 m (t)Q ) J J x k−j . σ (4.2.7) It follows from (4.2.7) that E{det 2 X|J (t)|X(t) = x} = E{det(Q−1 QJ 2 X|J (t)QJ Q−1 )|X(t) = x} t t = |ΛJ |E{det(QJ 2 X|J (t)QJ )|X(t) = x} (−1)k = |ΛJ | σk j/2 k G2i Sj−2i (σQJ 2 mJ (t)QJ ) (−1)j i=0 j=0 x k−j . σ Plugging this into (4.2.5), we obtain k (−1)i µi (J) E i=0 = ∞ (−1)k |ΛJ |1/2 (2π)(k+1)/2 σ k+1 J dt u × P{(XJ1 (t), · · · , XJ N −k (4.2.8) (t)) ∈ E(J)| X|J (t) = 0} j/2 k × − 1 ( m|J (t))T Λ−1 m|J (t) − 12 (x−m(t))2 J dx e 2 e 2σ (−1)j j=0 G2i Sj−2i (σQJ 2 mJ (t)QJ ) i=0 x k−j . σ Combining (4.2.3), (4.2.8) and the definition (2.2.1) yields the desired result. Corollary 4.2.2 Let Z be an isotropic Gaussian random field with Var(Z1 (t)) = γ 2 , then 113 under the conditions in Theorem 4.2.1, we have E{ϕ(Au (X, T ))} = {t}∈∂0 T ∞ × dt J N u − m(t) P{ X(t) ∈ E({t})}Ψ σ dx exp u − 1 2γ 2 γk + k=1 J∈∂k T (2π)(k+1)/2 σ k+1 1 m|J (t) 2 − 2 (x − m(t))2 2σ (4.2.9) × P{(XJ1 (t), · · · , XJ (t)) ∈ E(J)} N −k j/2 k × j=0 x k−j . σ G2i Sj−2i (σγ −2 2 mJ (t)) (−1)j i=0 Proof The result follows by applying Theorem 4.2.1 and noting that ΛJ = γ 2 Ik and hence QJ = γ −1 Ik . Corollary 4.2.3 Under the conditions in Theorem 4.2.1, assume that t0 , an interior point 2 m(t ) 0 in T , is the unique maximal point of m(t) and that is nondegenerate. Then as u → ∞, E{ϕ(Au (X, T ))} = Proof |Λ|1/2 uN/2 σN | − 2 m(t )|1/2 0 Ψ u − m(t0 ) (1 + o(1)). σ By Theorem 4.2.1, E{ϕ(Au (X, T ))} = ∞ |Λ|1/2 (2π)(N +1)/2 σ N +1 u − 1 (x−m(t))2 × e 2σ2 x σ 114 1 T −1 m(t) e− 2 ( m(t)) Λ dx J N dt(1 + o(1)). (4.2.10) Applying Laplace method, we obtain that as x → ∞, 1 2 1 T −1 m(t) − (x−m(t)) dt e 2σ2 e− 2 ( m(t)) Λ J = (2π)N/2 σ N xN/2 | − 2 m(t )|1/2 0 (4.2.11) − 1 (x−m(t0 ))2 e 2σ2 (1 + o(1)). 
Thus as $u\to\infty$,
\[
\begin{aligned}
E\{\varphi(A_u(X,T))\}&=\frac{|\Lambda|^{1/2}}{(2\pi)^{1/2}\sigma^{N+1}\,|-\nabla^2m(t_0)|^{1/2}}\int_u^\infty x^{N/2}e^{-\frac{(x-m(t_0))^2}{2\sigma^2}}\,dx\,(1+o(1))\\
&=\frac{|\Lambda|^{1/2}u^{N/2}}{\sigma^N\,|-\nabla^2m(t_0)|^{1/2}}\,\Psi\Big(\frac{u-m(t_0)}{\sigma}\Big)(1+o(1)).
\end{aligned}
\]

4.3 Isotropic Gaussian Random Fields on Sphere

We consider isotropic Gaussian random fields on the $N$-dimensional unit sphere $\mathbb{S}^N$. For $x=(x_1,\ldots,x_{N+1})\in\mathbb{S}^N$, we shall use spherical coordinates as follows:
\[
\begin{aligned}
x_1&=\cos\theta_1,\\
x_2&=\sin\theta_1\cos\theta_2,\\
x_3&=\sin\theta_1\sin\theta_2\cos\theta_3,\\
&\ \ \vdots\\
x_N&=\sin\theta_1\sin\theta_2\cdots\sin\theta_{N-1}\cos\theta_N,\\
x_{N+1}&=\sin\theta_1\sin\theta_2\cdots\sin\theta_{N-1}\sin\theta_N, \qquad (4.3.1)
\end{aligned}
\]
where $0\le\theta_i\le\pi$ for $1\le i\le N-1$ and $0\le\theta_N<2\pi$. Let $\theta=(\theta_1,\ldots,\theta_N)$. Accordingly, for another point $y=(y_1,\ldots,y_{N+1})\in\mathbb{S}^N$ we use $\varphi=(\varphi_1,\ldots,\varphi_N)$ to denote its spherical coordinates. Let $\|\cdot\|$ and $\langle\cdot,\cdot\rangle$ be the Euclidean norm and inner product respectively, and denote by $d(\cdot,\cdot)$ the distance function on $\mathbb{S}^N$, i.e., $d(x,y)=\arccos\langle x,y\rangle$ for all $x,y\in\mathbb{S}^N$.

The following theorem by Schoenberg (1942) characterizes the covariance functions of isotropic Gaussian fields on the sphere.

Theorem 4.3.1 A real continuous function $C(d)$ is a valid covariance on the unit sphere $\mathbb{S}^N$ for every dimension $N\ge1$ if and only if it has the form
\[
C(d)=\sum_{n=0}^\infty b_n\cos^n d, \qquad d\in[0,\pi],
\]
where $b_n\ge0$ and $\sum_{n=0}^\infty b_n<\infty$.

Recall $d(x,y)=\arccos\langle x,y\rangle$. It follows from the above theorem that a function $C(x,y)$ which is the covariance of an isotropic Gaussian field on $\mathbb{S}^N$ for every dimension $N\ge1$ has the form
\[
C(x,y)=\sum_{n=0}^\infty b_n\langle x,y\rangle^n, \qquad \forall x,y\in\mathbb{S}^N, \qquad (4.3.2)
\]
where $b_n\ge0$ and $\sum_{n=0}^\infty b_n<\infty$.

Let $\varphi(A_u(X,\mathbb{S}^N))$ be the Euler characteristic of the excursion set $A_u(X,\mathbb{S}^N)=\{x\in\mathbb{S}^N: X(x)\ge u\}$. Then, according to Corollary 9.3.5 in Adler and Taylor (2007),
\[
\varphi(A_u(X,\mathbb{S}^N))=(-1)^N\sum_{i=0}^N(-1)^i\mu_i(\mathbb{S}^N) \qquad (4.3.3)
\]
with
\[
\mu_i(\mathbb{S}^N):=\#\{x\in\mathbb{S}^N: X(x)\ge u,\ \nabla X(x)=0,\ \mathrm{index}(\nabla^2X(x))=i\}, \qquad (4.3.4)
\]
where $\nabla X$ and $\nabla^2X$ are the gradient and the Hessian on the manifold, respectively.

Let $X(x)=Z(x)+m(x)$, $x\in\mathbb{S}^N$, where $Z$ is a centered, unit-variance smooth Gaussian random field on $\mathbb{S}^N$ with covariance function $C(\cdot,\cdot)$ and $m(\cdot)\in C^2(\mathbb{S}^N)$ is a real-valued deterministic function. Under the spherical coordinates, write $X(\theta)=X(x)$, $Z(\theta)=Z(x)$, $m(\theta)=m(x)$ and $C(\theta,\varphi)=C(x,y)$.

Lemma 4.3.2 Let $h(x,y)=\langle x,y\rangle^n$, $x,y\in\mathbb{S}^N$, where $n$ is a nonnegative integer, and let $h(\theta,\varphi)$ be its spherical version. Then $h(\theta,\theta)=1$ and
\[
\begin{aligned}
&\frac{\partial h(\theta,\varphi)}{\partial\theta_i}\Big|_{\theta=\varphi}=\frac{\partial^3h(\theta,\varphi)}{\partial\theta_i\partial\varphi_j\partial\varphi_k}\Big|_{\theta=\varphi}=0, \qquad
\frac{\partial^2h(\theta,\varphi)}{\partial\theta_i\partial\varphi_j}\Big|_{\theta=\varphi}=-\frac{\partial^2h(\theta,\varphi)}{\partial\theta_i\partial\theta_j}\Big|_{\theta=\varphi}=n\delta_{ij},\\
&\frac{\partial^4h(\theta,\varphi)}{\partial\theta_i\partial\theta_j\partial\varphi_k\partial\varphi_l}\Big|_{\theta=\varphi}=n(n-1)(\delta_{ij}\delta_{kl}+\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk})+n\delta_{ij}\delta_{kl}. \qquad (4.3.5)
\end{aligned}
\]

Let $\Theta=\{\theta\in\mathbb{R}^N: 0\le\theta_N<2\pi,\ 0\le\theta_i\le\pi\ \forall\,1\le i\le N-1\}$ and let $d\sigma(\theta)$ be the volume element on the sphere, i.e.,
\[
d\sigma(\theta)=\prod_{i=1}^{N-1}\sin^{N-i}\theta_i\,d\theta, \qquad \forall\theta\in\Theta.
\]
Then we can state our result as follows.

Theorem 4.3.3 Let $X=\{X(x)=Z(x)+m(x): x\in\mathbb{S}^N\}$, where $Z$ is a Gaussian field on $\mathbb{S}^N$ satisfying (H3$'$) and $m(\cdot)\in C^2(\mathbb{S}^N)$ is a real-valued deterministic function. Suppose also that $X$ has covariance function $C(\cdot,\cdot)$ such that
\[
C(x,y)=\sum_{n=0}^\infty b_n\langle x,y\rangle^n, \qquad \forall x,y\in\mathbb{S}^N, \qquad (4.3.6)
\]
where $b_n\ge0$, $\sum_{n=0}^\infty b_n=1$ and $\sum_{n=1}^\infty n^4b_n<\infty$. Let $\beta=\sum_{n=1}^\infty nb_n$, $m(\theta)=m(x)$, and let $G$, $G'$ and $S$ be as in Lemma 4.1.2.
Then for β > 1, E{ϕ(Au (X, SN ))} = ∞ (β − 1)N/2 (2π)N/2 − u Θ j/2 N (−1)j × dw exp dσ(θ) m(θ)) 2 2 m(θ) G2i Sj−2i j=0 1 2β βw β2 − β i=0 N −j ; β2 − β for β < 1, E{ϕ(Au (X, SN ))} = ∞ (1 − β)N/2 dσ(θ) (2π)N/2 Θ (−1)j 1 2β m(θ)) 2 2 m(θ) G2i Sj−2i j=0 − u j/2 N × dw exp βw β − β2 i=0 β − β2 N −j ; and for β = 1, E{ϕ(Au (X, SN ))} = ∞ 1 (2π)N/2 dσ(θ) dw exp − u Θ 1 2 m(θ) 2 N × (−1)j Sj ( 2 m(θ))wN −j . j=0 Remark 4.3.4 It is easy to check that the condition C 4 (SN × SN ) and hence X(·) ∈ C 2 (SN ). 118 ∞ 4 n=1 n bn < ∞ makes C(·, ·) ∈ Proof Case 1: β > 1. Let κ = ∞ n=2 n(n − 1)bn . Then by Lemma 4.3.2, E{Z(θ)Z i (θ)} = E{Z i (θ)Z jk (θ)} = 0, E{Z i (θ)Z j (θ)} = −E{Z(θ)Z ij (θ)} = βδij , (4.3.7) E{Z ij (θ)Z kl (θ)} = κ(δij δkl + δik δjl + δil δjk ) + βδij δkl . By the Kac-Rice metatheorem, E{ϕ(Au (X, SN ))} = (−1)N Θ = = p X(θ) (0)E{det 2 X(θ)1{X(θ)≥u} | X(θ) = 0}dσ(θ) (4.3.8) ∞ (−1)N dσ(θ) Θ N (−1) (2π)N/2 β N/2 u p X(θ) (0)E{det dσ(θ) Θ ∞ − 1 e 2β u 2 X(θ)|X(θ) m(θ)) 2 = w}dw E{det 2 X(θ)|X(θ) = w}dw. Now we turn to computing E{det 2 X(θ)|X(θ) = w}. Note that E{det 2 X(θ)|X(θ) = w} = E{det( 2 Z(θ) + 2 m(θ))|X(θ) = w} (4.3.9) = (β 2 − β)N/2 E{det((β 2 − β)−1/2 2 Z(θ) + (β 2 − β)−1/2 2 m(θ))|X(θ) = w} = (β 2 − β)N/2 E{det(∆ + (β 2 − β)−1/2 2 m(θ) − β(β 2 − β)−1/2 wIN )}, where ∆ = (∆ij )1≤i,j≤N and all ∆ij are centered Gaussian variables satisfying E{∆ij ∆kl } = (β 2 − β)−1 E{Z ij (θ)Z kl (θ)|X(θ) = w} = (β 2 − β)−1 {κ(δij δkl + δik δjl + δil δjk ) + βδij δkl − β 2 δij δkl } = E(i, j, k, l) − δij δkl , 119 (4.3.10) where E(i, j, k, l) = (β 2 − β)−1 κ(δij δkl + δik δjl + δil δjk ). Applying Corollary 4.1.3, we get E{det 2 X(θ)|X(θ) = w} j/2 N = (−1)N (β 2 (−1)j − β)N/2 j=0 G2i Sj−2i i=0 2 m(θ) β2 − β (4.3.11) N −j βw β2 − β . Then E{ϕ(Au (X, SN ))} = ∞ (β − 1)N/2 (2π)N/2 dσ(θ) Θ j/2 (−1)j j=0 m(θ)) 2 u N × 1 − dw e 2β G2i Sj−2i i=0 2 m(θ) β2 − β βw N −j β2 − β Case 2: β < 1. Then (4.3.9) becomes E{det 2 X(θ)|X(θ) = w} = E{det( 2 Z(θ) + 2 m(θ))|X(θ) = w} = (β − β 2 )N/2 E{det((β − β 2 )−1/2 2 Z(θ) + (β − β 2 )−1/2 2 m(θ))|X(θ) = w} = (β − β 2 )N/2 E{det(∆ + (β − β 2 )−1/2 2 m(θ) − β(β − β 2 )−1/2 wIN )}, where ∆ = (∆ij )1≤i,j≤N and all ∆ij are centered Gaussian variables satisfying E{∆ij ∆kl } = (β − β 2 )−1 E{Z ij (θ)Z kl (θ)|X(θ) = w} = (β − β 2 )−1 {κ(δij δkl + δik δjl + δil δjk ) + βδij δkl − β 2 δij δkl } = E(i, j, k, l) + δij δkl , 120 . where E(i, j, k, l) = (β − β 2 )−1 κ(δij δkl + δik δjl + δil δjk ). Applying Corollary 4.1.3, we get E{det 2 X(θ)|X(θ) = w} j/2 N = (−1)N (β (−1)j − β 2 )N/2 2 m(θ) G2i Sj−2i j=0 β − β2 i=0 βw β − β2 N −j . Then E{ϕ(Au (X, SN ))} = ∞ (1 − β)N/2 (2π)N/2 dσ(θ) Θ j/2 (−1)j j=0 m(θ)) 2 u N × 1 − dw e 2β 2 m(θ) G2i Sj−2i β − β2 i=0 βw β − β2 Case 3: β = 1. Then (4.3.9) becomes E{det 2 X(θ)|X(θ) = w} = E{det( 2 Z(θ) + = E{det(∆ + 2 m(θ))|X(θ) = w} 2 m(θ) − wI )}, N where ∆ = (∆ij )1≤i,j≤N and all ∆ij are centered Gaussian variables satisfying E{∆ij ∆kl } = E{Z ij (θ)Z kl (θ)|X(θ) = w} = κ(δij δkl + δik δjl + δil δjk ) + βδij δkl − β 2 δij δkl = E (i, j, k, l), where E (i, j, k, l) = κ(δij δkl + δik δjl + δil δjk ). Applying Corollary 4.1.3, we get N E{det 2 X(θ)|X(θ) = w} = (−1)j Sj ( 2 m(θ))wN −j . (−1)N j=0 121 N −j . Then E{ϕ(Au (X, SN ))} = ∞ 1 (2π)N/2 Θ dσ(θ) 1 dw e− 2 m(θ) 2 u N × (−1)j Sj ( 2 m(θ))wN −j . 
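Before leaving this chapter, we note that the determinant identity of Lemma 4.1.2, which drives all of the computations above, is easy to sanity-check by simulation. The sketch below is an illustration only: the matrix $\Delta$ is realized as $M+XI_N$, where $(X,M)$ is drawn with the joint law of a unit-variance field value and its Hessian under the Gaussian covariance $e^{-\|t\|^2/2}$, so that $E\{\Delta_{ij}\Delta_{kl}\}=E(i,j,k,l)-\delta_{ij}\delta_{kl}$; the test matrix $B$ and the sample size are arbitrary choices.

```python
import itertools
from math import factorial
import numpy as np

rng = np.random.default_rng(0)
N = 3

# Joint Gaussian vector (X, M_ij for i <= j) with Var(X) = 1,
# E{X M_ij} = -delta_ij, and E{M_ij M_kl} = delta_ij delta_kl
# + delta_ik delta_jl + delta_il delta_jk (the fourth spectral
# moments of a field with covariance exp(-|t|^2/2)).
pairs = [(i, j) for i in range(N) for j in range(i, N)]
dim = 1 + len(pairs)
d = lambda a, b: 1.0 if a == b else 0.0
C = np.zeros((dim, dim))
C[0, 0] = 1.0
for p, (i, j) in enumerate(pairs, 1):
    C[0, p] = C[p, 0] = -d(i, j)
    for q, (k, l) in enumerate(pairs, 1):
        C[p, q] = d(i, j)*d(k, l) + d(i, k)*d(j, l) + d(i, l)*d(j, k)
L = np.linalg.cholesky(C + 1e-12*np.eye(dim))

B = np.diag([0.5, -1.0, 2.0])   # arbitrary symmetric test matrix

def S(B, k):                    # sum of the k x k principal minors
    if k == 0:
        return 1.0
    return sum(np.linalg.det(B[np.ix_(c, c)])
               for c in itertools.combinations(range(B.shape[0]), k))

acc, reps = 0.0, 200_000
for _ in range(reps):
    z = L @ rng.standard_normal(dim)
    M = np.zeros((N, N))
    for p, (i, j) in enumerate(pairs, 1):
        M[i, j] = M[j, i] = z[p]
    # Delta = M + X I has E{Delta_ij Delta_kl} = E(i,j,k,l) - delta_ij delta_kl
    acc += np.linalg.det(M + z[0]*np.eye(N) + B)

G = lambda k: (-1)**k * factorial(2*k) // (factorial(k) * 2**k)   # G_{2k}
rhs = sum(G(k) * S(B, N - 2*k) for k in range(N // 2 + 1))
print(acc / reps, rhs)   # the two numbers should roughly agree
```

For $N=3$ and the $B$ above, the right-hand side of (4.1.5) reduces to $\det B-\mathrm{tr}\,B$, and the Monte Carlo average converges to it.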
Chapter 5

Excursion Probability of Smooth Gaussian Processes over Random Intervals

Let $\{X(t): t\in[0,\infty)\}$ be a Gaussian process and let $T>0$ be a fixed number. The tail probability $P\{\sup_{0\le t\le T}X(t)\ge u\}$ for large $u$ has been studied extensively in the literature. However, the supremum over a fixed domain $[0,T]$ is not adequate in certain applications [cf. Kozubowski et al. (2004, 2006)]; instead, one needs the asymptotics of $\sup_{0\le t\le T}X(t)$, where $T$ is a non-negative random variable independent of $X$. To study $P\{\sup_{0\le t\le T}X(t)\ge u\}$, we have to take into account the behavior of both $X$ and $T$, and some interesting phenomena arise from the interplay between the Gaussian process and the random interval. Recently, Arendarczyk and Dębicki (2011, 2012) considered the case when the Gaussian process $X$ is non-smooth (i.e., the sample path is not twice differentiable) and obtained, under certain conditions,
\[
P\Big\{\sup_{0\le t\le T}X(t)\ge u\Big\}=g_1(u)(1+o(1)), \qquad \text{as } u\to\infty, \qquad (5.0.1)
\]
where $g_1(u)$ is a function depending on $X$ and $T$.

In the theory of approximating $P\{\sup_{0\le t\le T}X(t)\ge u\}$ over a fixed domain $[0,T]$, the asymptotic results for smooth Gaussian processes [cf. Adler and Taylor (2007) and Azaïs and Wschebor (2009)] are much more accurate than those for non-smooth Gaussian processes [cf. Piterbarg (1996a)]. More specifically, under certain smoothness conditions one can obtain a higher-order approximation whose error term decays exponentially faster than the principal term. Motivated by this, one may expect that for smooth Gaussian processes over a random interval $[0,T]$ the following approximation holds under certain conditions:
\[
P\Big\{\sup_{0\le t\le T}X(t)\ge u\Big\}=g_2(u)\big(1+o(e^{-\alpha u^2})\big), \qquad \text{as } u\to\infty, \qquad (5.0.2)
\]
for some $\alpha>0$, where $g_2(u)$ is a function depending on $X$ and $T$. Compared with (5.0.1), (5.0.2) clearly provides a much more accurate approximation. In this chapter, we apply the Rice method [cf. Azaïs and Delmas (2002) and Azaïs and Wschebor (2009)] to prove our main results, Theorem 5.1.6 and Theorem 5.2.5, which are of the form (5.0.2).

5.1 Stationary Gaussian Processes

Let $X=\{X(t): t\in\mathbb{R}_+\}$ be a centered stationary Gaussian process with $\operatorname{Var}(X(0))=1$. Define
\[
r(t):=E\{X(t)X(0)\}, \qquad \lambda^2:=\operatorname{Var}(X'(0)).
\]
We will impose conditions (H1), (H3) and the following regularity condition (A1) on $X$.

(A1). For any fixed $\delta>0$, $\sup_{t\ge\delta}r(t)<1$.

The number of maxima above level $u$ over $[0,T]$ becomes
\[
M_u(0,T)=\#\{t\in(0,T): X(t)\ge u,\ X'(t)=0,\ X''(t)<0\}. \qquad (5.1.1)
\]
Note that for each fixed $t$, $X(t)$ and $X'(t)$ are independent; by (2.3.1), we have the following upper bound for the excursion probability:
\[
\begin{aligned}
P\Big\{\sup_{0\le t\le T}X(t)\ge u\Big\}&\le P\{X(0)\ge u, X'(0)\le 0\}+P\{X(T)\ge u, X'(T)\ge 0\}+E\{M_u(0,T)\}\\
&=\Psi(u)+E\{M_u(0,T)\}. \qquad (5.1.2)
\end{aligned}
\]
Similarly, by (2.3.2), the lower bound becomes
\[
\begin{aligned}
P\Big\{\sup_{0\le t\le T}X(t)\ge u\Big\}&\ge \Psi(u)+E\{M_u(0,T)\}-\frac12E\{M_u(0,T)(M_u(0,T)-1)\}\\
&\quad-P\{X(0)\ge u, X'(0)\le 0, X(T)\ge u, X'(T)\ge 0\}\\
&\quad-E\{M_u(0,T)\mathbb{1}_{\{X(0)\ge u, X'(0)\le 0\}}\}-E\{M_u(0,T)\mathbb{1}_{\{X(T)\ge u, X'(T)\ge 0\}}\}. \qquad (5.1.3)
\end{aligned}
\]

Lemma 5.1.1 Let $X$ be a centered stationary Gaussian process satisfying (H1) and (H3). Then there exists some universal $\alpha>0$ such that for all $T>0$, as $u\to\infty$,
\[
E\{M_u(0,T)\}=\frac{\lambda T}{2\pi}e^{-u^2/2}\big(1+o(e^{-\alpha u^2})\big).
\]
Proof Due to (H1) and (H3), one can use the Kac-Rice metatheorem and get
\[
\begin{aligned}
E\{M_u(0,T)\}&=\int_0^T p_{X'(t)}(0)\,E\{|X''(t)|\mathbb{1}_{\{X(t)\ge u, X''(t)<0\}}\mid X'(t)=0\}\,dt\\
&=-\frac{1}{2\pi\lambda}\int_0^T dt\int_u^\infty E\{X''(t)\mathbb{1}_{\{X''(t)<0\}}\mid X(t)=x, X'(t)=0\}\,e^{-x^2/2}\,dx\\
&=-\frac{T}{2\pi\lambda}\int_u^\infty E\{X''(0)\mathbb{1}_{\{X''(0)<0\}}\mid X(0)=x\}\,e^{-x^2/2}\,dx, \qquad (5.1.4)
\end{aligned}
\]
where the second equality is due to the independence of $X(t)$ and $X'(t)$, and the last equality is due to the stationarity of $X$. Note that $E\{X(0)X''(0)\}=-\lambda^2$; by Lemma 2.5.1, $E\{X''(0)\mid X(0)=x\}=-\lambda^2x$. Making the change of variable $V=X''(0)+\lambda^2x$, $V\mid X(0)=x$ is a centered Gaussian variable with variance $\kappa^2=\operatorname{Var}(X''(0)\mid X(0))$; denote its density by $g(v)$. Then
\[
\begin{aligned}
E\{X''(0)\mathbb{1}_{\{X''(0)<0\}}\mid X(0)=x\}&=E\{X''(0)\mid X(0)=x\}-E\{X''(0)\mathbb{1}_{\{X''(0)\ge0\}}\mid X(0)=x\}\\
&=-\lambda^2x-\int_{v\ge\lambda^2x}(v-\lambda^2x)g(v)\,dv. \qquad (5.1.5)
\end{aligned}
\]
The last integral in (5.1.5) is non-negative and bounded by
\[
\int_{v\ge\lambda^2x}vg(v)\,dv=\frac{\kappa}{\sqrt{2\pi}}e^{-\lambda^4x^2/(2\kappa^2)}.
\]
Since $\lambda^2$ and $\kappa^2$ are both constants not depending on $T$, plugging (5.1.5) into (5.1.4) and choosing any $\alpha\in(0,\lambda^4/(2\kappa^2))$ yields the desired result.

Lemma 5.1.2 Let $X$ be a centered stationary Gaussian process satisfying (H1), (H3) and (A1). For $t_2>t_1>0$, let
\[
f(t_1,t_2):=\min\Big\{1,\ \inf_{t_1\le t\le t_2}\det\operatorname{Cov}\big(X(0),X'(0),X(t),X'(t)\big)\Big\}. \qquad (5.1.6)
\]
Then for any $\varepsilon>0$, there exist positive constants $C$, $\delta$ and $\varepsilon_1$ such that for all $(0,T)\subset\mathbb{R}$ and $u$ large enough,
\[
E\{M_u(0,T)(M_u(0,T)-1)\}\le CT\exp\Big\{-\frac{u^2}{2\beta^2+\varepsilon}\Big\}+T^2(f(\delta,T))^{-5/2}\exp\Big\{-\frac{u^2}{2-\varepsilon_1}\Big\},
\]
where $C$, $\delta$ and $\varepsilon_1$ do not depend on $T$, and $\beta^2=\operatorname{Var}(X(0)\mid X''(0))<1$.

Remark 5.1.3 Note that $X$ is stationary; hence for any fixed $t\ge0$, $X'(t)$ is independent of both $X(t)$ and $X''(t)$, and thus
\[
\det\operatorname{Cov}\big(X(0),X'(0),X(t),X'(t)\big)=\det\begin{pmatrix}
1 & 0 & r(t) & r'(t)\\
0 & \lambda^2 & -r'(t) & -r''(t)\\
r(t) & -r'(t) & 1 & 0\\
r'(t) & -r''(t) & 0 & \lambda^2
\end{pmatrix}
= [\lambda^4-(r''(t))^2][1-r^2(t)]+(r'(t))^2[(r'(t))^2-2r(t)r''(t)-2\lambda^2].
\]

Proof Let $b>a\ge0$ with $b-a\le2\delta$ for some $\delta>0$. Due to (H1) and (H3), one can use the Kac-Rice formula for factorial moments [cf. Theorem 11.5.1 in Adler and Taylor (2007)]; thus
\[
\begin{aligned}
&E\{M_u(a,b)(M_u(a,b)-1)\}\\
&=\int_a^b dt\int_a^b ds\; p_{X'(t),X'(s)}(0,0)\,E\{|X''(t)X''(s)|\mathbb{1}_{\{X(t)\ge u, X''(t)<0\}}\mathbb{1}_{\{X(s)\ge u, X''(s)<0\}}\mid X'(t)=X'(s)=0\}\\
&\le\int_a^b dt\int_a^b ds\int_u^\infty dx\; p_{X(t)}(x\mid X'(t)=X'(s)=0)\,p_{X'(t),X'(s)}(0,0)\,E\{|X''(t)X''(s)|\mid X(t)=x, X'(t)=X'(s)=0\}. \qquad (5.1.7)
\end{aligned}
\]
Let $E(t,s)=E\{|X''(t)X''(s)|\mid X(t)=x, X'(t)=X'(s)=0\}$. By Taylor's formula,
\[
X'(s)=X'(t)+X''(t)(s-t)+|s-t|^{1+\eta}Y_{t,s}, \qquad (5.1.8)
\]
where $Y_{t,s}$ is a centered Gaussian variable. In particular, for $s>t$,
\[
Y_{t,s}=\frac{X'(s)-X'(t)-X''(t)(s-t)}{|s-t|^{1+\eta}}=\frac{\int_t^s(X''(v)-X''(t))\,dv}{(s-t)^{1+\eta}},
\]
and thus, by (H1), $\operatorname{Var}(Y_{t,s})\le L^2$. Due to (5.1.8), we have
\[
\begin{aligned}
E(t,s)&=E\{|X''(t)X''(s)|\mid X(t)=x, X'(t)=0, X''(t)(s-t)=-|s-t|^{1+\eta}Y_{t,s}\}\\
&=|s-t|^{\eta}\,E\{|Y_{t,s}X''(s)|\mid X(t)=x, X'(t)=0, X''(t)(s-t)=-|s-t|^{1+\eta}Y_{t,s}\}. \qquad (5.1.9)
\end{aligned}
\]
By stationarity and (H1),
\[
\operatorname{Var}(X''(t)\mid X(t)=x, X'(t)=X'(s)=0)\le\operatorname{Var}(X''(t))\le C_1, \qquad
\operatorname{Var}(Y_{t,s}\mid X(t)=x, X'(t)=X'(s)=0)\le\operatorname{Var}(Y_{t,s})\le L^2, \qquad (5.1.10)
\]
where $C_1$ is a positive constant. On the other hand, for $|s-t|$ small enough,
\[
\begin{aligned}
|E\{X''(s)\mid X(t)=x, X'(t)=X'(s)=0\}|&=|E\{X''(s)\mid X(t)=x, X'(t)=0, X''(t)+|s-t|^{\eta}Y_{t,s}=0\}|\\
&=|E\{X''(s)\mid X(t)=x, X'(t)=0, X''(t)=0\}|(1+o(1))\le C_2|x|, \qquad (5.1.11)
\end{aligned}
\]
and similarly,
\[
|E\{Y_{t,s}\mid X(t)=x, X'(t)=X'(s)=0\}|\le C_3|x|, \qquad (5.1.12)
\]
for some $C_2, C_3>0$. Note that for any Gaussian variables $\xi_1$ and $\xi_2$,
\[
E|\xi_1\xi_2|\le E\xi_1^2+E\xi_2^2=(E\xi_1)^2+\operatorname{Var}(\xi_1)+(E\xi_2)^2+\operatorname{Var}(\xi_2).
\]
(5.1.13) Applying (5.1.13) and plugging (5.1.10), (5.1.11) and (5.1.12) into (5.1.9), we obtain that there exists some C4 > 0 such that E(t, s) ≤ C4 |s − t|η (1 + x2 ). 129 (5.1.14) By Taylor’s formula (5.1.8), Var(X(t)|X (t), X (s)) = Var(X(t)|X (t), X (t) + X (t)(s − t) + |s − t|1+η Yt,s ) = Var(X(t)|X (t), X (t) ± |s − t|η Yt,s ) (5.1.15) = Var(X(t)|X (t), X (t))(1 + o(1)) = Var(X(0)|X (0))(1 + o(1)), where the last equality is due to the fact that X (t) is independent of both X(t) and X (t). Hence for any ε > 0, if |b − a| is small enough and u is large enough, then ∞ (1 + x2 )p u − X(t) (x|X (t) = X (s) = 0)dx ≤ e u2 2β 2 +ε . (5.1.16) Note that pX (t),X (s) (0, 0) ≤ 2π 1 detCov(X (t), X (s)) and by Taylor’s formula (5.1.8), detCov(X (t), X (s)) = detCov(X (t), X (t) + X (t)(s − t) ± |s − t|1+η Yt,s ) = |s − t|2 detCov(X (t), X (t) ± |s − t|η Yt,s ) (5.1.17) = |s − t|2 detCov(X (t), X (t))(1 + o(1)), as |s − t| → 0 uniformly. Thus there exists some C5 > 0 such that for |s − t| sufficiently small, C5 pX (t),X (s) (0, 0) ≤ . |s − t| 130 (5.1.18) Plugging (5.1.14), (5.1.16) and (5.1.18) into (5.1.7) we obtain that for any ε > 0, if δ is small enough, then there exists C6 > 0 such that for large u, − E{Mu (a, b)(Mu (a, b) − 1)} ≤ C4 C5 e ≤ C6 u2 2β 2 +ε b b |s − t|η−1 dtds a a u2 − 2 (b − a)1+η e 2β +ε − ≤ C6 (b − a)e (5.1.19) u2 2β 2 +ε . The set [0, T ] may be covered by congruent intervals Ii = [ai , ai+1 ] with the same length δ and disjoint interiors. Then E{Mu (0, T )(Mu (0, T ) − 1)} ≤ E i =E i = E{Mu (Ii i Mu (Ij ) − Mu (Ii ) j i Mu (Ii ) i )2 } + (5.1.20) E{Mu (Ii )Mu (Ij )} − i=j i E{Mu (Ii )} i E{Mu (Ii )(Mu (Ii ) − 1)} + = (Mu (Ii ) − 1) Mu (Ii ) E{Mu (Ii )Mu (Ij )}. i=j If Ii and Ij are neighboring, say j = i + 1, we have E{Mu (Ii ∪ Ii+1 )(Mu (Ii ∪ Ii+1 ) − 1)} = E{(Mu (Ii ) + Mu (Ii+1 ))(Mu (Ii ) + Mu (Ii+1 ) − 1)} = 2E{Mu (Ii )Mu (Ii+1 )} + E{Mu (Ii )(Mu (Ii ) − 1)} + E{Mu (Ii+1 )(Mu (Ii+1 ) − 1)}. (5.1.21) 131 It follows from (5.1.19) and (5.1.21) that for any ε > 0, if δ is small enough and u is large enough, then 1 E{Mu (Ii )Mu (Ii+1 )} ≤ E{Mu (Ii ∪ Ii+1 )(Mu (Ii ∪ Ii+1 ) − 1)} 2 1 = E{Mu (ai , ai+2 )(Mu (ai , ai+2 ) − 1)} 2 (5.1.22) u2 − 2 C ≤ 6 (ai+2 − ai )e 2β +ε 2 and hence E{Mu (Ii )Mu (Ii+1 )} ≤ 2C6 T e E{Mu (Ii )(Mu (Ii ) − 1)} + i − u2 2β 2 +ε . (5.1.23) i=j,Ii ∩Ii+1 =∅ Next we consider the case when Ii = [ai , ai+1 ] and Ij = [aj , aj+1 ] are non-neighboring, which implies aj − ai+1 ≥ δ. By the Kac-Rice formula for higher moments [cf. the proof is the same as that of Theorem 11.5.1 in Adler and Taylor (2007)], E{Mu (Ii )Mu (Ij )} = ai+1 dt ai aj+1 aj ds pX (t),X (s) (0, 0) × E{|X (t)X (s)|1{X(t)≥u,X (t)<0} 1{X(s)≥u,X (s)<0} |X (t) = X (s) = 0} ≤ ai+1 ai dt aj+1 aj ∞ ds ∞ dx u u dy pX (t),X (s) (0, 0|X(t) = x, X(s) = y)pX(t),X(s) (x, y) × E{|X (t)X (s)||X(t) = x, X(s) = y, X (t) = X (s) = 0}. (5.1.24) 132 By Lemma 2.5.1 and the stationarity of X, there exists some C7 > 0 such that |E{X (t)|X(t) = x, X(s) = y, X (t) = X (s) = 0}| ≤ C7 (|x| + |y|) detCov(X(t), X(s), X (t), X (s)) and |E{X (s)|X(t) = x, X(s) = y, X (t) = X (s) = 0}| ≤ C7 (|x| + |y|) . detCov(X(t), X(s), X (t), X (s)) Together with (5.1.13), similarly to (5.1.14), we obtain that E{|X (t)X (s)||X(t) = x, X(s) = y, X (t) = X (s) = 0} x2 + y 2 ≤ C8 1 + , [detCov(X(t), X(s), X (t), X (s))]2 (5.1.25) for some C8 > 0. 
On the other hand, pX (t),X (s) (0, 0|X(t) = x, X(s) = y) ≤ 1 2π 1 = 2π ≤ detCov(X (t), X (s)|X(t), X(s)) detCov(X(t), X(s)) detCov(X(t), X(s), X (t), X (s)) C9 detCov(X(t), X(s), X (t), X (s)) 133 , (5.1.26) for some C9 > 0. Plugging (5.1.25) and (5.1.26) into (5.1.24), we obtain that for large u, E{Mu (Ii )Mu (Ij )} ≤ 2(ai+1 − ai )(aj+1 − aj )(f (δ, T ))−5/2 ∞ u ∞ u (5.1.27) (x2 + y 2 )p X(t),X(s) (x, y)dxdy. Let R(δ) := supt≥δ r(t) which is strictly less than 1 by (H3), then for sufficiently large u, ∞ ∞ sup |s−t|≥δ u u (x2 + y 2 )pX(t),X(s) (x, y)dxdy ≤ sup E{(X(t)X(s))2 1{X(t)≥u,X(s)≥u} } |s−t|≥δ (5.1.28) ≤ sup E{(X(t) + X(s))4 )1{X(t)+X(s)≥2u} } |s−t|≥δ 2 ≤ u4 e−u /(1+R(δ)) . Let ε1 > 1 − R(δ), by (5.1.27) and (5.1.28), we obtain that for sufficiently large u, 2 E{Mu (Ii )Mu (Ij )} ≤ T 2 (f (δ, T ))−5/2 e−u /(2−ε1 ) . (5.1.29) i=j,Ii ∩Ij =∅ Combining (5.1.20) and (5.1.23) with (5.1.29), we obtain the desired result. Lemma 5.1.4 Let X be a centered stationary Gaussian process satisfying (H1), (H3) and (A1). Then there exist some universal positive constants δ and α such that for all T > 0 and u large enough, 2 2 E{Mu (0, T )1{X(0)≥u,X (0)≤0} } ≤ T [f (δ, T )]−3/2 e−αu −u /2 , 2 2 E{Mu (0, T )1{X(T )≥u,X (T )≥0} } ≤ T [f (δ, T )]−3/2 e−αu −u /2 , 134 where f (δ, T ) is defined in (5.1.6). Proof We shall only prove the first inequality, since the proof for the second one is similar. By Kac-Rice formula, E{Mu (0, T )1{X(0)≥u,X (0)≤0} } ∞ T = dt 0 ∞ dx 0 dy u −∞ u dz E{|X (t)|1{X (t)<0} |X(t) = x, X(0) = y, X (t) = 0, X (0) = z}pX(t),X(0),X (t),X (0) (x, y, 0, z) ∞ ∞ T ≤ −∞ u u 0 0 dy dx dt dz E{|X (t)||X(t) = x, X(0) = y, X (t) = 0, X (0) = z} × pX(t),X(0),X (t),X (0) (x, y, 0, z) T Au (t)dt. := 0 Let δ be a positive constant to be specified. We first consider T δ ∞ T Au (t)dt ≤ dt δ ∞ dx u u dy E{|X (t)||X(t) = x, X(0) = y, X (t) = 0} (5.1.30) × pX (t) (0|X(t) = x, X(0) = y)pX(t),X(0) (x, y). Note that E|ξ| ≤ E|ξ − Eξ| + |Eξ| ≤ Var(ξ) + |Eξ| for any random variable ξ, Var(X (t)|X(t), X(0), X (t)) ≤ Var(X (t)) = Var(X (0)). 135 By Lemma 2.5.1 and the stationarity of X, there exists some C1 > 0 such that sup |E{X (t)|X(t) = x, X(0) = y, X (t) = 0}| δ≤t≤T C1 (|x| + |y|) δ≤t≤T detCov(X(t), X(0), X (t)) C1 (|x| + |y|)Var(X (0)|X(t), X(0), X (t)) = sup detCov(X(t), X(0), X (t), X (0)) δ≤t≤T ≤ sup ≤ C1 λ2 [f (δ, T )]−1 (|x| + |y|). It follows that there exists some C2 > 0 such that E{|X (t)||X(t) = x, X(0) = y} ≤ C2 (1 + [f (δ, T )]−1 )(|x| + |y|). (5.1.31) Similarly, there exists some C3 > 0 such that sup pX (t) (0|X(t) = x, X(0) = y) δ≤t≤T ≤ sup δ≤t≤T (5.1.32) 1 2πVar(X (t)|X(t), X(0)) ≤ C3 [f (δ, T )]−1/2 . On the other hand, for sufficiently large u, ∞ sup ∞ dx t≥δ u u dy(x + y)pX(t),X(0) (x, y) = sup E{(X(t) + X(0))1{X(t)≥u,X(0)≥u} } t≥δ 2 ≤ sup E{(X(t) + X(0))1{X(t)+X(0)≥2u} } ≤ ue−u /(1+R(δ)) . t≥δ 136 (5.1.33) Plugging (5.1.31), (5.1.32) and (5.1.33) into (5.1.30), we obtain that for all T > 0, if u is sufficiently large, then T Au (t)dt ≤ u2 −3/2 e− 2−ε1 , T [f (δ, T )] δ where ε1 > 1 − R(δ). Next we consider δ 0 ∞ δ Au (t)dt ≤ dt 0 0 dy u dzE{|X (t)||X(0) = y, X (0) = z, X (t) = 0} −∞ × pX(0),X (0),X (t) (y, z, 0). Note that pX(0),X (0),X (t) (y, z, 0) = pX(0) (y|X (0) = z, X (t) = 0)pX (0) (z|X (t) = 0)pX (t) (0) − ≤ (2π)−3/2 [detCov(X(0), X (0), X (t))]−1/2 e (y−µt,z )2 2 − z2 2 2σt e 2γt , where µt,z = E{X(0)|X (0) = z, X (t) = 0}, 2 σt = Var(X(0)|X (0), X (t)), 2 γt = Var(X (0)|X (t)). 
By (H1) and Taylor’s formula, we can write X (t) = X (0) + X (0)t + Y0,t t1+η , 137 where Y0,t is a centered Gaussian variable. We find that for t ∈ (0, δ) with δ sufficiently small, E{X(0)X (t)} = t(E{X(0)X (0)} + tη E{X(0)Y0,t }) = t(−λ2 + tη E{X(0)Y0,t }) ≤ 0. Since z < 0, it follows from Lemma 2.5.1 that µt,z ≤ 0. If δ is sufficiently small, we also have, similarly to (5.1.15), 2 σt = Var(X(0)|X (0))(1 + o(1)) < 1 − ε0 , and similarly to (5.1.17), 2 C4 t2 ≤ γt ≤ C5 t2 , detCov(X(0), X (0), X (t)) ≥ C6 t2 , where ε0 , C4 , C5 and C6 are some positive constants. Together with the fact that E{|X (t)||X(0) = y, X (0) = z, X (t) = 0} = E{|X (t)||X(0) = y, X (t) − tX (t) + t1+η Yt,0 = z, X (t) = 0} = E{|X (t)||X(0) = y, X (t) − tη Yt,0 = z/t, X (t) = 0} ≤ C7 (y + |z/t| + 1) 138 for some C7 > 0, where the first equality is due to Taylor’s formula, we obtain that for δ sufficiently small and u sufficiently large, δ 0 −1/2 Au (t)dt ≤ (2π)−3/2 C6 −1/2 ≤ (2π)−3/2 C6 ≤ δe − (y−µt,z )2 2 − z2 2 2σt e 2γt − y2 z2 2 − 2C t2 2σt 5 e 0 ∞ 1 dz(y + |z/t| + 1)e dy dt −∞ u 0 t δ ∞ δ 1 dt 0 t u 0 −∞ 0 − dz(y + |z| + 1)e dy dt 0 dz(y + |z/t| + 1)e dy ∞ δ −1/2 = (2π)−3/2 C6 − y2 z2 2 2σt − 2C5 e −∞ u u2 2(1−ε0 ) . (5.1.34) Combining (5.1.30) with (5.1.34), we obtain that there exist some universal δ, α > 0 such that for all T > 0 and u large enough, 2 2 E{Mu (0, T )1{X(0)≥u,X (0)≤0} } ≤ T [f (δ, T )]−3/2 e−αu −u /2 . This completes the proof. Lemma 5.1.5 Let X be a centered stationary Gaussian process satisfying (H1), (H3) and (A1). Then there exists some universal α > 0 such that for all T > 0 and u large enough, 2 2 P{X(0) ≥ u, X (0) ≤ 0, X(T ) ≥ u, X (T ) ≥ 0} ≤ e−αu −u /2 . 139 Proof Let δ > 0, then similarly to (5.1.33), we obtain that for sufficiently large u, sup P{X(0) ≥ u, X (0) ≤ 0, X(T ) ≥ u, X (T ) ≥ 0} T ≥δ (5.1.35) ≤ sup P{X(0) ≥ u, X(T ) ≥ u} ≤ 2 e−u /(1+R(δ)) . T ≥δ For T ∈ (0, δ), by Taylor’s formula, X (T ) = X (0) + X (0)T + Y0,T T 1+η , it follows that P{X(0) ≥ u, X (0) ≤ 0, X(T ) ≥ u, X (T ) ≥ 0} ≤ P{X(0) ≥ u, X (0) ≤ 0, X (T ) ≥ 0} (5.1.36) = P{X(0) ≥ u, X (0) ≤ 0, X (0) + X (0)T + Y0,T T 1+η ≥ 0} ≤ P{X(0) ≥ u, X (0) + Y0,T T η ≥ 0}. Let ξ(T ) = X (0) + Y0,T T η , κ2 (T ) = Var(ξ(T )) and ρ(T ) = E{X(0)ξ(T )}/κ(T ), T ∈ (0, δ). Since E{X(0)X (0)} = −λ2 , if δ is sufficiently small, sup0≤T ≤δ ρ(T ) < −ε0 for some ε0 > 0. Let κ = sup κ(T ), κ= 0≤T ≤δ ρ = sup ρ(T ), ρ= 0≤T ≤δ 140 inf κ(T ), inf ρ(T ), 0≤T ≤δ 0≤T ≤δ then 0 < κ ≤ κ < ∞ and −1 < ρ ≤ ρ < −ε0 . We obtain that as u → ∞, sup P{X(0) ≥ u, X (0) + Y0,T T η ≥ 0} 0≤T ≤δ 1 = sup 0≤T ≤δ × exp ∞ 2πκ(T )(1 − ρ2 (T ))1/2 u − ∞ dx1 0 dx2 x2 1 2ρ(T )x1 x2 x2 + 2 2 − 1 2 (T )) κ(T ) 2(1 − ρ κ (T ) 1 ∞ dx1 exp{−x2 /2} 1 2 (T ))1/2 u 0≤T ≤δ 2πκ(T )(1 − ρ ∞ (x − κ(T )ρ(T )x1 )2 dx2 × exp − 22 2κ (T )(1 − ρ2 (T )) 0 ∞ ∞ 1 (x2 − κρx1 )2 2 /2} ≤ dx1 exp{−x1 exp − 2κ2 (1 − ρ2 ) 2πκ(1 − ρ2 )1/2 u 0 κ2 ρ2 u2 u2 − 2 + εu2 , = o exp − 2 2κ (1 − ρ2 ) = sup (5.1.37) dx2 for any ε > 0. Combining (5.1.35) and (5.1.36) with (5.1.37) yields the result. Theorem 5.1.6 Let {X(t) : t ∈ R+ } be a centered stationary Gaussian process satisfying (H1), (H3) and (A1), and let T be a non-negative random variable independent of X. If ET 2 (f (δ, T ))−5/2 < ∞ for any fixed δ > 0, then there exists α > 0 such that as u → ∞, sup X(t) ≥ u = Ψ(u) + P 0≤t≤T Proof 2 2 λET −u2 /2 e + o(e−αu −u /2 ). 2π Let FT be the cumulative distribution function of T . 
Note that
\[
P\Big\{\sup_{0\le t\le T}X(t)\ge u\Big\}=\int_0^\infty P\Big\{\sup_{0\le t\le T}X(t)\ge u\Big\}\,F_T(dT); \qquad (5.1.38)
\]
combining (5.1.2), (5.1.3), Lemma 5.1.1, Lemma 5.1.2 and Lemma 5.1.4 with Lemma 5.1.5, we obtain the result.

Example 5.1.7 Let $X$ be a centered stationary Gaussian process with covariance function $r(t)=e^{-t^2/2}$. Then $\operatorname{Var}(X'(0))=\operatorname{Var}(X(0))=1$, $E\{X'(t)X(0)\}=-E\{X(t)X'(0)\}=r'(t)=-te^{-t^2/2}$ and $E\{X'(t)X'(0)\}=-r''(t)$; thus
\[
\det\operatorname{Cov}\big(X(t),X'(t),X(0),X'(0)\big)=(1-e^{-t^2})^2, \qquad (5.1.39)
\]
which is increasing in $t>0$. Hence if $ET^2<\infty$, then
\[
P\Big\{\sup_{0\le t\le T}X(t)\ge u\Big\}=\Psi(u)+\frac{ET}{2\pi}e^{-u^2/2}+o(e^{-\alpha u^2-u^2/2}).
\]

5.2 Gaussian Processes with Increasing Variance

In this section, we consider a Gaussian process $\{X(t): t\in\mathbb{R}_+\}$ whose variance increases at infinity. Let
\[
\sigma_t^2=\operatorname{Var}(X(t)), \qquad \lambda_t^2=\operatorname{Var}(X'(t)), \qquad \theta_t^2=\operatorname{Var}(X(t)\mid X'(t)). \qquad (5.2.1)
\]
Let $T$ be a non-negative random variable satisfying
\[
P\{T\ge t\}=\exp\{-\beta t^{\alpha}(1+o(1))\} \qquad \text{as } t\to\infty, \qquad (5.2.2)
\]
where $\alpha,\beta>0$. We write $T\in\mathcal{E}(\alpha,\beta)$ if $T$ satisfies (5.2.2). Note that (5.2.2) implies that the corresponding cumulative distribution function $F_T(t)$ is continuous for $t$ sufficiently large. In addition to conditions (H1) and (H3), we will impose the following two conditions (A2) and (A3) on $X$.

(A2). There exist $\alpha_\infty>0$ and $D_1>D_2>0$ such that as $t\to\infty$,
\[
\sigma_t^2=D_1t^{\alpha_\infty}(1+o(1)), \qquad \theta_t^2=D_2t^{\alpha_\infty}(1+o(1)).
\]

(A3). There exists $N_1>0$ such that as $t\to\infty$,
\[
\max\big\{\lambda_t^2,\ \operatorname{Var}(X''(t)),\ (\det\operatorname{Cov}(X(t),X'(t)))^{-1}\big\}=O(t^{N_1}).
\]

We will make use of the following inequality to estimate the excursion probability over each fixed interval $[0,T]$:
\[
P\{X(T)\ge u\}\le P\Big\{\sup_{0\le t\le T}X(t)\ge u\Big\}\le P\{X(T)\ge u\}+P\{X(0)\ge u\}+E\{M_u(0,T)\}. \qquad (5.2.3)
\]

The following result is Lemma 6.2 in Arendarczyk and Dębicki (2011); it is analogous to the Laplace method.

Lemma 5.2.1 Let $\alpha_1,\alpha_2,\beta_1,\beta_2>0$ and $a(u)=u^{(1-\delta)\alpha_1/(\alpha_1+\alpha_2)}$, $A(u)=u^{(1+\delta)\alpha_1/(\alpha_1+\alpha_2)}$, where $0<\delta<\alpha_2/\alpha_1$. Then as $u\to\infty$,
\[
\int_{a(u)}^{A(u)}\exp\Big\{-\beta_1\frac{u^{\alpha_1}}{x^{\alpha_1}}-\beta_2x^{\alpha_2}\Big\}\,dx=\exp\{-\beta_3u^{\alpha_3}(1+o(1))\},
\]
where
\[
\alpha_3=\frac{\alpha_1\alpha_2}{\alpha_1+\alpha_2}, \qquad
\beta_3=\beta_1^{\alpha_2/(\alpha_1+\alpha_2)}\beta_2^{\alpha_1/(\alpha_1+\alpha_2)}\Big[\Big(\frac{\alpha_1}{\alpha_2}\Big)^{\alpha_2/(\alpha_1+\alpha_2)}+\Big(\frac{\alpha_2}{\alpha_1}\Big)^{\alpha_1/(\alpha_1+\alpha_2)}\Big].
\]

Now we prove a lemma similar to Lemma 2.1 in Arendarczyk and Dębicki (2011).

Lemma 5.2.2 Let $X\in\mathcal{E}(\alpha_1,\beta_1)$ and $Y\in\mathcal{E}(\alpha_2,\beta_2)$ be independent non-negative random variables. Then $XY\in\mathcal{E}(\alpha,\beta)$ with
\[
\alpha=\frac{\alpha_1\alpha_2}{\alpha_1+\alpha_2}, \qquad
\beta=\beta_1^{\alpha_2/(\alpha_1+\alpha_2)}\beta_2^{\alpha_1/(\alpha_1+\alpha_2)}\Big[\Big(\frac{\alpha_1}{\alpha_2}\Big)^{\alpha_2/(\alpha_1+\alpha_2)}+\Big(\frac{\alpha_2}{\alpha_1}\Big)^{\alpha_1/(\alpha_1+\alpha_2)}\Big].
\]

Proof Let $a(u)=u^{(1-\delta)\alpha_1/(\alpha_1+\alpha_2)}$ and $A(u)=u^{(1+\delta)\alpha_1/(\alpha_1+\alpha_2)}$, where $0<\delta<\alpha_2/\alpha_1$. Then
\[
\begin{aligned}
P\{XY\ge u\}&=\int_0^\infty P\{X\ge u/y\}\,dF_Y(y)\\
&=\int_0^{a(u)}P\{X\ge u/y\}\,dF_Y(y)+\int_{a(u)}^{A(u)}P\{X\ge u/y\}\,dF_Y(y)+\int_{A(u)}^\infty P\{X\ge u/y\}\,dF_Y(y)\\
&=I_1(u)+I_2(u)+I_3(u).
\end{aligned}
\]
For any $\varepsilon>0$ and $u$ large enough, we see that
\[
I_1(u)\le P\{X\ge u/a(u)\}\le\exp\{-(\beta_1-\varepsilon)[u/a(u)]^{\alpha_1}\}\le\exp\{-(\beta_1-\varepsilon)u^{\alpha_1\alpha_2/(\alpha_1+\alpha_2)+\delta\alpha_1^2/(\alpha_1+\alpha_2)}\}=o(\exp\{-u^{\alpha_3+\varepsilon_0}\})
\]
and
\[
I_3(u)\le P\{Y\ge A(u)\}\le\exp\{-(\beta_2-\varepsilon)(A(u))^{\alpha_2}\}\le\exp\{-(\beta_2-\varepsilon)u^{(1+\delta)\alpha_1\alpha_2/(\alpha_1+\alpha_2)}\}=o(\exp\{-u^{\alpha_3+\varepsilon_0}\}).
\]
Next we estimate $I_2(u)$.
Note that both u/a(u) and u/A(u) tend to ∞, hence for any ε > 0 and u large enough, we have A(u) I2 (u) ≥ = a(u) A(u) exp{−(β1 + ε)(u/y)α1 }dFY (y) ∂ exp{−(β1 + ε)(u/y)α1 }P{Y ≥ y}dy a(u) ∂y + exp{−(β1 + ε)[u/a(u)]α1 }P{Y ≥ a(u)} − exp{−(β1 + ε)[u/A(u)]α1 }P{Y ≥ A(u)} A(u) ≥ a(u) exp{−(β1 + ε)(1 + ε)(u/y)α1 } exp{−(β2 + ε)uα2 }dy + exp{−(β1 + ε)[u/a(u)]α1 } exp{−(β2 + ε)(a(u))α2 } − exp{−(β1 + ε)[u/A(u)]α1 } exp{−(β2 − ε)(A(u))α2 } = I 2 (u, ε) + R1 (u, ε) − R2 (u, ε), and similarly, A(u) I2 (u) ≤ a(u) exp{−(β1 − ε)(1 − ε)(u/y)α1 } exp{−(β2 − ε)uα2 }dy + exp{−(β1 − ε)[u/a(u)]α1 } exp{−(β2 − ε)(a(u))α2 } − exp{−(β1 − ε)[u/A(u)]α1 } exp{−(β2 + ε)(A(u))α2 } = I 2 (u, ε) + R1 (u, ε) − R2 (u, ε). 145 Applying Lemma 5.2.1, we obtain that for any ε > 0, as u → ∞ I 2 (u, ε) = exp{−β 3 (ε)uα3 (1 + o(1))}, I 2 (u, ε) = exp{−β 3 (ε)uα3 (1 + o(1))}, where α3 = α1 α2 , α1 + α2 β 3 (ε) = [(β1 + ε)(1 + ε)]α2 /(α1 +α2 ) (β2 + ε)α1 /(α1 +α2 ) × α1 α2 /(α1 +α2 ) α2 α1 /(α1 +α2 ) + , α2 α1 β 3 (ε) = [(β1 − ε)(1 − ε)]α2 /(α1 +α2 ) (β2 − ε)α1 /(α1 +α2 ) × α2 α1 /(α1 +α2 ) α1 α2 /(α1 +α2 ) + . α2 α1 Together with the fact that there exists some ε0 > 0 such that all I1 (u), I3 (u), R1 (u, ε), R2 (u, ε), R1 (u, ε), R2 (u, ε) are o(exp{−uα3 +ε0 }), we obtain the desired result. Lemma 5.2.3 Let X be a Gaussian process satisfying (A2) and (A3) and let T ∈ E(α, β) be a non-negative random variable independent of X. Then X(T ) ∈ E(α, β1 ) with α= 2α , α + α∞ 1 α/(α+α∞ ) β1 = β α∞ /(α+α∞ ) 2D1 Proof α α∞ /(α+α∞ ) α∞ α/(α+α∞ ) + . α∞ α (5.2.4) Let N be the standard Normal random variable and let ν(·) be the standard deviation function of X, i.e. ν(t) = σt . Note that P{X(T ) ≥ u} = P{ν(T )N ≥ u}. 146 (5.2.5) On the other hand, as u → ∞, P{ν(T ) ≥ u} = P{T ≥ ν −1 (u)} = exp{−β(ν −1 (u))α (1 + o(1))} and −1/α∞ 2/α∞ u (1 + o(1)), ν −1 (u) = D1 thus −α/α∞ 2α/α∞ u (1 + o(1))}, P{ν(T ) ≥ u} = exp{−βD1 −α/α∞ i.e., ν(T ) ∈ E(2α/α∞ , βD1 ). Note that N ∈ E(2, 1/2), applying Lemma 5.2.2 in (5.2.5), we conclude the result. Lemma 5.2.4 Let X be a Gaussian process satisfying (H1), (H3), (A2) and (A3), and let T ∈ E(α, β) be a non-negative random variable independent of X. Then for any ε > 0, ∞ 0 E{Mu (0, T )}FT (dT ) = o(exp{−(β2 − ε)uα }) as u → ∞, where α= β2 = 2α , α + α∞ β α∞ /(α+α∞ ) 1 α/(α+α∞ ) 2D2 α α∞ /(α+α∞ ) α∞ α/(α+α∞ ) > β1 . + α∞ α 147 (5.2.6) Proof By the Kac-Rice formula, E{Mu (0, T ))} ∞ T dt = u ∞ 0 T ≤ dt 0 dx E{|X (t)|1{X (t)<0} |X(t) = x, X (t) = 0}pX(t),X (t) (x, 0) dx E{|X (t)||X(t) = x, X (t) = 0}pX(t) (x|X (t) = 0)pX (t) (0) u ∞ 2 2 1 1 dt dx E{|X (t)||X(t) = x, X (t) = 0} e−x /2θt θt 0 2πλt u T = Note that E|ξ| ≤ E|ξ − Eξ| + |Eξ| ≤ Var(ξ) + |Eξ| for any random variable ξ, and Var(X (t)|X(t), X (t)) ≤ Var(X (t)), E{X (t)|X(t) = x, X (t) = 0} = E{X (t)X(t)}λ2 − E{X (t)X (t)}E{X (t)X(t)} t x. detCov(X(t), X (t)) thus E{|X (t)||X(t) = x, X (t) = 0} ≤ Var(X (t)) + |E{X (t)X(t)}λ2 − E{X (t)X (t)}E{X (t)X(t)}| t x, detCov(X(t), X (t)) Now let h(t) 1 λt θt |E{X (t)X(t)}λ2 − E{X (t)X (t)}E{X (t)X(t)}| t Var(X (t)) + . detCov(X(t), X (t)) By (A3), there is some N2 > 0 such that h(t) = o(tN2 ) as t → ∞. 148 2 2 Let T0 be a large number such that for t > T0 , h(t) ≤ tN1 , θt is increasing and θt ≤ tα1 +1 . 2 Let A be a large number such that sup0≤t≤T0 h(t) ≤ A and sup0≤t≤T0 θt ≤ A, then for u large enough, E{Mu (0, T ))} ≤ T0 0 T 1 ∞ ∞ 2 2 2 1 −x2 /2θt + dxe dxe−x /2θt h(t)dt h(t)dt 2π T0 2π u u T 1 ∞ 2 2 T −x2 /2θT ≤ 0 Ae−u /(2A) + tN1 dt dxe 2π T0 2π u ∞ 2 2 T 1 1 −x2 /2θT ≤ 0 Ae−u /(2A) + √ T N1 +α1 /2+3/2 dx √ e . 
Hence we have
\[
\int_0^\infty E\{M_u(0,T)\}\,dF_T(T)\le\frac{T_0A}{2\pi}e^{-u^2/(2A)}+\frac{1}{\sqrt{2\pi}}\int_0^\infty T^{N_1+\alpha_1+2}\,dF_T(T)\int_u^\infty\frac{1}{\sqrt{2\pi}\,\theta_T}e^{-x^2/(2\theta_T^2)}\,dx
=:I_1(u)+I_2(u).
\]
Let $\widetilde T$ be a non-negative random variable whose cumulative distribution function satisfies $dF_{\widetilde T}(t)=t^{N_1+\alpha_1+2}\,dF_T(t)$; then $\widetilde T\in\mathcal{E}(\alpha,\beta)$. Let $\{\widetilde X(t): t\in\mathbb{R}_+\}$ be a Gaussian process with $\operatorname{Var}(\widetilde X(t))=\theta_t^2$; then by Lemma 5.2.3, $\widetilde X(\widetilde T)\in\mathcal{E}(\bar\alpha,\beta_2)$, where $\bar\alpha$ and $\beta_2$ are as given in (5.2.6). Note that $I_2(u)=P\{\widetilde X(\widetilde T)\ge u\}$ and $2>\bar\alpha$, hence $I_1(u)=o(\exp\{-u^{\bar\alpha+\delta}\})$ for any $\delta\in(0,2-\bar\alpha)$. Thus both $I_1(u)$ and $I_2(u)$ are $o(\exp\{-(\beta_2-\varepsilon)u^{\bar\alpha}\})$ for any $\varepsilon>0$. The proof is completed.

Theorem 5.2.5 Let $\{X(t): t\in\mathbb{R}_+\}$ be a Gaussian process satisfying (H1), (H3), (A2) and (A3), and let $T\in\mathcal{E}(\alpha,\beta)$ be a non-negative random variable independent of $X$. Then $X(T)\in\mathcal{E}(\bar\alpha,\beta_1)$ and, as $u\to\infty$,
\[
\begin{aligned}
P\Big\{\sup_{0\le t\le T}X(t)\ge u\Big\}&=P\{X(T)\ge u\}+o(\exp\{-(\beta_2-\varepsilon)u^{\bar\alpha}\})\\
&=P\{X(T)\ge u\}\big(1+o(\exp\{-(\beta_2-\beta_1-\varepsilon)u^{\bar\alpha}\})\big)
\end{aligned}
\]
for any $\varepsilon>0$, where $\bar\alpha$, $\beta_1$ and $\beta_2$ are as given in (5.2.4) and (5.2.6).

Proof Note that
\[
P\Big\{\sup_{0\le t\le T}X(t)\ge u\Big\}=\int_0^\infty P\Big\{\sup_{0\le t\le T}X(t)\ge u\Big\}\,F_T(dT);
\]
combining (5.2.3) and Lemma 5.2.3 with Lemma 5.2.4, we obtain the result.

Example 5.2.6 Let $X(t)=\int_0^t\int_0^sB(v)\,dv\,ds$, where $B(v)$ is standard Brownian motion. Then one has
\[
\begin{gathered}
\sigma_t^2=\frac{t^5}{20}, \qquad \lambda_t^2=\operatorname{Var}(X'(t))=\frac{t^3}{3}, \qquad \operatorname{Var}(X''(t))=t,\\
E\{X(t)X'(t)\}=\frac{t^4}{8}, \qquad E\{X(t)X''(t)\}=\frac{t^3}{6}, \qquad E\{X'(t)X''(t)\}=\frac{t^2}{2},\\
\theta_t^2=\operatorname{Var}(X(t)\mid X'(t))=\frac{t^5}{320}.
\end{gathered}
\]

Example 5.2.7 Let $X(t)=\int_0^t\int_0^sZ(v)\,dv\,ds$, where $Z(v)$ is a continuous stationary Gaussian process with covariance function $R(t)$ such that $R(0)=1$ and $R(t)=Dt^{\alpha_\infty-4}(1+o(1))$ as $t\to\infty$, where $D>0$ and $2<\alpha_\infty<4$. Then
\[
\sigma_t^2=\frac{2D}{\alpha_\infty(\alpha_\infty-2)(\alpha_\infty-3)}t^{\alpha_\infty}(1+o(1))
\]
and
\[
\theta_t^2=\sigma_t^2-\frac{[E\{X(t)X'(t)\}]^2}{\operatorname{Var}(X'(t))}=\frac{(4-\alpha_\infty)D}{2\alpha_\infty(\alpha_\infty-2)(\alpha_\infty-3)}t^{\alpha_\infty}(1+o(1)).
\]

Chapter 6

Ruin Probability of a Certain Class of Smooth Gaussian Processes

Let $\{X(t): t\ge0\}$ be a centered smooth Gaussian process with variance $t^{2\gamma}$ for some $\gamma>2$. We consider the probability $P\{\sup_{t\ge0}(X(t)-ct^{\beta})\ge u\}$ as $u\to\infty$, where $c>0$ and $\beta>\gamma$. We derive asymptotic approximations to this probability which refine the result of Hüsler and Piterbarg (1999).

6.1 Self-similar Processes

Let $\{X(t): t\ge0\}$ be a centered smooth Gaussian process with variance $t^{2\gamma}$ for some $\gamma>2$. We say $X$ is self-similar if its covariance function $C(t,s)$ satisfies
\[
C(at,as)=a^{2\gamma}C(t,s), \qquad \forall t,s\ge0,\ a>0. \qquad (6.1.1)
\]
Let $Y(t)=\frac{X(t)}{1+ct^{\beta}}$. Then
\[
\begin{aligned}
P\Big\{\sup_{t\ge0}(X(t)-ct^{\beta})\ge u\Big\}&=P\{X(t)\ge u+ct^{\beta}\ \text{for some } t\ge0\}\\
&=P\{X(u^{1/\beta}t)\ge u+c(u^{1/\beta}t)^{\beta}\ \text{for some } t\ge0\}\\
&=P\{u^{\gamma/\beta}X(t)\ge u(1+ct^{\beta})\ \text{for some } t\ge0\}\\
&=P\Big\{\sup_{t\ge0}\frac{X(t)}{1+ct^{\beta}}\ge u^{1-\gamma/\beta}\Big\}=P\Big\{\sup_{t\ge0}Y(t)\ge u^{1-\gamma/\beta}\Big\}, \qquad (6.1.2)
\end{aligned}
\]
where the third equality is due to the self-similarity (6.1.1). Note that $\operatorname{Var}(Y(t))=\frac{t^{2\gamma}}{(1+ct^{\beta})^2}$, as a function of $t$, attains its maximum
\[
\sigma^2=\Big(\frac{\gamma}{c(\beta-\gamma)}\Big)^{2\gamma/\beta}\Big(\frac{\beta}{\beta-\gamma}\Big)^{-2}
\qquad\text{at the unique point}\qquad
t_0=\Big(\frac{\gamma}{c(\beta-\gamma)}\Big)^{1/\beta}.
\]

Theorem 6.1.1 Let $\{X(t): t\ge0\}$ be a centered self-similar Gaussian process with variance $t^{2\gamma}$ for some $\gamma>2$. Let $\beta>\gamma$ and $c>0$, and suppose $X$ satisfies (H1) and (H3). Then there exists some $\alpha>0$ such that as $u\to\infty$,
\[
\begin{aligned}
P\Big\{\sup_{t\ge0}(X(t)-ct^{\beta})\ge u\Big\}&=P\Big\{\sup_{t\ge0}Y(t)\ge u^{1-\gamma/\beta}\Big\}\\
&=-\int_{t_0/2}^{2t_0}dt\int_{u^{1-\gamma/\beta}}^\infty\frac{E\{Y''(t)\mid Y(t)=x, Y'(t)=0\}}{2\pi\sqrt{\det\operatorname{Cov}(Y(t),Y'(t))}}\,e^{-x^2/(2\theta_t^2)}\,dx\\
&\quad+o\Big(\exp\Big\{-\frac{u^{2-2\gamma/\beta}}{2\sigma^2}-\alpha u^{2-2\gamma/\beta}\Big\}\Big), \qquad (6.1.3)
\end{aligned}
\]
where $Y(t)=\frac{X(t)}{1+ct^{\beta}}$ and $\theta_t^2=\operatorname{Var}(Y(t)\mid Y'(t))$.

Proof The first equality in (6.1.3) is the result in (6.1.2).
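(For the reader's convenience: since $X$ is centered Gaussian, (6.1.1) is equivalent to the distributional scaling $\{X(at): t\ge0\}\overset{d}{=}\{a^{\gamma}X(t): t\ge0\}$ for every $a>0$, both sides being centered Gaussian processes with the common covariance $a^{2\gamma}C(t,s)$; applying this with $a=u^{1/\beta}$ is exactly the substitution used for the third equality in (6.1.2).)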
Note that Y (t) ≥ u1−γ/β sup P ≤ P sup Y (t) ≥ u1−γ/β t≥0 t0 /2≤t≤2t0 ≤P Y (t) ≥ u1−γ/β + P sup t0 /2≤t≤2t0 +P sup Y (t) ≥ u1−γ/β (6.1.4) 0≤t≤t0 /2 sup Y (t) ≥ u1−γ/β , t≥2t0 where the last two terms are super-exponentially small due to the Borell-TIS inequality [cf. Theorem 2.1.1 in Adler and Taylor (2007)]. On the other hand, by Theorem 3.1.9, Y (t) ≥ u1−γ/β sup P t0 /2≤t≤2t0 2t0 =− ∞ dt t0 /2 + o exp E{Y (t)|Y (t) = x, Y (t) = 0} −x2 /2θ2 t dx e 2π detCov(Y (t), Y (t)) u1−γ/β − u2−2γ/β − αu2−2γ/β 2 2σ . Combining this with (6.1.4) yields the desired result. Corollary 6.1.2 Under the assumptions in Theorem 6.1.1, as u → ∞, P sup(X(t) − ctβ ) ≥ u = P sup Y (t) ≥ u1−γ/β t≥0 = Var(Y (t0 )) +1 E{Y (t0 )Y (t0 )} t≥0 −1/2 Ψ u1−γ/β (1 + o(1)). σ Proof One can check that the second derivative of the variance function of Y , Var(Y (t)) = t2γ , (1+ctβ )2 at t0 is not equal to 0. This implies that the condition in (3.2.6) holds. Applying 154 Corollary 3.2.3 and Theorem 6.1.1, we obtain the result. 6.2 Integrated Fractional Brownian Motion In this section, we show the application to a typical example, the double integrated fractional Brownian motion. t s Let X(t) = 0 0 BH (u)duds, where BH is fractional Brownian motion with Hurst index H, i.e. Cov(BH (t)BH (s)) = 1 (t2H +s2H −|t−s|2H ). Then X satisfies (6.1.1) with γ = H+2, 2 it also satisfies (H1) and (H3), and Var(X(t)) = Let β > H + 2 and Y (t) = X(t) , 1+ctβ t2H+4 . 2(2H + 1)(2H + 4) (6.2.1) we consider the probability P sup(X(t) − ctβ ) ≥ u = P sup Y (t) ≥ u1−(H+2)/β . t≥0 (6.2.2) t≥0 We see that Var(Y (t)) = t2H+4 , 2(2H + 1)(2H + 4)(1 + ctβ )2 (6.2.3) which attains the maximum at the unique point t0 = 1/β H +2 . c(β − H − 2) 155 (6.2.4) Note that E(Y (t)Y (t)) = Var(Y (t)) = cβtβ t2H+3 1 − , 2(2H + 1)(1 + ctβ )2 2 (2H + 4)(1 + ctβ ) c2 β 2 t2β t2H+2 1 cβtβ + , − (1 + ctβ )2 2H + 2 2(2H + 1)(2H + 4)(1 + ctβ )2 2(2H + 1)(1 + ctβ ) and E{Y (t)Y (t)} = cβtβ t2H+2 2H 2 + H + 1 c2 β(β + 1)t2β − cβ(β − 1)tβ + − , 2H + 2 2(2H + 1)(1 + ctβ )2 (2H + 4)(1 + ctβ )2 1 + ctβ it follows that H2 − H Var(Y (t0 )) = . E{Y (t0 )Y (t0 )} (β − 2)(H + 1) − 2H 2 Thus by Corollary 6.1.2, P sup(X(t) − ctβ ) ≥ u t≥0 = P sup Y (t) ≥ u1−(H+2)/β t≥0 ∼ −1/2 Var(Y (t0 )) Ψ +1 E{Y (t0 )Y (t0 )} ∼ (β − 2)(H + 1) − 2H 2 1/2 Ψ (β − H − 2)(H + 1) u1−(H+2)/β (6.2.5) Var(Y (t0 )) u1−(H+2)/β Var(Y (t0 )) . Now let H = 1/2, β = 3 and c = 1. By the discussions above, √ P sup(X(t) − ctβ ) ≥u ∼ t≥0 4/3Ψ 144u1/6 51/3 . (6.2.6) However, applying the Laplace approximation of higher order to (6.1.3), we will get a more 156 accurate approximation: P sup(X(t) − ctβ ) ≥ u t≥0 ∞ √ 1 ∼ πc0 + 2π u1/6 where 6.3 √ π 1 c2 + c0 2 2 x − exp 72x2 52/3 (6.2.7) dx, √ √ √ 19 3055/6 8 3051/6 15 , c2 = , c0 = . c0 = 5 2700 5 More General Gaussian Processes Assume that {X(t) : t ≥ 0} is a centered smooth Gaussian process with variance t2γ for X(u1/β t) some γ > 2. Let Xu (t) = γ/β , then u (1+ctβ ) P sup(X(t) − ctβ ) ≥ u = P{X(t) ≥ u + ctβ for some t ≥ 0} t≥0 = P{X(u1/β t) ≥ u + c(u1/β t)β for some t ≥ 0} =P X(u1/β t) uγ/β (1 + ctβ ) (6.3.1) ≥ u1−γ/β for some t ≥ 0 = P sup Xu (t) ≥ u1−γ/β . t≥0 Note that Var(Xu (t)) = t2γ , (1+ctβ )2 σ2 = as a function of t, attains its maximum 2γ/β −2 γ β c(β − γ) β−γ at the unique point t0 = 1/β γ . c(β − γ) 157 We see that neither σ 2 nor t0 depend on u. E{X(t)X(s)} Let r(t, s) = √ Var(X(t))Var(X(s)) , we will make use of the following condition. (A1 ). For any fixed δ > 0, R(δ) := sup|t−s|≥δ r(t, s) < 1. 
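Returning for a moment to the integrated fractional Brownian motion example of Section 6.2, the one-term approximation there is straightforward to evaluate numerically. The following is a minimal sketch (the helper name ruin_approx is ours and scipy is assumed; the formulas transcribed are (6.2.3)-(6.2.5)):

```python
import numpy as np
from scipy.stats import norm

def ruin_approx(u, H, beta, c):
    """One-term approximation (6.2.5) of P{sup_t (X(t) - c t^beta) >= u}
    for X = double-integrated fBm with Hurst index H (so gamma = H + 2)."""
    t0 = ((H + 2.0) / (c * (beta - H - 2.0)))**(1.0 / beta)               # (6.2.4)
    var_y = t0**(2*H + 4) / (2*(2*H + 1)*(2*H + 4)*(1 + c*t0**beta)**2)   # (6.2.3)
    # Var(Y'(t0)) / E{Y(t0) Y''(t0)} = (H^2 - H) / ((beta-2)(H+1) - 2H^2)
    ratio = (H*H - H) / ((beta - 2.0)*(H + 1.0) - 2.0*H*H)
    return (ratio + 1.0)**-0.5 * norm.sf(u**(1.0 - (H + 2.0)/beta) / np.sqrt(var_y))

# H = 1/2, beta = 3, c = 1 reproduces the sqrt(4/3) prefactor of Section 6.2.
print(ruin_approx(u=50.0, H=0.5, beta=3.0, c=1.0))
```

For $H=1/2$, $\beta=3$, $c=1$ the prefactor evaluates to $(0.75)^{-1/2}=\sqrt{4/3}$, consistent with (6.2.6).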
Let M 1−γ/β (Xu , (t0 /2, 2t0 )) be the number of local maximum points t ∈ (t0 /2, 2t0 ) u such that Xu (t) exceeding level u1−γ/β . Lemma 6.3.1 Let {X(t) : t ≥ 0} be a centered Gaussian process with variance t2γ for some γ > 2. Assume X ∈ C 2 (R+ ) a.s. and that X satisfies the regularity conditions (H3) and (A1 ). Suppose there exist positive constants C0 , N0 and η0 such that for all t ≥ 0, Var(X (t)) ≤ C0 (tN0 + 1), (6.3.2) [detCov(X(t), X (t), X (t))]−1 ≤ C0 (tN0 + 1); for all t = s, E(X (t) − X (s))2 ≤ C0 [(t + s)N0 + 1](t − s)2η0 ; (6.3.3) and all |t − s| ≥ δ0 , where δ0 > 0 is some fixed number, [detCov(X(t), X (t), X(s), X (s))]−1 ≤ C0 (t + s)N0 . Then there exists some α > 0 such that as u → ∞, E{M 1−γ/β (Xu , (t0 /2, 2t0 ))[M 1−γ/β (Xu , (t0 /2, 2t0 )) − 1]} u u = o exp − u2−2γ/β − αu2−2γ/β 2 2σ 158 . (6.3.4) Proof By the Kac-Rice metatheorem, one has E{M 1−γ/β (Xu , (t0 /2, 2t0 ))[M 1−γ/β (Xu , (t0 /2, 2t0 )) − 1]} u u ≤ 2t0 dt t0 /2 2t0 t0 /2 ∞ ds u1−γ/β dxE{|Xu (t)Xu (s)||Xu (t) = x, Xu (t) = Xu (s) = 0} (6.3.5) × pXu (t) (x|Xu (t) = Xu (s) = 0)pX (t),X (s) (0, 0). u u Let Eu (t, s) := E{|Xu (t)Xu (s)||Xu (t) = x, Xu (t) = Xu (s) = 0}. By Taylor’s formula, Xu (s) = Xu (t) + Xu (t)(s − t) + |s − t|1+η Yt,s,u , (6.3.6) where Yt,s,u is a centered Gaussian variable. In particular, for s > t, s Xu (s) − Xu (t) − Xu (t)(s − t) t (Xu (v) − Xu (t))dv . Yt,s,u = = (s − t)1+η (s − t)1+η Differentiating Var(X(t)) = t2γ with respective to t twice, we see that 2(Var(X (t))) + E{X(t)X (t)} = 2γ(2γ − 1)t2γ−2 . Since |E{X(t)X (t)}| ≤ Var(X(t))Var(X (t)) = t2γ Var(X (t)), together with condition (6.3.2), we get Var(X (t)) ≤ C1 (tN1 + 1) for some positive constants C1 and N1 . Combining this fact with conditions (6.3.2) and (6.3.3), we obtain sup Var(Yt,s,u ) ≤ C2 uN2 t0 /2≤t t and |s − t| → 0, there exist positive constants C4 , C5 , N4 and N5 such that for large x and u, |E{Xu (t)|Xu (t) = x, Xu (t) = Xu (s) = 0}| = |E{Xu (t)|Xu (t) = x, Xu (t) = 0, Xu (t) + |s − t|η Yt,s,u = 0}| ≤ |E{Xu (t)|Xu (t) = x, Xu (t) = 0, Xu (t) = 0}| + o(1)uN4 |x| ≤ C4 |x| + o(1)uN |x| ≤ C5 |x|uN5 , detCov(Xu (t), Xu (t), Xu (t)) where the last line is due to condition (6.3.2). In fact, let (ξ1 , ξ2 , ξ3 ) be a non-degenerate 160 Gaussian vector, then detCov(ξ1 , ξ1 + ξ2 , ξ1 + ξ2 + ξ3 ) = detCov(ξ1 , ξ2 , ξ3 ). By using this identity, we see that condition (6.3.2) implies that there exist positive constants C0 and N0 such that for large u, sup t0 /2≤t0,t0 /2≤t≤2t0 Var(Xu (t)|Xu (t), Xu (t)). One can check that the second derivative of the variance function of Xu (t), Var(Xu (t)) = t2γ , (1+ctβ )2 at t0 is not equal to 0. Therefore supu>0 E{Xu (t0 )Xu (t0 )} < 0 and moreover, κ2 < σ 2 . For any ε > 0, if |s − t| is sufficiently small, then for large u, ∞ u1−γ/β x2 p Xu (t) (x|Xu (t) = Xu (s) = 0)dx ≤ 2−2γ/β −u 2 e 2κ +ε . Note that pX (t),X (s) (0, 0) ≤ u u 2π 1 detCov(Xu (t), Xu (s)) , and by the Taylor expansion, for |s − t| → 0 and large u, detCov(Xu (t), Xu (s)) = detCov(Xu (t), Xu (t) + Xu (t)(s − t) + |s − t|1+η Yt,s,u ) = |s − t|2 detCov(Xu (t), Xu (t) + |s − t|η Yt,s,u ) = |s − t|2 detCov(Xu (t), Xu (t))(1 + uN7 o(1)), where N7 is some positive constant. Note that detCov(Xu (t), Xu (t)) = detCov(Xu (t), Xu (t), Xu (t)) . Var(Xu (t)|Xu (t), Xu (t)) 162 Thus by conditions (6.3.2) and (6.3.3), there exists positive constant N8 such that for small |s − t| and large u, uN8 . pX (t),X (s) (0, 0) ≤ u u |s − t| Note that when u tends to infinity, the polynomials of u will be killed by the exponential decay of u. 
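(Explicitly: for any fixed $N>0$ and $0<\varepsilon'<\varepsilon$, $u^{N}\exp\{-u^{2-2\gamma/\beta}/(2\kappa^2+\varepsilon')\}=o\big(\exp\{-u^{2-2\gamma/\beta}/(2\kappa^2+\varepsilon)\}\big)$ as $u\to\infty$, because $2-2\gamma/\beta>0$; so every polynomial factor $u^{N_j}$ collected above can be absorbed by slightly enlarging the constant in the exponent.)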
Plugging these results into (6.3.5), we obtain that for any ε > 0, there exists δ > 0 small enough, such that for large u, E{M 1−γ/β (Xu , (t0 − δ, t0 + δ))[M 1−γ/β (Xu , (t0 − δ, t0 + δ)) − 1]} u u −u ≤e 2−2γ/β 2κ2 +ε ≤ C7 δ exp t0 +δ − t0 +δ |s − t|η−1 dtds t0 −δ t0 −δ u2−2γ/β 2κ2 + ε for some positive constant C7 . The set [t0 /2, 2t0 ] may be covered by congruent intervals Ii = [ai , ai+1 ] with disjoint interiors such that the lengths are less than δ/2. By similar discussions in Lemma 5.1.2, we only need to consider non-neighboring Ii = [ai , ai+1 ] and Ij = [aj , aj+1 ], say aj −ai+1 ≥ δ/2. Then E{M 1−γ/β (Xu , Ii )M 1−γ/β (Xu , Ij )} u u = ai+1 ai aj+1 aj ∞ dtds u1−γ/β ∞ u1−γ/β dxdyE{|Xu (t)Xu (s)||Xu (t) = x, Xu (s) = y, Xu (t) = Xu (s) = 0}pX (t),X (s) (0, 0|Xu (t) = x, Xu (s) = y)pXu (t),Xu (s) (x, y). u u Similarly to (6.3.9), by conditions (6.3.2) and (6.3.3), there exists a positive constant N9 163 such that for large u, E{|Xu (t)Xu (s)||Xu (t) = x, Xu (s) = y, Xu (t) = Xu (s) = 0} ≤ uN9 (x2 + y 2 ) ; [detCov(Xu (t), Xu (s), Xu (t), Xu (s))]2 and also, pX (t),X (s) (0, 0|Xu (t) = x, Xu (s) = y) u u 1 ≤ 2π detCov(Xu (t), Xu (s)|Xu (t), Xu (s)) = ≤ 1 2π detCov(Xu (t), Xu (s)) detCov(Xu (t), Xu (s), Xu (t), Xu (s)) uN9 detCov(Xu (t), Xu (s), Xu (t), Xu (s)) . Thus by condition (6.3.4), there exists a positive constant N10 such that for large u, E{Mu (Ii )Mu (Ij )} ≤ u2N9 (ai+1 − ai )(aj+1 − aj ) ∞ ∞ u1−γ/β u1−γ/β dxdy(x2 + y 2 )pXu (t),Xu (s) (x, y) −5/2 × inf t,s∈[t0 /2,2t0 ]:|t−s|≥δ/2 ≤ uN10 (a detCov(Xu (t), Xu (s), Xu (t), Xu (s)) ∞ i+1 − ai )(aj+1 − aj ) ∞ u1−γ/β u1−γ/β 164 dxdy(x2 + y 2 )pXu (t),Xu (s) (x, y). By (A1 ), R(δ) = sup|s−t|≥δ r(t, s) is strictly less than 1 hence for u sufficiently large, ∞ ∞ sup |s−t|≥δ/2 u1−γ/β ≤ u1−γ/β dxdy(x2 + y 2 )pXu (t),Xu (s) (x, y) E{(Xu (t)Xu (s))2 I(Xu (t) ≥ u1−γ/β , Xu (s) ≥ u1−γ/β )} sup |s−t|≥δ/2 ≤ E{(Xu (t) + Xu (s))4 )I(Xu (t) + Xu (s) ≥ 2u1−γ/β )} sup |s−t|≥δ/2 ≤ u4 exp − u2−2γ/β . 1 + R(δ/2) Combining the results completes the proof. Theorem 6.3.2 Suppose the assumptions in Lemma 6.3.1 hold. Then there exists α > 0 such that as u → ∞, P sup(X(t) − ctβ ) ≥ u = P sup Xu (t) ≥ u1−γ/β =− t≥0 2t0 t≥0 ∞ dt t0 /2 + o exp 2 E{Xu (t)|Xu (t) = x, Xu (t) = 0} −x2 /(2θu (t)) dx e 2π detCov(Xu (t), Xu (t)) u1−γ/β − u2−2γ/β − αu2−2γ/β 2σ 2 (6.3.10) , X(u1/β t) 2 where Xu (t) = γ/β and θu (t) = Var(Xu (t)|Xu (t)). β) u (1+ct Proof The first equality in (6.3.10) is the result in (6.3.1). Note that Xu (t) ≥ u1−γ/β sup P ≤ P sup Xu (t) ≥ u1−γ/β t≥0 t0 /2≤t≤2t0 ≤P sup Xu (t) ≥ u1−γ/β + P t0 /2≤t≤2t0 +P sup 0≤t≤t0 /2 sup Xu (t) ≥ u1−γ/β , t≥2t0 165 Xu (t) ≥ u1−γ/β (6.3.11) where the last two terms are super-exponentially small due to the Borell-TIS inequality [cf. Theorem 2.1.1 in Adler and Taylor (2007)]. On the other hand, by Lemma 6.3.1 and the bounds in (2.3.3), P Xu (t) ≥ u1−γ/β sup t0 /2≤t≤2t0 u2−2γ/β − αu2−2γ/β 2 2σ 2t0 ∞ 2 E{Xu (t)|Xu (t) = x, Xu (t) = 0} −x2 /(2θu (t)) =− dt e dx 2π detCov(Xu (t), Xu (t)) t0 /2 u1−γ/β u2−2γ/β − αu2−2γ/β , + o exp − 2 2σ = E{M 1−γ/β (Xu , (t0 /2, 2t0 ))} + o exp u − where the last equality comes from the combination of similar discussions in Lemma 2.3.2 and Lemma 6.3.1. Combining this with (6.3.11) yields the desired result. Applying the Laplace method, we obtain the following result. Corollary 6.3.3 Under the assumptions in Theorem 6.3.2, one has that as u → ∞, P sup(X(t) − ctβ ) ≥ u = P sup Xu (t) ≥ u1−γ/β t≥0 = t≥0 Var(Xu (t0 )) +1 E{Xu (t0 )Xu (t0 )} −1/2 Ψ u1−γ/β (1 + o(1)). 
σ Example 6.3.4 Let β > H > 2, X(t) = tH Z(t), Xu (t) = X(tu1/β ) uH/β (1+ctβ ) = tH Z(tu1/β ) , 1+ctβ where Z is a smooth stationary Gaussian process with covariance r(t) and r(0) = 1. Then Var(X(t)) = t2H , and P sup(X(t) − ctβ ) ≥ u = P sup Xu (t) ≥ u1−H/β . t>0 t>0 166 Notice that Var(Xu (t)) attains it maximum Var(Xu (t0 )) = 2H/β −2 β H c(β − H) β−H at the unique point t0 = 1/β H . c(β − H) By tedious computations, we get H 1/β HtH−1 − c(β − H)tH+β−1 1/β ) + t u Z(tu Z (tu1/β ), Xu (t) = β )2 β (1 + ct 1 + ct and Xu (t) = H(H − 1)tH−2 + c((2H − β)(H − 1) − β 2 )tH+β−2 + c2 (β − H)(1 − H)t2β+H−2 (1 + ctβ )3 × Z(tu1/β ) + tH u2/β 2[HtH−1 (1 + ctβ ) − cβtH+β−1 ]u1/β Z (tu1/β ) + Z (tu1/β ). (1 + ctβ )2 1 + ctβ Notice that E{Z(t)Z (t)} = 0 and Var(Z (t)) = −E{Z(t)Z (t)} = r (0) for all t, we obtain Var(Xu (t)) = (HtH−1 − c(β − H)tH+β−1 )2 t2H u2/β − r (0) (1 + ctβ )4 (1 + ctβ )2 = t2H−2 (H − c(β − H)tβ )2 − t2 u2/β r (0) , (1 + ctβ )2 (1 + ctβ )2 and E{Xu (t)Xu (t)} = (H(H − 1) + c((2H − β)(H − 1) − β 2 )tβ + c2 (β − H)(1 − H)t2β (1 + ctβ )2 + t2 u2/β r (0) t2H−2 . (1 + ctβ )2 167 It follows that −H(β − H) + t2 u2/β r (0) E{Xu (t0 )Xu (t0 )} 0 = Var(Xu (t0 )) + E{Xu (t0 )Xu (t0 )} −H(β − H) =1− H 2/β−1 u2/β r (0) c2/β (β − H)2/β+1 , and thus P sup(X(t) − ctβ ) > u = P sup Xu (t) > u1−H/β t>0 t>0 ∼ 1/2 E{Xu (t0 )Xu (t0 )} u1−H/β Ψ Var(Xu (t0 )) + E{Xu (t0 )Xu (t0 )} Var(Xu (t0 ))1/2 = 1− H 2/β−1 u2/β r (0) 1/2 c2/β (β − H)2/β+1 Ψ 168 u1−H/β Var(Xu (t0 ))1/2 . Chapter 7 Excursion Probability of Gaussian Random Fields on Sphere In this chapter, we consider a real-valued Gaussian random field X = {X(x) : x ∈ SN } indexed on the N -dimensional unit sphere SN . The approximations to excursion probability of the field P{supx∈SN X(x) ≥ u}, as u → ∞, are obtained for two cases: (i) X is locally isotropic and the sample path is non-smooth; (ii) X is isotropic and the sample path is twice differentiable. For the first case, it is shown that the asymptotics is similar to Pickands’ approximation on Euclidean space which involves Pickands’ constant; while for the second case, we use the expected Euler characteristic method to obtain a more precise approximation such that the error is super-exponentially small. 169 7.1 Notations For x = (x1 , . . . , xN +1 ) ∈ SN , its corresponding spherical coordinate θ = (θ1 , . . . , θN ) is defined as follows. x1 = cos θ1 , x2 = sin θ1 cos θ2 , x3 = sin θ1 sin θ2 cos θ3 , . . . xN = sin θ1 sin θ2 · · · sin θN −1 cos θN , xN +1 = sin θ1 sin θ2 · · · sin θN −1 sin θN , where 0 ≤ θi ≤ π for 1 ≤ i ≤ N − 1 and 0 ≤ θN < 2π. Throughout this chapter, for two points x = (x1 , . . . , xN +1 ) and y = (y1 , . . . , yN +1 ) on SN , we always denote by θ = (θ1 , . . . , θN ) the spherical coordinate of x and by ϕ = (ϕ1 , . . . , ϕN ) the spherical coordinate of y respectively. Let · , ·, · be Euclidean norm and inner product respectively. Denote by d(·, ·) the distance function in SN , i.e., d(x, y) = arccos x, y , ∀x, y ∈ SN . 7.2 Non-smooth Gaussian Fields on Sphere 7.2.1 Locally Isotropic Gaussian Fields on Sphere Let X = {X(x) : x ∈ SN } be a centered Gaussian random field with covariance function C satisfying C(x, y) = 1 − cdα (x, y)(1 + o(1)) as dα (x, y) → 0, 170 (7.2.1) for some constants c > 0 and α ∈ (0, 2]. Covariance functions satisfying (7.2.1) behave like isotropic in local sense, hence they fall under the general category of locally isotropic covariance. Also, there are many examples of covariances of isotropic Gaussian fields on SN satisfying (7.2.1). 
For instance, C(x, y) = α e−cd (x,y) , where c > 0 and α ∈ (0, 1]. Recall the spherical coordinate representation, we define X(θ) := X(x) and denote by C the covariance function of X accordingly. Lemma 7.2.1 Let x, y ∈ SN and let x be fixed. Then as d(y, x) → 0, N −1 d2 (y, x) ∼ (ϕ1 − θ1 )2 + (sin2 θ1 )(ϕ2 − θ2 )2 sin2 θi (ϕN − θN )2 , + ··· + i=1 where θ = (θ1 , . . . , θN ) and ϕ = (ϕ1 , . . . , ϕN ) are the spherical coordinates of x and y respectively. Proof Note that x, y ∈ SN implies x 2 = y 2 = 1, hence as d(y, x) → 0, x − y → 0 and cos y − x ∼ 1 − 1 y − x 2 = y, x . 2 Applying the spherical coordinates, we obtain that as d(y, x) → 0, or equivalently ϕ − θ → 0 (There is an exception for θ with θN = 0, since for those ϕ such that d(y, x) → 0 and ϕN tending to 2π, ϕ − θ does not tend to 0. In such case, we may treat θN to be 2π instead 171 of 0 and this does not affect the result thanks to the periodicity.), d2 (y, x) = arccos2 y, x ∼ y − x 2 = (cos ϕ1 − cos θ1 )2 + (sin ϕ1 cos ϕ2 − sin θ1 cos θ2 )2 + · · · + (sin ϕ1 sin ϕ2 · · · sin ϕN −1 cos ϕN − sin θ1 sin θ2 · · · sin θN −1 cos θN )2 + (sin ϕ1 sin ϕ2 · · · sin ϕN −1 sin ϕN − sin θ1 sin θ2 · · · sin θN −1 sin θN )2 = 2 − 2 cos(ϕ1 − θ1 ) + 2(sin ϕ1 sin θ1 )[1 − cos(ϕ2 − θ2 )] N −1 sin ϕi sin θi [1 − cos(ϕN − θN )]. + ··· + 2 i=1 It then follows from Taylor’s expansion that N −1 d2 (y, x) ∼ (ϕ1 − θ1 )2 + (sin ϕ1 sin θ1 )(ϕ2 − θ2 )2 sin ϕi sin θi (ϕN − θN )2 + ··· + i=1 N −1 sin2 θi (ϕN − θN )2 , ∼ (ϕ1 − θ1 )2 + (sin2 θ1 )(ϕ2 − θ2 )2 + · · · + i=1 completing the proof. Next, we need some existing results on the approximations to excursion probability of Gaussian random fields over Euclidean space. Let 0 < α ≤ 2 and let {Wt (s) : t ∈ RN , s ∈ [0, ∞)N } ba a Gaussian random field such that EWt (s) = − s α rt (s/ s ), Cov(Wt (s), Wt (v)) = s α rt (s/ s ) + v α rt (v/ v ) − s − v α rt ((s − v)/ s − v ), 172 (7.2.2) where rt (·) : SN −1 → R+ is a continuous function satisfying sup v∈SN −1 |rt (v) − rs (v)| → 0 as s → t. (7.2.3) Define r Hα (t) = lim K −N K→∞ ∞ 0 eu P sup Wt (s) ≥ u du. (7.2.4) s∈[0,K]N Denote by Hα the usual Pickands’ constant, that is ∞ Hα = lim K −N K→∞ 0 eu P sup Z(s) ≥ u du, s∈[0,K]N where {Z(s) : s ∈ [0, ∞)N } is a Gaussian random field such that EZ(s) = − s α , Cov(Z(s), Z(v)) = s α + v α − s − v α . r It is clear that Hα (t) becomes Hα when rt ≡ 1. Let D ⊂ RN be a bounded N -dimensional Jordan measurable set. Let Y = {Y (t), t ∈ RN } be a real-valued, centered Gaussian field such that the covariance function CY satisfies CY (t, t + s) = 1 − s α rt (s/ s )(1 + o(1)) as s → 0, (7.2.5) ¯ for some constant α ∈ (0, 2], uniformly over t ∈ D. We will make use of the following theorem of Chan and Lai (2006). Theorem 7.2.2 [Theorem 2.1 in Chan and Lai (2006)] Suppose the Gaussian random field {Y (t) : t ∈ RN } satisfies condition (7.2.5), in which rt (·) : SN −1 → R+ is a continuous 173 ¯ function such that the convergence (7.2.3) is uniformly in D and supt∈D,v∈SN −1 rt (v) < ∞. ¯ Then as u → ∞, P sup Y (t) ≥ u ∼ u2N/α Ψ(u) t∈D D r Hα (t)dt. Lemma 7.2.3 Let {Wt (s) : t ∈ RN , s ∈ [0, ∞)N } be a Gaussian random field satisfying (7.2.2) with ∀v ∈ SN −1 , rt (v) = Mt v α , where Mt are non-degenerate N × N matrices. Then for each t ∈ RN , r Hα (t) = |detMt |Hα . Proof Let Wt (s) = Wt (Mt−1 s), ∀s ∈ [0, ∞)N . Then under the above conditions, Wt satisfies EWt (s) = − s α , Cov(Wt (s), Wt (v)) = s α + v α − s − v α . Let BK = [0, K]N and define Mt BK = {s ∈ RN : ∃v ∈ BK such that s = Mt v}. 
Note that Vol(Mt BK ) = |detMt |Vol(BK ) and sups∈B Wt (s) = sups∈Mt B Wt (s), it follows from K K (7.2.4) that ∞ 1 eu P sup Wt (s) ≥ u du Vol(BK ) 0 K→∞ s∈BK ∞ Vol(Mt BK ) 1 = lim eu P sup Wt (s) ≥ u du K→∞ Vol(BK ) Vol(Mt BK ) 0 s∈Mt B r Hα (t) = lim K 1 K→∞ Vol(Mt BK ) 0 ∞ = |detMt | lim 174 eu P sup s∈Mt BK Wt (s) ≥ u du. By modifying the proofs in Qualls and Watanabe (1973), we can check that ∞ 1 eu P sup Wt (s) ≥ u du, K→∞ Vol(Mt BK ) 0 s∈Mt BK Hα = lim completing the proof. Now we can prove our main result. Theorem 7.2.4 Let {X(x) : x ∈ SN } be a centered Gaussian random field satisfying condition (7.2.1) and let T ⊂ SN be an N -dimensional Jordan measurable set. Then as u → ∞, P sup X(x) ≥ u ∼ cN/α Area(T )Hα u2N/α Ψ(u), x∈T where Area(T ) denotes the spherical area of T . Proof Let Mθ = c1/α diag(1, sin θ1 , . . . , N −1 i=1 sin θi ). If N = 1, we set Mθ = c1/α . By Lemma 7.2.1, condition (7.2.1) becomes C(θ, θ + ξ) = 1 − ξ α rθ (ξ/ ξ )(1 + o(1)) as ξ → 0, where rθ (τ ) = Mθ τ α , ∀τ ∈ SN −1 . Denote by D the domain of T under spherical coordinates. Then by Theorem 7.2.2, as u → ∞, P sup X(x) ≥ u = P sup X(θ) ≥ u ∼ u2N/α Ψ(u) x∈T θ∈D D r Hα (θ)dθ. It follows from Lemma 7.2.3 that for any θ such that Mθ is non-degenerate( i.e., 175 (7.2.6) N −1 i=1 sin θi = 0), N −1 sinN −i θi Hα . r Hα (θ) = cN/α i=1 Note that ( N −1 sinN −i θi )dθ is the spherical area element and Mθ are non-degenerate for i=1 θ ∈ D almost everywhere, we obtain D r Hα (θ)dθ = cN/α Area(T )Hα . Plugging this into (7.2.6), we finish the proof. 7.2.2 Standardized Spherical Fractional Brownian Motion Theorem 7.2.4 is an application of Lemma 7.2.1 and Theorem 7.2.2, and it provides a nice formula since (7.2.1) has a simple form. More generally, the local behavior of covariance function may be more complicated, but we can still apply Lemma 7.2.1 to find the corresponding local behavior of covariance function under spherical coordinates and then apply Theorem 7.2.2 to obtain the asymptotics for the excursion probability. Here, we present an example about spherical fractional Brownian motion on sphere. Let o be a fixed point on SN . The Spherical Fractional Brownian Motion Bβ (x) is defined as a centered real-valued Gaussian random field such that B(o) = 0 E(B(x) − B(y))2 = d2β (x, y) ∀x, y ∈ SN , 176 where β ∈ (0, 1/2]. It follows immediately that Cov(B(x), B(y)) = 1 2β d (x, o) + d2β (y, o) − d2β (x, y) . 2 Without loss of generality, we take o = (1, 0, . . . , 0) ∈ RN +1 , whose corresponding spherical coordinate is (0, . . . , 0) ∈ RN . Define B(x) X(x) = β , d (x, o) ∀x ∈ SN \{o}. Then the covariance is d2β (x, o) + d2β (y, o) − d2β (x, y) . 2dβ (x, o)dβ (y, o) C(x, y) = Cov(X(x), X(y)) = Note that under the spherical coordinates, d(x, o) = θ1 and d(y, o) = ϕ1 , together with Lemma 7.2.1, we obtain that as d(x, y) → 0, C(θ, ϕ) = Cov(X(θ), X(ϕ)) = 1 − (1 + o(1)) 1 2β 2θ1 (ϕ1 − θ1 )2 + (sin2 θ1 )(ϕ2 − θ2 )2 N −1 sin2 θi + ··· + (ϕN − θN )2 i=1 Let Mθ = 1 21/(2β) θ1 N −1 diag 1, sin θ1 , . . . , rθ (τ ) = Mθ τ 2β , sin θi , i=1 ∀τ ∈ SN −1 . 177 β . Then as ξ = ϕ − θ → 0, C(θ, θ + ξ) = 1 − ξ 2β rθ (ξ/ ξ )(1 + o(1)). Let T ⊂ SN be an N -dimensional Jordan measurable set such that o ∈ T , and denote its / ¯ domain under spherical coordinates by D. Then by Theorem 7.2.2, as u → ∞, P sup X(x) ≥ u = P sup X(θ) ≥ u ∼ uN/β Ψ(u) x∈T θ∈D D r H2β (θ)dθ. It follows from Lemma 7.2.3 that for any θ such that Mθ is non-degenerate( i.e., N −1 i=1 sin θi = 0), r H2β (θ) = N −1 1 N 2N/(2β) θ1 sinN −i θi H2β . 
i=1 Therefore, N −1 P sup X(x) ≥ u ∼ x∈T 7.3 uN/β Ψ(u)2−N/(2β) H2β D −N θ1 sinN −i θi dθ. i=1 Smooth Isotropic Gaussian Fields on Sphere In this section we consider the excursion probabilities for smooth isotropic Gaussian fields on sphere. 178 7.3.1 Preliminaries λ Given λ > 0 and an integer n ≥ 0, the function Pn (t) is defined by the expansion ∞ (1 − 2rt + r2 )−λ λ rn Pn (t), = t ∈ [−1, 1], n=0 λ and Pn (t) is called the ultraspherical polynomial (or Gegenbauer polynomial ) of degree n. 0 If λ = 0, we follow Schoenberg (1942) and set Pn (t) = cos(n arccos t) = Tn (t), where Tn , n ≥ 0, are Chebyshev polynomials of the first kind defined by the expansion 1 − rt = 1 − 2rt + r2 ∞ rn Tn (t), t ∈ [−1, 1]. n=0 λ For reference later on, we need the following formulae on Pn . 0 o (i). For all n ≥ 0, Pn (1) = 1, and if λ > 0 [cf. Szeg¨ (1975, p.80)], λ Pn (1) = n + 2λ − 1 . n (7.3.1) (ii). For all n ≥ 0, d 0 1 P (t) = nPn−1 (t), dt n (7.3.2) and if λ > 0 [cf. Szeg¨ (1975, p.81)], o d λ λ+1 P (t) = 2λPn−1 (t). dt n (7.3.3) The following theorem by Schoenberg (1942) characterizes the covariance function of an isotropic Gaussian field on sphere [see also Gneiting (2012)]. 179 Theorem 7.3.1 Let N ≥ 1, then a continuous function C(·, ·) : SN × SN → R is the covariance of an isotropic Gaussian field on SN if and only if it has the form ∞ λ an Pn ( x, y ), C(x, y) = x, y ∈ SN , n=0 where λ = (N − 1)/2, an ≥ 0, ∞ λ n=0 an Pn (1) < ∞. ∞ 0 n=0 an Pn (1) Remark 7.3.2 Note that for N = 1, λ = 0 and ∞ n=0 an < ∞; while for N ≥ 2, λ = (N − 1)/2 and by (7.3.1), equivalent to ∞ N −2 a n n=0 n < ∞ is equivalent to ∞ λ n=0 an Pn (1) < ∞ is < ∞. λ When N = 2, λ = 1/2 and Pn become Legendre polynomials. For more results on isotropic Gaussian fields on S2 , we refer to a recent monograph by Marinucci and Peccati (2011). The following statement (S) is a smoothness condition for Gaussian fields on sphere. In Lemma 7.3.3 below, we show that it implies C(·, ·) ∈ C 4 (SN × SN ). (S). The covariance C(·, ·) of {X(x) : x ∈ SN } satisfies ∞ λ an Pn ( x, y ), C(x, y) = x, y ∈ SN , n=0 where λ = (N − 1)/2, an ≥ 0, and ∞ 8 n=1 n an < ∞ if N = 1, ∞ N +6 a n n=1 n < ∞ if N ≥ 2. Lemma 7.3.3 Let {X(x) : x ∈ SN } be an isotropic Gaussian field such that (S) is fulfilled. Then the covariance C(·, ·) ∈ C 4 (SN × SN ) and hence X(·) ∈ C 2 (SN ) a.s. 180 Proof λ We first consider N ≥ 2. By Theorem 7.3.1, each Pn ( t, s ) is the covariance of an isotropic Gaussian field on SN and hence ∀x, y ∈ SN . λ λ λ |Pn ( x, y )| ≤ Pn ( x, x ) = Pn (1), (7.3.4) λ Combining (S) with (7.3.1), (7.3.3) and (7.3.4), together with the fact P0 (t) ≡ 1, we obtain that there exist positive constants M1 and M2 such that ∞ sup d4 λ P (t) dt4 n an t∈[−1,1] n=0 ∞ ∞ λ+4 an Pn−4 (1) ≤ M1 nN +6 an < ∞. ≤ M2 n=4 n=1 This gives C 4 (SN × SN ). The proof for N = 1 is similar once we apply both (7.3.2) and (7.3.3). By Schoenberg (1942) or Gneiting (2012), C(·, ·) is a covariance function on SN for every N ≥ 1 if and only if it has the form ∞ bn x, y n , C(x, y) = x, y ∈ SN , n=0 where bn ≥ 0 and ∞ n=0 bn < ∞. Then we may state (S ) below as another form of smoothness condition for Gaussian fields on sphere. (S ). The covariance C(·, ·) of {X(x) : x ∈ SN } satisfies ∞ bn x, y n , C(x, y) = n=0 where bn ≥ 0 and ∞ 4 n=0 n bn < ∞. 181 x, y ∈ SN , We obtain an analogue of Lemma 7.3.3 below. Since the proof is similar, it is omitted. Lemma 7.3.4 Let {X(x) : x ∈ SN } be an isotropic Gaussian field such that (S ) is fulfilled. 
Then the covariance C(·, ·) ∈ C 4 (SN × SN ) and hence X(·) ∈ C 2 (SN ) a.s. 7.3.2 Excursion Probability Let χ(Au (X, SN )) be the Euler characteristic of excursion set Au (X, SN ) = {x ∈ SN : X(x) ≥ u}. Denote by Hj (x) the Hermite polynomial, i.e., Hj (x) = 2 (−1)j ex /2 dj −x2 /2 e . dxj Let ωj := Area(Sj ), where Sj is the j-dimensional unit sphere. Lemma 7.3.5 Let {X(x) : x ∈ SN } be a centered, unit-variance, isotropic Gaussian field satisfying (S). Suppose also that the joint distribution of (X(x), X(x), 2 X(x)) is nondegenerate for each x ∈ SN . Then N E{χ(Au (X, SN ))} (C )j/2 Lj (SN )ρj (u), = j=0 where C =    (N − 1)   n+N −1 ∞ n=1 N ∞ 2 n=1 n an an if N ≥ 2, (7.3.5) if N = 1, 2 ρ0 (u) = Ψ(u), ρj (u) = (2π)−(j+1)/2 Hj−1 (u)e−u /2 with Hermite polynomials Hj−1 for 182 j ≥ 1 and, for j = 0, . . . , N ,   N ωN   2 j ωN −j N) = Lj (S   0  if N − j is even (7.3.6) otherwise are the Lipschitz-Killing curvatures of SN . Remark 7.3.6 In Lemma 7.3.5, if condition (S) is replaced by (S ), then it can be seen from the proof below that C would be changed to a much simpler form ∞ C = nbn . (7.3.7) n=1 Proof Due to Theorem 12.4.1 in Adler and Taylor (2007), we only need to show that the Lipschitz-Killing curvatures induced by X on SN is Lj (X, SN ) = (C )j/2 Lj (SN ). The Riemannian structure induced by X on SN is defined as [cf. Adler and Taylor (2007, p.305)] X,SN gx0 (ξx0 , σx0 ) := E{(ξx0 X) · (σx0 X)} = ξx0 σx0 C(x, y)|x=y=x0 , ∀x0 ∈ SN , where ξx0 , σx0 ∈ Tx0 SN , the tangent space of SN at x0 . We may choose two smooth curves on SN , say γ(t), τ (s), t, s ∈ [0, 1], such that γ(0) = τ (0) = x0 and γ (0) = ξx0 , τ (0) = σx0 . 183 We first consider N ≥ 2, then ξx0 σx0 C(x, y)|x=y=x0 ∂ ∂ ∂ ∂ C(γ(t), τ (s))|t=s=0 = = ∂t ∂s ∂t ∂s = ∂ ∂t ∞ λ an Pn ( γ(t), τ (s) )|t=s=0 n=0 ∞ λ+1 an (N − 1)Pn−1 ( γ(t), x0 ) γ(t), σx0 |t=0 n=1 ∞ λ+2 an (N − 1)(N + 2)Pn−2 ( x0 , x0 ) ξx0 , x0 x0 , σx0 = n=2 ∞ λ+1 an (N − 1)Pn−1 ( x0 , x0 ) ξx0 , σx0 + n=1 ∞ λ+1 an (N − 1)Pn−1 (1) = ξx0 , σx0 = C ξx0 , σx0 , n=1 where the third and fourth equalities are from (7.3.3), the fifth equality comes from x0 , x0 = 1 and ξx0 , x0 = σx0 , x0 = 0, since the vector x0 is always orthogonal to its tangent space. The case of N = 1 can be proved similarly once we apply (7.3.2) instead of (7.3.3). Hence the induced metric is X,SN gx 0 (ξx0 , σx0 ) = C ξx0 , σx0 , ∀x0 ∈ SN . By the definition of Lipschitz-Killing curvatures, one has Lj (X, SN ) = (C )j/2 Lj (SN ), where Lj (SN ) are the original Lipschitz-Killing curvatures of SN given by (7.3.6). We finish the proof. 184 Theorem 7.3.7 Suppose the conditions in Lemma 7.3.5 hold. Then, under the notations therein, there exists α > 0 such that as u → ∞, N P 2 2 (C )j/2 Lj (SN )ρj (u) + o(e−αu −u /2 ). sup X(x) ≥ u = x∈SN (7.3.8) j=0 Remark 7.3.8 Under the conditions in Theorem 7.3.7, the covariance function C satisfies (7.2.1) with α = 2. Also note that when α = 2, Pickands’ constant H2 = π −N/2 . Then one can check that the approximation in Theorem 7.2.4 only provides the leading term of the approximation in Theorem 7.3.7. This also affects the errors in two approximations: the 2 error in the former one is only o(1), while the error in the latter one is o(e−αu ). Proof The result is an immediate consequence of Lemma 7.3.5 and Theorem 14.3.3 in Adler and Taylor (2007). If the set SN is replaced by a more general subset T ⊂ SN , by simply revising Lemma 7.3.5 and applying Theorem 14.3.3 in Adler and Taylor (2007) again, we obtain the following corollary. 
Corollary 7.3.9 Suppose the conditions in Lemma 7.3.5 hold. Let T ⊂ SN be a k-dimensional, locally convex, regular stratified manifold [cf. Adler and Taylor (2007)], then there exists α > 0 such that as u → ∞, k P sup X(x) ≥ u = x∈T 2 2 (C )j/2 Lj (T )ρj (u) + o(e−αu −u /2 ), j=0 where Lj (T ) are the Lipschitz-Killing curvatures of SN [cf. Adler and Taylor (2007)], C and ρj (u) are as in Lemma 7.3.5. 185 Example 7.3.10 Canonical field on SN , whose covariance structure is given by C(x, y) = x, y . Since C(x, y) = cos d(x, y), it satisfies 1 C(x, y) = 1 − d2 (x, y)(1 + o(1)), 2 as d(x, y) → 0. Applying Theorem 7.2.4, one can get an approximation to the excursion probability. However, by applying Theorem 7.3.7, we will get a more precise approximation for N ≥ 2. Example 7.3.11 Consider the Hamiltonian of the pure p-spin model on SN −1 HN,p (x) = 1 N Ji ,...,ip xi1 · · · xip , N (p−1)/2 i ,...,i =1 1 p 1 ∀x = (x1 , · · · , xN ) ∈ SN −1 , where Ji1 ,...,ip are independent standard Gaussian random variables. Then E{HN,p (x)HN,p (y)} = 1 N p−1 x, y p . Let ∞ X(x) = bp HN,p (x), p=2 where (bp )p≥2 is a sequence of positive numbers such that ∞ p p=2 2 bp < ∞, and HN,p and HN,p are independent for any p = p . Then X is a smooth Gaussian random field on sphere with covariance ∞ C(x, y) = p=2 b2 p N p−1 186 x, y p . Example 7.3.12 We show how to apply Corollary 7.3.9. If T is the semisphere of dimension one, then L0 (T ) = 1 and L1 (T ) = π. If T is the semisphere of dimension two, then L0 (T ) = 1, L1 (T ) = π and L2 (T ) = 2π. Basically, L0 (T ) is the Euler characteristic, Lk (T ) is the volume and Lk−1 (T ) is half of the surface area. Usually, one may use Steiner’s formula [Adlar and Taylor (2007, p.142)] to compute the Lipschitz-Killing curvatures exactly. 2 Example 7.3.13 Consider the covariance structure C(x, y) = 1 − π d(x, y), which can be verified to be a valid covariance on sphere. Since d(x, y) = arccos x, y , we can write 2 C(x, y) = 1 − arccos x, y = π ∞ n=0 (2n)! 4n (n!)2 (2n + 1) x, y 2n+1 ∞ bn x, y n , := n=0 it is easy to check that ∞ n n=0 nb = ∞, hence Theorem 7.3.7 is not applicable. In fact, C(x, y) is not smooth neither. Instead, we may use Theorem 7.2.2 to get the approximation to excursion probability. 187 Chapter 8 Excursion Probability of Anisotropic Gaussian and Asymptotically Gaussian Random Fields 8.1 Preliminaries For vectors u, v ∈ Rd , the relation u ≤ v means ui ≤ vi for all i and u < v means ui < vi for all i, also we let uv := (u1 v1 , · · · , ud vd ). For t = (t1 , · · · , td ) ∈ Rd and ζ = (ζ1 , · · · , ζd ) > 0, define d It,ζ = [ti , ti + ζi ). i=1 Let · be the Euclidean norm of a vector, · be the greatest integer function, µ(·) be the volume of set. For bounded and Jordan measurable set D ⊂ Rd and δ > 0, define [D]δ = {t + u : t ∈ D, u ≤ δ}. 188 Let 0 < α ≤ 1, p = (p1 , · · · , pd ) with 0 < pi ≤ 2 for all 1 ≤ i ≤ d and let {Wt (u) : u ∈ [0, ∞)d } be a Gaussian random field such that Wt (0) = 0 and d E(Wt (u)) = − Cov(Wt (u), Wt (v)) = + − p ui i p p α udd u11 pi , · · · , pi d d i=1 ui i=1 ui i=1 p p d α udd u11 pi ui rt pi , · · · , pi d d i=1 ui i=1 ui i=1 p p d α vd d v1 1 pi vi rt p ,··· , pi d d vi i i=1 i=1 vi i=1 d α |u1 − v1 |p1 |ui − vi |pi rt ,··· d |ui − vi |pi i=1 i=1 rt , , |ud − vd |pd d i=1 |ui − vi |pi , (8.1.1) where rt : S = {v ∈ [0, ∞)d : d i=1 vi = 1} → R+ is a continuous function satisfying sup |rt (v) − rs (v)| → 0, as t − s → 0. 
(8.1.2) v∈S In particular, we define ∞ HK (t) = 0 ey P sup 0≤ui ≤K,∀i Wt (u) > y dy, (8.1.3) H(t) = lim K −d HK (t). K→∞ Let L be a slowly varying function, define ∆c,i = min{x > 0 : xαpi L(x) = c−2 }, 189 ∀1 ≤ i ≤ d, (8.1.4) 2 2 − − and ∆c = (∆c,1 , · · · , ∆c,d ). For example, if L(x) ≡ 1, then ∆c = (c αp1 , · · · , c αpd ). Our goal is to investigate the asymptotic property of centered Gaussian fields satisfying the following condition: E(X(t)X(t + u)) d |ui |pi = 1 − (1 + o(1)) d α |ui L i=1 |pi |u1 |p1 rt d p i=1 |ui | i i=1 ,··· , |ud |pd d p i=1 |ui | i (8.1.5) , as u → 0, uniformly over t ∈ [D]δ . Theorem 8.1.1 Suppose Gaussian random field X satisfies condition (8.1.5), where 0 < α ≤ 1, p = (p1 , · · · , pd ) with 0 < pi ≤ 2 for all 1 ≤ i ≤ d, and rt : S → R+ is a continuous function such that the convergence in (8.1.2) is uniform in t ∈ [D]δ and supt∈[D] ,v∈S rt (v) < δ ∞. Then as c → ∞, d P sup X(t) > c t∈D 8.2 ∆−1 c,i ∼ Ψ(c) i=1 H(t)dt. D Asymptotically Gaussian Random Fields 2 For c > 0, let Xc be random fields such that EXc (t) = 0, EXc (t) = 0 for all c and t. Define ρc (t, u) = E(Xc (t)Xc (u)). We impose the following conditions for Xc . (C). There exist 0 < α ≤ 1, p = (p1 , · · · , pd ) with 0 < pi ≤ 2 for all 1 ≤ i ≤ d and a slowly 190 varying function L such that as u → 0, ρ(t, t + u) d = 1 − (1 + o(1)) |ui |pi d α |ui |pi rt L i=1 i=1 |u1 |p1 d p i=1 |ui | i ,··· , |ud |pd d p i=1 |ui | i uniformly over t ∈ [D]δ and c > 0. (B1). As c → ∞, P{Xc (t) > c − y/c} ∼ Ψ(c − y/c) uniformly over t ∈ [D]δ and positive, bounded values of y. (B2). The convergence in (8.1.2) is uniform over t ∈ [D]δ , with supt∈[D] ,v∈S rt (v) < ∞. δ Moreover, for any a > 0, a = {a1/p1 , · · · , a1/pd } and positive integers mi , as c → ∞, {c[Xc (t + ak∆c ) − Xc (t)] : 0 ≤ ki < mi }|Xc (t) = c − y/c ⇒ {Wt (ak) : 0 ≤ ki < mi } uniformly over positive, bounded values of y. (B3). There exists a positive function h such that limy→∞ h(y) = 0 and P{Xc (t + u∆c ) > c − γ/c, Xc (t) ≤ c − y/c} ≤ h(y)Ψ(c), for all u ≥ 0 (u is a vector) and γ > 0. (B4). Let pi0 = min{pi , 1 ≤ i ≤ d}, a = (a1/p1 , · · · , a1/pd ). There exist nonincreasing func∞ tions Na on R+ and positive constants γa such that γa → 0 and Na (γa )+ 1 ω s Na (γa + 191 ω)dω = o(a d/pi 0) as a → 0, and sup P Xc (v) > c, Xc (t) ≤ c − y/c ≤ Na (γ)Ψ(c), v∈It,a∆c for all γa ≤ γ ≤ c and s > 0. q (B5). There exists a nonincreasing function f : [0, ∞) → R+ such that f (y) = O(e−y ) for some q > 0 and for all γ ≥ 0 and c sufficiently large, d |ui |pi P{Xc (t) > c − γ/c, Xc (t + u∆c ) > c − γ/c} ≤ Ψ(c − γ/c)f i=1 uniformly in t and t + u∆c belonging to [D]δ . For K > 0 and a > 0, let At = (At (K, a, c)) {t + ak∆c : 0 ≤ ki < mi , k ∈ Zd }, where mi = K/a1/pi and a = (a1/p1 , · · · , a1/pd ). As a discrete set, At will be used to approximate It,K∆c . Lemma 8.2.1 Under (C) and (B1)-(B3), ∞ HK,a (t) 0 ey P sup Wt (ak) > y dy 0≤ki c − γ/c u∈At ∼ Ψ(c − γ/c)(1 + HK,a (t)) uniformly in t ∈ [D]δ . 192 (8.2.1) Proof Let ε > 0. By (B3), there exists y ∗ > γ such that h(y ∗ ) < ε/( d mi ) and i=1 0≤P sup Xc (u) > c − γ/c − P{Xc (t) > c − γ/c} u∈At sup Xc (u) > c − γ/c, c − y ∗ /c < Xc (t) ≤ c − γ/c −P u∈At =P (8.2.2) sup Xc (u) > c − γ/c, Xc (t) ≤ c − y ∗ /c u∈At d mi h(y ∗ )Ψ(c) < εΨ(c), ≤ i=1 since card(At ) = d i=1 mi . By (B1), there exists ξc → 0 such that 2 |P{Xc (t) > c − y/c}/Ψ(c − y/c) − 1| = O(ξc ) (8.2.3) −1 2 uniformly for γ ≤ y ≤ y ∗ ; we can also assume that ξc (y ∗ −γ) ∈ Z. 
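The normal-tail relation Ψ(c − y/c) ∼ e^y Ψ(c), which is invoked in the next step and repeatedly throughout this chapter, is easy to confirm numerically. A quick Python sketch (the grids of c and y are arbitrary; the computation is done on the log scale to avoid underflow of the tail):

```python
import numpy as np
from scipy.stats import norm

# Check Psi(c - y/c) ~ e^y Psi(c) as c grows, for fixed y;
# Psi is the standard normal upper tail, computed via logsf.
for c in [5.0, 10.0, 20.0, 50.0]:
    for y in [0.5, 1.0, 2.0]:
        log_ratio = norm.logsf(c - y / c) - (y + norm.logsf(c))
        print(f"c = {c:5.1f}, y = {y:3.1f}:  ratio = {np.exp(log_ratio):.6f}")
```

The ratios tend to 1 as c grows, reflecting Ψ(c − y/c)/Ψ(c) ≈ exp(−(c − y/c)²/2 + c²/2) = exp(y − y²/(2c²)) → e^y.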
Since eξc = 1+ξc +O(ξc ) and Ψ(c − y/c) ∼ ey Ψ(c), (8.2.3) implies P{c − (y + ξc )/c < Xc (t) ≤ c − y/c} 2 2 = (1 + O(ξc ))ey+ξc Ψ(c) − (1 + O(ξc ))ey Ψ(c) ∼ ξc ey Ψ(c). By (B2), uniformly for t ∈ [D]δ and γ ≤ y ≤ y ∗ , P sup Xc (u) > c − γ/c, c − (y + ξc )/c < Xc (t) ≤ c − y/c u∈At ∼P sup Wt (ak) > y − γ P{c − (y + ξc )/c < Xc (t) ≤ c − y/c} 0≤ki y − γ ξc ey Ψ(c). 0≤ki c − γ/c, c − y ∗ /c < Xc (t) ≤ c − γ/c u∈At y ∗ −1 ∼ Ψ(c) γ ey P Wt (ak) > y − γ dy sup 0≤ki y dy. 0≤ki y}dy < ∞ for all k and At is a finite set, HK,a (t) is finite and its uniform continuity follows from (8.1.1) and (8.1.2), with the fact described in (B2) that the convergence in (8.1.2) is uniform over t ∈ [D]δ . Racall that supt∈[D] ,v∈S rt (v) < ∞, δ yielding the finiteness of supt∈[D] HK,a (t). δ Theorem 8.2.2 Let K > 0. Assume (C) and (B1)-(B4). Then as c → ∞, P sup Xc (u) > c u∈It,K∆c ∼ Ψ(c)(1 + HK (t)) uniform over t ∈ [D]δ , where HK (t) is defined in (8.1.3) and is finite and uniformly continuous in t ∈ [D]δ . 194 Proof Let a > 0. By (B4) and (8.2.1), we have for all large c, 0≤ P Xc (u) > c − P sup u∈It,K∆c sup Xc (u) > c Ψ(c) u∈At ≤ P c − γa /c < sup Xc (u) ≤ c u∈At + P Xc (v) > c, Xc (u) ≤ c − γa /c sup Ψ(c) (8.2.5) v∈Iu,a∆c u∈At d (K/a1/pi )Na (γa ) ≤ 2(Ψ(c − γa /c) − Ψ(c))(1 + HK,a (t))/Ψ(c) + i=1 d ≤ 3(eγa − 1)(1 + HK,a (t)) + (K d /a i=1 1/pi )Na (γa ). d By (B4), for any ε > 0, we can choose a∗ small enough such that Na (γa )/a i=1 1/pi < ε/K d and 3(eγa − 1) < ε for all 0 < a ≤ a∗ . Therefore, by (8.2.1) and (8.2.5), (1 − ε)(1 + HK,a (t)) ≤ P sup Xc (u) > c u∈It,K∆c Ψ(c) ≤ (1 + 2ε)(1 + HK,a (t)) + ε, for all large c and all t ∈ [D]δ and 0 < a ≤ a∗ . By uniform continuity of Wt (u), HK,a (t) → HK (t) as a ↓ 0. Therefore, 1 + HK,a (t) ≤ 1 + HK (t) ≤ (1 + ε)(1 + HK,a∗ (t)), (8.2.6) for all t ∈ [D]δ and 0 < a ≤ a∗ . First note that M 1 + supt∈[D] HK (t) < ∞ in view of (8.2.6) and Lemma 8.2.1, δ therefore |HK (t) − HK,a∗ (t)| ≤ M ε 195 (8.2.7) for all t ∈ [D]δ , by (8.2.6) with a = a∗ . Because HK,a∗ is uniformly continuous by Lemma 8.2.1, ∀ t − u < δ ∗ , t, u ∈ [D]δ , |HK,a∗ (t) − HK,a∗ (u)| ≤ ε, (8.2.8) for some δ ∗ > 0. It follows from (8.2.7) and (8.2.8) that, if t − u < δ ∗ , |HK (t) − HK (u)| ≤ |HK (t) − HK,a∗ (t)| + |HK (u) − HK,a∗ (u)| + |HK,a∗ (t) − HK,a∗ (u)| ≤ 2M ε + ε. Hence HK (t) is uniformly continuous in t ∈ [D]δ . Combining (8.2.7) and the definition of M yields that for all large c and t ∈ [D]δ , −εM − ε(1 − ε)M ≤ P sup Xc (u) > c u∈It,K∆c Ψ(c) − (1 + HK (t)) ≤ 2εM + ε(1 + 2ε)M. Since ε is arbitrary, this proves the theorem. Lemma 8.2.3 Under (C) and (B1)-(B4), supt∈[D] ,K≥1 K −d HK (t) < ∞ and {K −d HK : δ K ≥ 1} is uniformly equicontinuous on [D]δ , that is, sup K≥1,t,s∈[D]δ , t−s ≤ε |K −d HK (t) − K −d HK (s)| → 0, 196 as ε → 0. Proof Let a = (a1/p1 , · · · , a1/pd ), d N (K, a) = ( mi = K/a1/pi , d a−1/pi ) . mi )/( i=1 i=1 Note that the integrand of HK,a (t) involves the set {ak : 0 ≤ ki < mi }, which can be partitioned into N (K, a) + 1 disjoint subsets Lj such that card(Lj ) = 1 ≤ j ≤ N (K, a) and card(LN (K,a)+1 ) = d i=1 mi − N (K, a) d i=1 d i=1 a−1/pi for a−1/pi . It is possible that card(LN (K,a)+1 ) = 0, in this case LN (K,a)+1 is regarded as an empty set. We can therefore use the arguments at the end of the proof of Lemma 8.2.1 to bound N (K,a)+1 K −d P j=1 sup Wt (ak) > y k∈Lj −P sup Ws (ak) > y k∈Lj and thereby establish the uniform equicontinuity and boundedness of {K −d HK,a : K ≥ 1} on [D]δ . 
Moreover, by partitioning the cube [0, K)^d similarly into K^d cubes, it can be shown that sup_{K≥1, t∈[D]_δ} |K^{-d} H_K(t) − K^{-d} H_{K,a}(t)| → 0 as a → 0. Hence we can proceed as in (8.2.7) and (8.2.8), but with H_{K,a} and H_K replaced by K^{-d} H_{K,a} and K^{-d} H_K respectively, to prove the uniform equicontinuity and boundedness of {K^{-d} H_K : K ≥ 1}.

Lemma 8.2.4 Under (C) and (B1)-(B5), there exist constants s_K → 0 as K → ∞ such that

P{ sup_{u ∈ I_{t,K∆_c}} X_c(u) > c, sup_{v ∈ B\I_{t,K∆_c}} X_c(v) > c } ≤ s_K K^d Ψ(c)    (8.2.9)

for c large enough, uniformly over t ∈ [D]_δ and over subsets B of [D]_δ.
Let U = {σ ⊂ {1, · · · , d} : card(σ) = d − 1}, since Na is nonincreasing, it follows that Na (γa + λv ) v∈Bt ∞ ≤2 j=1 + ≤2 ≤2 + j pi + 2j) Na γa + a q i=1 σ∈U l∈σ Na (c)µ(B) d 1/pi ∆ ) c,i i=1 (a ∞ 2d (a−1/pl K j=1 + d (a−1/pl K 2d + 2j) Na (γa + dq (a 1/pi p q 0 j) i0 ) (8.2.15) σ∈U l∈σ Na (c)µ(B) d 1/pi ∆ ) c,i i=1 (a ∞ −1/pi 0 2d a (a−1/pl K 0 σ∈U l∈σ + 2a −1/pi 0 y) p q Na (γa + dq y i0 ) Na (c)µ(B) d 1/pi ∆ ) c,i i=1 (a ≤ εK d−1 , for all large c and small a, as can be shown by arguments similar to those in the case d = 1. Combining (8.2.11)-(8.2.15) yields the desired conclusion. Theorem 8.2.5 Assume (C) and (B1)-(B5). Let c → ∞ and c = o(∆c,i ) for all i and hence c ∆c = o(1) as c → ∞. Then P sup Xc (u) > c u∈It, c ∆c 201 ∼ H(t)Ψ(c) d , c (8.2.16) sup P Xc (u) > c, u∈It, c ∆c sup = o(Ψ(c) d ), c Xc (v) > c v∈B\It, ∆ c c (8.2.17) as c → ∞, uniformly over t ∈ [D]δ and over subsets B of [D]δ , where H(t) is defined in (8.1.3) and is uniformly continuous and bounded below on D. Proof Let ε > 0. There exists K ∗ such that sK ≤ ε/3 for all K ≥ K ∗ . For fixed t ∈ D and K ≥ K ∗ , define Λ = {u ∈ K∆c Zd : Iu,K∆c ⊂ It, c ∆c }, Λ = {u ∈ K∆c Zd (8.2.18) : Iu,K∆c ∩ It, c ∆c = ∅}, Ju = Iu,K∆c . Covering It, c ∆c by rectangles with edges length K∆c,i , 1 ≤ i ≤ d. and letting B be a subset of [D]δ containing It, c ∆c , we have u∈Λ sup Xc (v) > c − P sup Xc (v) > c, v∈Ju P v∈Ju sup Xc (w) > c w∈B\Ju (8.2.19) ≤P sup Xc (u) > c u∈It, c ∆c ≤ P u∈Λ sup Xc (v) > c . v∈Ju By Theorem 8.2.2 and Lemma 8.2.4, as c → ∞, (1 + HK (u) − sK K d ) (1+o(1))Ψ(c) u∈Λ ≤P sup (8.2.20) Xc (u) > c ≤ (1 + o(1))Ψ(c) u∈It, ∆ c c (1 + HK (u)), u∈Λ uniformly over t ∈ [D]δ . In view of c ∆c → 0 and the uniform equicontinuity in Lemma 8.2.3, we can choose c∗ large enough so that |K −d HK (u) − K −d HK (t)| ≤ ε/3 for all c ≥ c∗ , √ c ≥ K ≥ K ∗ , t ∈ D and u ∈ Λ. Putting this and the bound sK ≤ ε/3 in (8.2.20) and 202 dividing (8.2.20) by Ψ(c) d , we obtain that for all c ≥ c∗ , c (1 − ε)(K −d HK (t) − 2ε/3) ≤ P sup √ c ≥ K ≥ K ∗ and t ∈ D, Xc (u) > c u∈It, c ∆c (Ψ(c) d ) c (8.2.21) ≤ (1 + ε)(K −d HK (t) + ε/3), since card(Λ) ∼ card(Λ) ∼ K −d d . By Lemma 8.2.3, M c supt∈[D] ,K≥1 K −d HK (t) < ∞. δ Therefore, it follows from (8.2.21) that sup sup P Xc (u) > c u∈It, c ∆c t∈D for all c ≥ c∗ and √ c (Ψ(c) d ) − K −d HK (t) ≤ εM + 2ε/3, c (8.2.22) ≥ K ≥ K ∗ . Letting c → ∞ in (8.2.22) yields ˜ sup |K −d HK (t) − K −d HK (t)| ≤ 2εM + 4ε/3, ˜ t∈D ˜ if K, K ≥ K ∗ , establishing that {K −d HK } is uniformly Cauchy. Hence K −d HK (t) converges uniformly in t ∈ D to H(t), which is also bounded by M . We can therefore proceed as in the second paragraph of the proof of Theorem 8.2.2 to show that H(t) is uniformly continuous in t ∈ D. Moreover, taking K large enough such that supt∈D |K −d HK (t) − H(t)| ≤ ε/3, it follows from (8.2.22) that sup P t∈D sup Ψ(c) d − H(t) ≤ ε(M + 1) c Xc (u) > c u∈It, c ∆c for all c ≥ c∗ , proving (8.2.16). We next show that inf t∈D H(t) > 0. For the function f in (B5), we can choose a > 0 large enough so that k=0 f a d p i=1 |ki | i ≤ 1/2. Let K large enough and mi = K/a1/pi , 203 1 ≤ i ≤ d, and At = At (K, a, c) as defined before so that card(At ) = d i=1 mi . Then by (B1) and (B5), as c → ∞, sup Xc (u) > c P u∈At ≥ P(Xc (u) > c) − u∈At ≥ P(Xc (u) > c, Xc (v) > c) v∈At ,v=u (8.2.23) (1 + o(1))Ψ(c)/2 u∈At d mi Ψ(c)/2, = (1 + o(1)) i=1 uniformly in t ∈ D and mi ≥ 2. 
Combining (8.2.23) with Theorem 8.2.2 yields 1 + HK (t) = lim P sup c→∞ Xc (u) > c Ψ(c) u∈It,K∆c d ≥ lim sup P c→∞ sup Xc (u) > c u∈At Ψ(c) ≥ mi /2 i=1 uniformly in t ∈ D and mi ≥ 2. Since limK→∞ K −d HK (t) = H(t), it then follows that H(t) ≥ ( d a−1/pi )/2 for all t ∈ D. i=1 Finally, to prove (8.2.17), apply Lemma 8.2.4 to obtain that for all t ∈ D and large c, sup P Xc (u) > c, u∈It, c ∆c ≤ P u∈Λ sup Xc (v) > c v∈B\It, c ∆c sup Xc (v) > c, sup Xc (v) > c v∈Ju v∈B\Ju ≤ card(Λ)sK K d Ψ(c). Since sK → 0 as K → ∞ and card(Λ) ∼ K −d d as c /K → ∞, (8.2.17) follows. c 204 Theorem 8.2.6 Assume (C) and (B1)-(B5). Then as c → ∞, d P sup Xc (t) > c ∆−1 c,i ∼ Ψ(c) t∈D H(t)dt. D i=1 Proof A basic idea of the proof is to cover the set D by rectangles with edges length c ∆c,i , 1 ≤ i ≤ d, and also c → ∞ and c ∆c,i → 0 for all i and hence c ∆c → 0 as c → ∞. Define Λ, Λ and Ju as in (8.2.18) but with K∆c Zd replaced by c ∆c Zd , Iu,K∆c by Iu, c ∆c , and It, c ∆c by D. Then (8.2.19) still holds with these new definitions of Λ, Λ and Ju and also with B replaced by [D]δ . Labeling it as (8.2.19 ), the upper and lower bounds in (8.2.19 ) are both asymptotically equivalent to ( d c d −1 d i=1 ∆c,i ) ( c )Ψ(c) D H(t)dt = Ψ(c)( d ∆−1 ) D H(t)dt, since c ∆c → 0 and H(t) is continuous. This finishes the proof. i=1 c,i 8.3 Proof of Theorem 8.1.1 In view of Theorem 8.2.6, we only need to show that (B1)-(B5) holds for such Gaussian fields. (B1) is obvious. To show (B2), it follows from (8.1.5) that as c → ∞, E{c[X(t + u∆c ) − X(t)]|X(t) = c − y/c} = −c[1 − ρ(t, t + u∆c )](c − y/c) d |ui |pi →− i=1 α rt |u1 |p1 d p i=1 |ui | i 205 (8.3.1) ,··· , |ud |pd d p i=1 |ui | i , Cov{c[X(t + u∆c ) − X(t)], c[X(t + v∆c ) − X(t)]|X(t) = c − y/c} = c2 [ρ(t + u∆c , t + v∆c ) − ρ(t, t + u∆c )ρ(t, t + v∆c )] d → + − p p α p ui i udd u11 p ,··· , pi d d ui i i=1 i=1 ui i=1 p p d α vd d v1 1 pi vi rt pi , · · · , pi d d i=1 vi i=1 vi i=1 d α |u1 − v1 |p1 pi ,··· |ui − vi | rt d p i=1 |ui − vi | i i=1 rt (8.3.2) , |ud − vd |pd d i=1 |ui − vi |pi . Since {c[X(t + ak∆c ) − X(t)] : 0 ≤ ki < mi } is multivariate normal, (B2) then follows. Let γ > 0, φ be the density of standard normal. Since Ψ(c − z/c) ∼ ez Ψ(c) for all z ≥ 0 and there exist constants B > 0, B > 0 such that P{Wt (u) > z − γ} ≤ B exp(−B z 2 ), it follows from (8.3.1) and (8.3.2) that as c → ∞, P{X(t + u∆c ) > c − γ/c, X(t) ≤ c − y/c} c−y/c = P{X(t) ≤ c − y/c} = P{X(t) ≤ c − y/c} ∞ ≤ (1 + o(1))Ψ(c) y −∞ ∞ y P{X(t + u∆c ) > c − γ/c|X(t) = x}φ(x)dx P{X(t + u∆c ) > c − γ/c|X(t) = c − z/c}c−1 φ(c − z/c)dz ez P{Wt (u) > z − γ}dz ≤ h(y)Ψ(c), where h(y) → 0 as y → ∞, establishing (B3). 206 To show (B5) holds, note that P{X(t) > c, X(t + u∆c ) > c} ≤ P{X(t) + X(t + u∆c ) > 2c} 1/2 2c2 1 + ρ(t, t + u∆c ) ∼Ψ c2 1 + ρ(t, t + u∆c ) 1/2 c2 + exp − 2 1 + ρ(t, t + u∆c ) 2 = Ψ(c) c2 1 − ρ(t, t + u∆c ) ≤ Ψ(c) exp − 2 2 . By (8.1.5), there exists η > 0 such that c2 [1 − ρ(t, t + u∆c )] ≥ η( d p α i=1 |ui | i ) L( d p i=1 |ui | i ) for all t, t + u∆c ∈ [D]δ . Hence (B5) holds with f (y) = Bλ exp(−y λ ) with 0 < λ < α, for some Bλ > 0. p ζ/2 Finally we turn to (B4). Let a > 0, a = {a1/p1 , · · · , a1/pd }, 0 < ζ < α, 1 < ξ < 2 i0 , κ= ∞ −r r=0 ξ and wr = ξ −r /(2κ). Define Br = {t + 2−r ak∆c : 0 ≤ ki < 2r , k ∈ Zd }, F = sup X(u) > c , u∈It,a∆c E−1 = {X(t) ≤ c − γ/c}, Er = sup X(v) ≤ c − γ(1 − w0 − · · · − wr )/c) for r ≥ 0, v∈Br recalling that ∞ r=0 wr = 1/2. Note that Br ⊂ Br+1 ⊂ It,a∆c and that by the continuity of 207 ∞ r=0 P(Er−1 X, P(F ∩ E−1 ) ≤ c ∩ Er ). 
Moreover, c P(Er−1 ∩ Er ) ≤ 2r+d P{X(v) ≤ c − γ(1 − w0 − · · · − wr−1 )/c), sup v∈It,a∆c ,ε∈{0,1}d \{0} (8.3.3) X(v + ε2−r a∆c ) > c − γ(1 − w0 − · · · − wr )/c}. Given X(v) = c − y/c, the conditional distribution of c[X(v + ε2−r a∆c ) − X(v)] is normal with mean −c(c − y/c)[1 − ρ(v, v + ε2−r a∆c )] < 0 and variance c2 [1 − ρ2 (v, v + ε2−r a∆c )], d −rpi )ζ i=1 2 which is bounded by B(a sup P for some B > 0, in view of (8.1.5). Hence c[X(v + ε2−r a∆c ) − X(v)] > wr y|X(v) = c − y/c ε∈{0,1}d d ≤ 2d exp − C(wr y)2 ζ 2−rpi a (8.3.4) i=1 −rpi 0 ≤ 2d exp − C(wr y)2 ζ da2 for some C > 0. Let η p ζ 2 i0 /ξ 2 > 1. Combining (8.3.3) and (8.3.4) with fact P{X(v) ∈ c − y/c} ∼ Ψ(c)ey dy then yields P(F ∩ E−1 ) ∞ 2−r ≤ (1 + o(1))Ψ(c) r=0 (8.3.5) ∞ exp[y − Cη r y 2 /(4aζ dζ κ2 ) + C r]dy γ/2 for some C > 0. Let γa = aζ/3 . Then for large c and γa ≤ γ ≤ c, (8.3.5) is bounded above 208 by Ψ(c)Na (γ), where ∞ ∞ 2−r Na (γ) = 2 exp[y − Cη r y 2 /(4aζ dζ κ2 ) + C r]dy γ/2 r=0 ∞ satisfies Na (γa ) + 1 ω s Na (γa + ω)dω = o(al ) for all s > 0 and l > 0. 8.4 Example: Standardized Fractional Brownian Sheet For a given vector H = (H1 , · · · , Hd ) ∈ (0, 1)d , a d-fractional Brownian sheet B H = {B H (t) : t ∈ Rd } with Hurst index H is a real-valued, centered Gaussian field with covariance function given by d E(B H (t)B H (s)) = i=1 1 |ti |2Hi + |si |2Hi − |ti − si |2Hi , 2 t, s ∈ Rd . ¯ Let D ⊂ Rd such that D having no intersection with any coordinate, define the standardized field B H (t) X(t) = Var(B H (t)) , t ∈ D. It follows that d E(X(t)X(s)) = |ti |2Hi + |si |2Hi − |ti − si |2Hi , 2|ti si |Hi i=1 and hence 1 E(X(t)X(t + u)) = 1 − (1 + o(1)) 2 209 d i=1 ui 2Hi , ti (8.4.1) as u → 0, uniformly over t ∈ [D]δ . Hence (8.1.5) is satisfied with pi = 2Hi for 1 ≤ i ≤ d, α = 1, L( d 2H i=1 |ui | i ) ≡ 1/2 and |u1 |2H1 rt d 2H i=1 |ui | i ,··· , d |ud |2Hd |ui = d 2H i=1 |ui | i −1 d |2Hi i=1 i=1 ui 2Hi . ti In other word, d rt (v) = vi |t |2Hi i=1 i t ∈ D, v ∈ S. , Applying Theorem 8.1.1, we obtain P sup X(t) > c t∈D c2 ∼ Ψ(c) 2 1 d i=1 2Hi H(t)dt, (8.4.2) D where ∞ H(t) = lim K −d K→∞ 0 ey P sup 0≤ui ≤K,∀i Wt (u) > y dy, and {Wt (u) : u ∈ [0, ∞)d } is a Gaussian random field such that Wt (0) = 0 and d E(Wt (u)) = − Cov(Wt (u), Wt (v)) = ui 2Hi , ti i=1 2H d ui i 2H + ui i − |ui − vi |2Hi |ti |2Hi i=1 . By similar discussions in Lemma 7.2.3, we obtain further that P sup X(t) > c t∈D 1 d i=1 2Hi c2 ∼ Ψ(c) 2 d H D 210 1 |t |2Hi i=1 i dt, (8.4.3) where H is the Pickands’ constant defined by ∞ H = lim K −d K→∞ 0 ey P sup W (u) > y dy, 0≤ui ≤K,∀i and {W (u) : u ∈ [0, ∞)d } is a Gaussian random field such that W (0) = 0 and d 2H ui i , E(W (u)) = − i=1 d 2H 2H ui i + ui i − |ui − vi |2Hi . Cov(W (u), W (v)) = i=1 Especially, when H = (1/2, · · · , 1/2), then H = 1 and thus d P sup X(t) > c ∼ Ψ(c)2−d c2d D t∈D −1 |ti | dt. (8.4.4) i=1 This result is very similar to Example 2.2 in Chan and Lai (2006). It is worth mentioning here that we may also apply Piterbarg’s result to get the approximation. Due to the covariance structure (8.4.1), applying Theorem 7.1 on Page 108 in Piterbarg (1996a), we obtain d P sup X(t) > c d 1 D t∈D 1 2 2Hi |ti |2Hi ∼ Ψ(c)c i=1 Hi H −1 dt, i=1 which is the same as (8.4.3). Remark 8.4.1 In certain sense, Theorem 8.1.1 generalize Theorem 7.1 on Page 108 in 211 Piterbarg (1996a), since the latter one is the case that t ∈ D, v ∈ S, rt (v) = Ct v, where Ct is some nondegenerate d × d matrix. 
8.5 Example: Standardized Random String Processes We study an anisotropic Gaussian field which is the solution to a stochastic partial differential equation in Mueller and Tribe (2002). We write the original process {Ut (x) : t ≥ 0, x ∈ R} in Mueller and Tribe (2002) as {U (t) : t1 ≥ 0, t2 ∈ R}. Then it is a centered Gaussian field with stationary increments and U (0) = 0. It has the following covariance structure: for t2 , s2 ∈ R, t1 = s1 ≥ 0, E{(U (t) − U (s))2 )} = |t2 − s2 |, and for t2 , s2 ∈ R, t1 > s1 ≥ 0, E{(U (t) − U (s))2 )} = √ |t − s2 | t1 − s1 F √2 , t1 − s1 where F (a) = (2π)−1/2 + 1 G(a − z)G(a − z )(|z| + |z | − |z − z |)dzdz 2 R R = −(2π)−1/2 + G(a − z)|z|dz R a = −(2π)−1/2 + 4G(a) + 2a G(z)dz, 0 212 2 x and G(x) = √1 e− 4 . Now we define the standardized field 4π X(t) = U (t) Var(U (t)) t ∈ R+ × R. , (8.5.1) Note that Var(U (t)) = = √ √ |t | t1 F √2 t1 t2 t1 1 2 − 2 − √ + √ e 4t1 π 2π |t √2 | + |t2 | t1 0 1 − z2 √ e 4 dz, π and E(X(t)X(s)) = Var(U (t)) + Var(U (s)) − E{(U (t) − U (s))2 )} 2 Var(U (t))Var(U (s)) . Thus we obtain that as u → 0, E(X(t)X(t + u)) = 1 − (1 + o(1)) rt,1 |u2 | |u1 | |u1 | + rt,2 |u2 | |u1 | where rt,1 rt,2 |u2 | |u1 | |u2 | |u1 | 1 = 2Var(U (t)) u2 1 2 − 2 − √ + √ e 4|u1 | , π 2π |u2 | √ 2 1 1 |u1 | √ − z4 = e dz. 2Var(U (t)) 0 π 213 |u2 | , |u1 | + |u2 |) ≡ 1 and Hence (8.1.5) is satisfied with p1 = 1/2, p2 = 1, α = 1, L( |u1 | rt |u1 | + |u2 | = , 1 |u1 | + |u2 | |u2 | |u1 | + |u2 | rt,1 |u2 | |u1 | |u1 | + rt,2 |u2 | |u1 | |u2 | In other words, rt (v) = rt,1 v2 v v1 + rt,2 2 v2 , v1 v1 t ∈ D, v ∈ S. Hence we can apply Theorem 8.1.1 to get the approximation to the excursion probability. However, Piterbarg’s result is not applicable for such case. 214 Chapter 9 Vector-valued Smooth Gaussian Random Fields 9.1 Joint Excursion Probability Let {(X(t), Y (s)) : t ∈ T, s ∈ S} be an R2 -valued, centered, unit-variance Gaussian random field, where T and S are rectangles in RN . Let ρ(t, s) = E{X(t)Y (s)}, ρ(T, S) = sup E{X(t)Y (s)}. t∈T,s∈S We will make use of the following conditions. (C1). X, Y ∈ C 2 (RN ) almost surely and their second derivatives satisfy the uniform meansquare H¨lder condition: there exist constants L, η > 0 such that o E(Xij (t) − Xij (t ))2 ≤ L t − t 2η , ∀t, t ∈ T, i, j = 1, . . . , N, E(Yij (s) − Yij (s ))2 ≤ L s − s 2η , ∀s, s ∈ S, i, j = 1, . . . , N. (C2). For every (t, t , s) ∈ T 2 × S with t = t , the Gaussian vector (X(t), X(t), Xij (t), X(t ), X(t ), Xij (t ), Y (s), Y (s), Yij (s), 1 ≤ i ≤ j ≤ N ) 215 is non-degenerate; and for every (s, s , t) ∈ S 2 × T with s = s , the Gaussian vector (Y (s), Y (s), Yij (s), Y (s ), Y (s ), Yij (s ), X(t), X(t), Xij (t), 1 ≤ i ≤ j ≤ N ) is non-degenerate. (C3). For all (t, s) ∈ T × S such that ρ(t, s) = ρ(T, S), (E{Xij (t)Y (s)})i,j∈ζ(t,s) , (E{X(t)Yi j (s)})i ,j ∈ζ (t,s) are both negative semidefinite, where ζ(t, s) = {n : E{Xn (t)Y (s)} = 0, 1 ≤ n ≤ N }, ζ (t, s) = {n : E{X(t)Yn (s)} = 0, 1 ≤ n ≤ N }. Remark 9.1.1 Note that ∂ρ (t, s) = E{Xi (t)Y (s)}, ∂ti ∂ 2ρ (t, s) = E{Xij (t)Y (s)}, ∂ti ∂tj ∂ρ (t, s) = E{X(t)Yi (s)}, ∂si ∂ 2ρ (t, s) = E{X(t)Yij (s)}. ∂si ∂sj Therefore, similarly to Remark 3.1.2, in order to verify (C3), it suffices to consider those points (t, s) ∈ T ×S such that t ∈ ∂k T with 0 ≤ k ≤ N −2 or s ∈ ∂k S with 0 ≤ k ≤ N −2. We decompose T and S into several faces as N T = N ∂k T = k=0 N J, k=0 J∈∂k T S= l=0 216 N ∂l S = L. 
l=0 L∈∂l S For each J ∈ ∂k T and L ∈ ∂l S, define the number of extended outward maxima above level u as E Mu (X, J) := #{t ∈ J : X(t) ≥ u, X|J (t) = 0, index( 2 X|J (t)) = k, / ε∗ Xj (t) ≥ 0 for all j ∈ σ(J)}, j E Mu (Y, L) := #{s ∈ L : Y (s) ≥ u, Y|L (s) = 0, index( 2 Y|L (s)) = l, ε∗ Yj (s) ≥ 0 for all j ∈ σ(L)}; / j and define the number of maxima above level u as Mu (X, J) := #{t ∈ J : X(t) ≥ u, X|J (t) = 0, index( 2 X|J (t)) = k}, Mu (Y, L) := #{s ∈ L : Y (s) ≥ u, Y|L (s) = 0, index( 2 Y|L (s)) = l}. Similarly to Lemma 2.3.1, we have the following result. Lemma 9.1.2 Under (C1) and (C2), the following relation holds for each u > 0: N E E {Mu (X, J) ≥ 1, Mu (Y, L) ≥ 1} a.s. sup X(t) ≥ u, sup Y (s) ≥ u = t∈T s∈S k,l=0 J∈∂k T,L∈∂l S It follows from Lemma 9.1.2 that N E E P{Mu (X, J) ≥ 1, Mu (Y, L) ≥ 1} P sup X(t) ≥ u, sup Y (s) ≥ u ≤ t∈T s∈S k,l=0 J∈∂k T,L∈∂l S N E E E{Mu (X, J)Mu (Y, L)}. ≤ k,l=0 J∈∂k T,L∈∂l S (9.1.1) 217 On the other hand, by the Bonferroni inequality, Lemma 9.1.2 implies N E E P{Mu (X, J) ≥ 1, Mu (Y, L) ≥ 1} P sup X(t) ≥ u, sup Y (s) ≥ u ≥ t∈T s∈S k,l=0 J∈∂k T,L∈∂l S N E E E P{Mu (X, J) ≥ 1, Mu (J , X) ≥ 1, Mu (Y, L) ≥ 1} − k,l=0 J,J ∈∂ T,J=J k L∈∂l S N E E E P{Mu (X, J) ≥ 1, Mu (Y, L) ≥ 1, Mu (L , Y ) ≥ 1} − k,l=0 J∈∂k T L,L ∈∂l S,L=L N E E E E P{Mu (X, J) ≥ 1, Mu (J , X) ≥ 1, Mu (Y, L) ≥ 1, Mu (L , Y ) ≥ 1}. − k,l=0 J,J ∈∂ T,J=J k L,L ∈∂l S,L=L E E E E Let pij = P{Mu (X, J) = i, Mu (Y, L) = j}, then P{Mu (X, J) ≥ 1, Mu (Y, L) ≥ 1} = ∞ i,j=1 pij E E and E{Mu (X, J)Mu (Y, L)} = ∞ i,j=1 ijpij , and hence E E E E E{Mu (X, J)Mu (Y, L)} − P{Mu (X, J) ≥ 1, Mu (Y, L) ≥ 1} ∞ ∞ = (ij − 1)pij ≤ i,j=1 [i(i − 1)j + j(j − 1)i]pij i,j=1 E E E E E E = E{Mu (X, J)[Mu (X, J) − 1]Mu (Y, L)} + E{Mu (Y, L)[Mu (Y, L) − 1]Mu (X, J)}. 218 We therefore obtain the following lower bound for the excursion probability, N E E E{Mu (X, J)Mu (Y, L)} P sup X(t) ≥ u, sup Y (s) ≥ u ≥ t∈T s∈S k,l=0 J∈∂k T,L∈∂l S E E E E E E − E{Mu (X, J)[Mu (X, J) − 1]Mu (Y, L)} − E{Mu (Y, L)[Mu (Y, L) − 1]Mu (X, J)} N E E E E{Mu (X, J)Mu (X, J )Mu (Y, L)} − k,l=0 J,J ∈∂ T,J=J k L∈∂l S N E E E E{Mu (X, J)Mu (Y, L)Mu (Y, L )}. −2 k,l=0 J∈∂k T L,L ∈∂l S,L=L (9.1.2) We will show that the upper bound in (9.1.1) makes the major contribution and the other terms in the lower bound in (9.1.2) are super-exponentially small. Lemma 9.1.3 Let Di be compact sets in RN , i = 1, 2, 3. Let {(ξ1 (x1 ), ξ2 (x2 ), ξ3 (x3 )) : (x1 , x2 , x3 ) ∈ D1 × D2 × D3 } be an R3 -valued, C 2 , centered, unit-variance, non-degenerate Gaussian random field and let ρ12 (x1 , x2 ) = Eξ1 (x1 )ξ2 (x2 ), ρ12 = ρ13 (x1 , x3 ) = Eξ1 (x1 )ξ3 (x3 ), ρ13 = ρ23 (x2 , x3 ) = Eξ2 (x2 )ξ3 (x3 ), ρ23 = 219 sup x1 ∈D1 ,x2 ∈D2 sup x1 ∈D1 ,x3 ∈D3 sup x2 ∈D2 ,x3 ∈D3 ρ12 (x1 , x2 ), ρ13 (x1 , x3 ), ρ23 (x2 , x3 ). If ρ12 ≥ ρ13 ∨ ρ23 , then there exists some constant α > 0 such that as u → ∞, sup x1 ∈D1 ,x2 ∈D2 ,x3 ∈D3 = o exp E{|ξ1 (x1 )ξ2 (x2 )ξ3 (x3 )|m 1{ξ (x )≥u,ξ (x )≥u,ξ (x )≥u} } 1 1 2 2 3 3 u2 − αu2 − 1 + ρ12 (9.1.3) , where m is a fixed positive number. Proof Let ξ(x1 , x2 , x3 ) = [ξ1 (x1 ) + ξ2 (x2 ) + ξ3 (x3 )]/3, then there exists a positive number m such that for all (x1 , x2 , x3 ) ∈ D1 × D2 × D3 and u large enough, E{|ξ1 (x1 )ξ2 (x2 )ξ3 (x3 )|m 1{ξ (x )≥u,ξ (x )≥u,ξ (x )≥u} } 1 1 2 2 3 3 ≤ E{(ξ1 (x1 ) + ξ2 (x2 ) + ξ3 (x3 ))m 1{ξ (x )≥u,ξ (x )≥u,ξ (x )≥u} } 1 1 2 2 3 3 ≤ E{(ξ1 (x1 ) + ξ2 (x2 ) + ξ3 (x3 ))m (9.1.4) 1{[ξ1 (x1 )+ξ2 (x2 )+ξ3 (x3 )]/3≥u} } = E{(3ξ(x1 , x2 , x3 ))m 1{ξ(x ,x ,x )≥u} }. 
1 2 3 It follows from the assumption ρ12 ≥ ρ13 ∨ ρ23 that sup x1 ∈D1 ,x2 ∈D2 ,x3 ∈D3 Var(ξ(x1 , x2 , x3 )) 3 + 2[ρ12 (x1 , x2 )) + ρ13 (x1 , x3 ) + ρ23 (x2 , x3 )] 9 x1 ∈D1 ,x2 ∈D2 ,x3 ∈D3 3 + 6ρ12 1 + 2ρ12 ≤ = , 9 3 = sup and hence ρ12 ∈ (−1/2, 1). Combining this with (9.1.4), we see that for any ε > 0, as 2 3u u → ∞, the first line in (9.1.3) is o(exp{εu2 − 2(1+2ρ ) }). Now the result follows by taking 12 220 α to be a positive number less than 3 1 1 − ρ12 − = . 2(1 + 2ρ12 ) 1 + ρ12 2(1 + ρ12 )(1 + 2ρ12 ) Lemma 9.1.4 Let D1 , . . . , Dn be compact sets in RN , where n ≥ 3, and let (ξ1 (x1 ), ξ2 (x2 ), ξ3 (x3 ), . . . , ξn (xn ) : xi ∈ Di , i = 1 . . . , n) be an Rn -valued, C 2 , centered, unit-variance, non-degenerate Gaussian random vector. Let m be a fixed positive number and ρ12 (x1 , x2 ) = E{ξ1 (x1 )ξ2 (x2 )}, ρ12 = sup x1 ∈D1 ,x2 ∈D2 ρ12 (x1 , x2 ). Then lim u−2 log E{|ξ1 (x1 )ξ2 (x2 )|m 1{ξ (x )≥u,ξ (x )≥u} |ξ3 (x3 ) = · · · = ξn (xn ) = 0} 1 1 2 2 1 . ≤− 1 + ρ12 (x1 , x2 ) u→∞ If {(x1 , . . . , xn ) ∈ D1 × · · · × Dn : (9.1.5) ρ12 (x1 , x2 ) = ρ12 , E{(ξ1 (x1 ) + ξ2 (x2 ))ξi (xi )} = 0, ∀i = 3, . . . , n} = ∅, 221 then there exists some constant α > 0 such that as u → ∞, sup xi ∈Di ,i=1,...,n = o exp Proof E{|ξ1 (x1 )ξ2 (x2 )|m 1{ξ (x )≥u,ξ (x )≥u} |ξ3 (x3 ) = · · · = ξn (xn ) = 0} 1 1 2 2 − αu2 − u2 1 + ρ12 . Let ξ(x1 , x2 ) = [ξ1 (x1 ) + ξ2 (x2 )]/2, then there exists a positive number m such that for all xi ∈ Di , i = 1, . . . , n and u large enough, E{|ξ1 (x1 )ξ2 (x2 )|m 1{ξ (x )≥u,ξ (x )≥u} |ξ3 (x3 ) = · · · = ξn (xn ) = 0} 1 1 2 2 ≤ E{[(ξ1 (x1 ) + ξ2 (x2 ))/2]m 1{[ξ (x )+ξ (x )]/2≥u} |ξ3 (x3 ) = · · · = ξn (xn ) = 0} 1 1 2 2 = E{(ξ(x1 , x2 ))m 1{ξ(x ,x )≥u} |ξ3 (x3 ) = · · · = ξn (xn ) = 0}. 1 2 Note that Var(ξ(x1 , x2 )|ξ3 (x3 ) = · · · = ξn (xn ) = 0) ≤ Var(ξ(x1 , x2 )) = 1 + ρ12 (x1 , x2 ) , 2 where the equality holds if and only if ξ(x1 , x2 ) is independent of (ξ3 (x3 ), . . . , ξn (xn )). Now our result follows from the continuity of the conditional expectation and the compactness of Di , i = 1, . . . , n. The following result is similar to Lemma 3 in Piterbarg (1996b). Lemma 9.1.5 Let (X, Y ) = {(X(t), Y (s)) : t ∈ K ⊂ RN , s ∈ D ⊂ RN } be an R2 -valued, centered, unit-variance Gaussian random field satisfying (C1) and (C2). Then for any ε > 0, 222 there exists δ > 0 such that for K with diam(K) ≤ δ and u large enough, E{Mu (X, K)[Mu (X, K) − 1]Mu (Y, D)} ≤ Vol(K) exp − u2 + εu2 , 2βX (K, D) where βX (K, D) = sup Var t∈K,s∈D,e∈SN −1 X(t) + Y (s) 2 X(t)= Y (s)=0, 2 X(t)e=0 . Similarly, for any ε > 0, there exists δ > 0 such that for D with diam(D) ≤ δ and u large enough, E{Mu (X, K)Mu (Y, D)[Mu (Y, D) − 1]} ≤ Vol(D) exp − u2 + εu2 , 2βY (K, D) where βY (K, D) = Proof sup Var t∈K,s∈D,e∈SN −1 X(t) + Y (s) 2 X(t)= Y (s)=0, 2 Y (s)e=0 . The proof will be similar to the original proof of Lemma 3 in Piterbarg (1996b). The only difference is that the integral here involves both X and Y exceeding u. But we may apply the arguments for proving Lemma 9.1.3 and Lemma 9.1.4 to handle the double integral, to make it bounded above by the integral of (X + Y )/2 exceeding u, and then the desired result follows. Lemma 9.1.6 Let (X, Y ) = {(X(t), Y (s)) : t ∈ J ⊂ RN , s ∈ L ⊂ RN } be an R2 -valued, centered, unit-variance Gaussian random field satisfying (C1), (C2) and (C3). 
Then there 223 exists α > 0 such that as u → ∞, E{Mu (X, J)[Mu (X, J) − 1]Mu (Y, L)} = o exp − u2 − αu2 1 + ρ(J, L) , E{Mu (X, J)Mu (Y, L)[Mu (Y, L) − 1]} = o exp u2 − − αu2 1 + ρ(J, L) , (9.1.6) where ρ(J, L) = supt∈J,s∈L ρ(t, s). Proof We only prove the first line in (9.1.6), since the proof for the second line is the same. The set J may be covered by congruent cubes Ki with disjoint interiors, edges parallel to coordinate axes and sizes so small that the conditions of Lemma 9.1.5 are satisfied for each union of two neighboring cubes Ki and Kj . Then E{Mu (X, J)[Mu (X, J) − 1]Mu (Y, L)} ≤E [Mu (X, Kj ) − 1] Mu (Y, L) Mu (X, Ki ) i =E j Mu (X, Kj ) − Mu (X, Ki ) i j i E{Mu (X, Ki )2 Mu (Y, L)} + = i Mu (X, Ki ) Mu (Y, L) E{Mu (X, Ki )Mu (X, Kj )Mu (Y, L)}− i=j − E{Mu (X, Ki )Mu (Y, L)} i E{Mu (X, Ki )[Mu (X, Ki ) − 1]Mu (Y, L)} + = i E{Mu (X, Ki )Mu (X, Kj )Mu (Y, L)}. i=j (9.1.7) 224 Then by the Kac-Rice formula and Lemma 9.1.3, there exists α > 0 such that for u large enough, E{Mu (X, Ki )Mu (X, Kj )Mu (Y, L)} ≤ exp − |i−j|≥2 u2 − α u2 . 1 + ρ(J, L) (9.1.8) If Ki and Kj are neighboring, say j = i + 1, we have E{Mu (X, Ki ∪ Ki+1 )[Mu (X, Ki ∪ Ki+1 ) − 1]Mu (Y, L)} = E{[Mu (X, Ki ) + Mu (X, Ki+1 )][Mu (X, Ki ) + Mu (X, Ki+1 ) − 1]Mu (Y, L)} = 2E{Mu (X, Ki )Mu (X, Ki+1 )Mu (Y, L)} + E{Mu (X, Ki )[Mu (X, Ki ) − 1]Mu (Y, L)} + E{Mu (X, Ki+1 )[Mu (X, Ki+1 ) − 1]Mu (Y, L)}. Applying Lemma 9.1.5, we see that for u large enough, E{Mu (X, Ki )[Mu (X, Ki ) − 1]Mu (Y, L)} + i E{Mu (X, Ki )Mu (X, Kj )Mu (Y, L)} |i−j|=1 ≤ exp − u2 + εu2 . 2βX (J, L) (9.1.9) It is obvious that βX (J, L) ≤ 1+ρ(J,L) , 2 and we will show βX (J, L) < 1 + ρ(J, L) . 2 225 (9.1.10) By the definition of βX (J, L) in Lemma 9.1.5, if βX (J, L) = 1+ρ(J,L) , 2 then due to the ¯ ¯ continuity, there are some (t, s) ∈ J × L and e ∈ SN −1 such that Var X(t) + Y (s) 2 X(t)= Y (s)=0, 2 X(t)e=0 = 1 + ρ(J, L) . 2 (9.1.11) This implies ρ(t, s) = ρ(J, L), E{X(t) Y (s)} = E{Y (s) X(t)} = 0. By (C3), E{Y (s) 2 X(t)} becomes negative semidefinite. But E{X(t) 2 X(t)} is always negative definite due to the constant variance, so that E{Y (s) 2 X(t)} + E{X(t) 2 X(t)} e = 0, ∀e ∈ SN −1 . This contradicts (9.1.11) and hence (9.1.10) holds. Plugging (9.1.8) and (9.1.9) into (9.1.7), we finish the proof. Lemma 9.1.7 Let {(X(t), Y (s)) : t ∈ T, s ∈ S} be an R2 -valued, centered, unit-variance Gaussian random field satisfying (C1), (C2) and (C3). Then there exists α > 0 such that as u → ∞, E E E E{Mu (X, J)Mu (X, J )Mu (Y, L)} = o exp − u2 − αu2 1 + ρ(J, L) , E E E E{Mu (X, J)Mu (Y, L)Mu (Y, L u2 − − αu2 1 + ρ(J, L) , )} = o exp where J and J are different faces of T , L and L are different faces of S. 226 (9.1.12) Proof We only prove the first line in (9.1.12), since the proof for the second line is the same. If the two faces J and J are not neighboring, by similar arguments in Lemma 2.3.6 and Lemma 9.1.3, it is straightforward to verify that the high moment in (9.1.12) is superexponentially small. Thus we turn to considering the case when J and J are neighboring, ¯ ¯ i.e., I := J ∩ J = ∅. Without loss of generality, assume σ(J) = {1, . . . , m, m + 1, . . . , k}, σ(J ) = {1, . . . , m, k + 1, . . . , k + k − m}, σ(L) = {1, . . . , l}, where 0 ≤ m ≤ k ≤ k ≤ N and k ≥ 1. If k = 0, we consider σ(J) = ∅ by convention. Under such assumption, J ∈ ∂k T , J ∈ ∂k T , dim(I) = m and L ∈ ∂l S. We assume also that all elements in ε(J) and ε(J ) are 1. We first consider the case k ≥ 1. 
Lemma 9.1.7 Let $\{(X(t),Y(s)) : t\in T, s\in S\}$ be an $\mathbb{R}^2$-valued, centered, unit-variance Gaussian random field satisfying (C1), (C2) and (C3). Then there exists $\alpha > 0$ such that as $u\to\infty$,

\[
\begin{split}
\mathbb{E}\{M_u^E(X,J)M_u^E(X,J')M_u^E(Y,L)\} &= o\Big(\exp\Big\{-\frac{u^2}{1+\rho(J,L)} - \alpha u^2\Big\}\Big),\\
\mathbb{E}\{M_u^E(X,J)M_u^E(Y,L)M_u^E(Y,L')\} &= o\Big(\exp\Big\{-\frac{u^2}{1+\rho(J,L)} - \alpha u^2\Big\}\Big),
\end{split} \tag{9.1.12}
\]

where $J$ and $J'$ are different faces of $T$, and $L$ and $L'$ are different faces of $S$.

Proof We only prove the first line in (9.1.12), since the proof for the second line is the same. If the two faces $J$ and $J'$ are not neighboring, then by arguments similar to those in Lemma 2.3.6 and Lemma 9.1.3, it is straightforward to verify that the moment in (9.1.12) is super-exponentially small. Thus we turn to the case when $J$ and $J'$ are neighboring, i.e., $I := \bar J\cap\bar J' \ne \emptyset$. Without loss of generality, assume

\[
\sigma(J) = \{1,\ldots,m,m+1,\ldots,k\}, \qquad \sigma(J') = \{1,\ldots,m,k+1,\ldots,k+k'-m\}, \qquad \sigma(L) = \{1,\ldots,l\},
\]

where $0\le m\le k\le k'\le N$ and $k'\ge 1$; if $k = 0$, we take $\sigma(J) = \emptyset$ by convention. Under this assumption, $J\in\partial_k T$, $J'\in\partial_{k'}T$, $\dim(I) = m$ and $L\in\partial_l S$. We assume also that all elements in $\varepsilon(J)$ and $\varepsilon(J')$ are 1. We first consider the case $k\ge 1$. By the Kac–Rice metatheorem,

\[
\begin{split}
&\mathbb{E}\{M_u^E(X,J)M_u^E(X,J')M_u^E(Y,L)\}\\
&\le \int_J dt\int_{J'}dt'\int_L ds\int_u^\infty dx\int_u^\infty dx'\int_u^\infty dy\int_0^\infty dz_{k+1}\cdots\int_0^\infty dz_{k+k'-m}\int_0^\infty dw_{m+1}\cdots\int_0^\infty dw_k\\
&\qquad \mathbb{E}\big\{|\det\nabla^2 X_{|J}(t)||\det\nabla^2 X_{|J'}(t')||\det\nabla^2 Y_{|L}(s)| \,\big|\, X(t) = x, X(t') = x', Y(s) = y,\\
&\qquad\quad \nabla X_{|J}(t) = 0, X_{k+1}(t) = z_{k+1},\ldots,X_{k+k'-m}(t) = z_{k+k'-m},\\
&\qquad\quad \nabla X_{|J'}(t') = 0, X_{m+1}(t') = w_{m+1},\ldots,X_k(t') = w_k,\ \nabla Y_{|L}(s) = 0\big\}\\
&\qquad \times p_{t,t',s}(x,x',y,0,z_{k+1},\ldots,z_{k+k'-m},0,w_{m+1},\ldots,w_k,0)\\
&=: \int_{J\times J'\times L} A(t,t',s)\,dt\,dt'\,ds,
\end{split} \tag{9.1.13}
\]

where $p_{t,t',s}(x,x',y,0,z_{k+1},\ldots,z_{k+k'-m},0,w_{m+1},\ldots,w_k,0)$ is the density of

\[
\big(X(t), X(t'), Y(s), \nabla X_{|J}(t), X_{k+1}(t),\ldots,X_{k+k'-m}(t), \nabla X_{|J'}(t'), X_{m+1}(t'),\ldots,X_k(t'), \nabla Y_{|L}(s)\big)
\]

evaluated at $(x,x',y,0,z_{k+1},\ldots,z_{k+k'-m},0,w_{m+1},\ldots,w_k,0)$.

Similarly to Lemma 3.1.7, by Lemma 9.1.4 and continuity, if

\[
I_0 := \big\{(t,s)\in\bar I\times S : \rho(t,s) = \rho(T,S),\ \mathbb{E}\{X_i(t)Y(s)\} = \mathbb{E}\{X(t)Y_j(s)\} = 0,\ \forall i = 1,\ldots,k+k'-m,\ j = 1,\ldots,l\big\} = \emptyset,
\]

then $\mathbb{E}\{M_u^E(X,J)M_u^E(X,J')M_u^E(Y,L)\}$ is super-exponentially small. Therefore, similarly to the proof of Lemma 3.1.8, we only need to consider the alternative case $I_0\ne\emptyset$. Define

\[
B(I_0,\delta) := \{(t,t',s)\in J\times J'\times S : d((t,s),I_0)\vee d((t',s),I_0) < \delta\},
\]

where $\delta$ is a small positive number to be specified. Then the difference between $\int_{J\times J'\times L}A(t,t',s)\,dt\,dt'\,ds$ and $\int_{B(I_0,\delta)}A(t,t',s)\,dt\,dt'\,ds$ is super-exponentially small. Hence we turn to estimating $\int_{B(I_0,\delta)}A(t,t',s)\,dt\,dt'\,ds$.

Due to (C3), we may choose $\delta$ small enough such that for all $(t,t',s)\in B(I_0,\delta)$, the matrices

\[
\Lambda_{J\cup J'}(t,s) = -\big(\mathbb{E}\{[X(t)+Y(s)]X_{ij}(t)\}\big)_{i,j=1,\ldots,k+k'-m}
\]

are positive definite.

Let $\{e_1,e_2,\ldots,e_N\}$ be the standard orthonormal basis of $\mathbb{R}^N$. For $t\in J$ and $t'\in J'$, let $e_{t,t'} = (t'-t)/\|t'-t\|$ and let $\alpha_i(t,t',s) = \langle e_i, \Lambda_{J\cup J'}(t,s)e_{t,t'}\rangle$; then

\[
\Lambda_{J\cup J'}(t,s)e_{t,t'} = \sum_{i=1}^N\langle e_i, \Lambda_{J\cup J'}(t,s)e_{t,t'}\rangle e_i = \sum_{i=1}^N\alpha_i(t,t',s)e_i. \tag{9.1.14}
\]

There exists some $\alpha_0 > 0$ such that

\[
\langle e_{t,t'}, \Lambda_{J\cup J'}(t,s)e_{t,t'}\rangle \ge \alpha_0 \tag{9.1.15}
\]

for all $t$ and $t'$. Since all elements in $\varepsilon(J)$ and $\varepsilon(J')$ are 1, we have the representation

\[
\begin{split}
t &= (t_1,\ldots,t_m,t_{m+1},\ldots,t_k,b_{k+1},\ldots,b_{k+k'-m},0,\ldots,0),\\
t' &= (t'_1,\ldots,t'_m,b_{m+1},\ldots,b_k,t'_{k+1},\ldots,t'_{k+k'-m},0,\ldots,0),
\end{split}
\]

where $t_i\in(a_i,b_i)$ for all $i\in\sigma(J)$ and $t'_j\in(a_j,b_j)$ for all $j\in\sigma(J')$. Therefore,

\[
\langle e_i, e_{t,t'}\rangle \ge 0\ \ \forall\, m+1\le i\le k, \qquad
\langle e_i, e_{t,t'}\rangle \le 0\ \ \forall\, k+1\le i\le k+k'-m, \qquad
\langle e_i, e_{t,t'}\rangle = 0\ \ \forall\, k+k'-m < i\le N. \tag{9.1.16}
\]

Let

\[
\begin{split}
D_i &= \{(t,t',s)\in B(I_0,\delta) : \alpha_i(t,t',s)\ge\beta_i\} \quad \text{if } m+1\le i\le k,\\
D_i &= \{(t,t',s)\in B(I_0,\delta) : \alpha_i(t,t',s)\le-\beta_i\} \quad \text{if } k+1\le i\le k+k'-m,\\
D_0 &= \Big\{(t,t',s)\in B(I_0,\delta) : \sum_{i=1}^m\alpha_i(t,t',s)\langle e_i, e_{t,t'}\rangle\ge\beta_0\Big\},
\end{split} \tag{9.1.17}
\]

where $\beta_0,\beta_{m+1},\ldots,\beta_{k+k'-m}$ are positive constants such that $\beta_0 + \sum_{i=m+1}^{k+k'-m}\beta_i < \alpha_0$. It follows from (9.1.16) and (9.1.17) that, if $(t,t',s)$ does not belong to any of $D_0, D_{m+1},\ldots,D_{k+k'-m}$, then by (9.1.14),

\[
\langle\Lambda_{J\cup J'}(t,s)e_{t,t'}, e_{t,t'}\rangle = \sum_{i=1}^N\alpha_i(t,t',s)\langle e_i, e_{t,t'}\rangle \le \beta_0 + \sum_{i=m+1}^{k+k'-m}\beta_i < \alpha_0,
\]

which contradicts (9.1.15). Thus $D_0\cup\big(\cup_{i=m+1}^{k+k'-m}D_i\big)$ is a covering of $B(I_0,\delta)$, and by (9.1.13),

\[
\mathbb{E}\{M_u^E(X,J)M_u^E(X,J')M_u^E(Y,L)\} \le \int_{D_0}A(t,t',s)\,dt\,dt'\,ds + \sum_{i=m+1}^{k+k'-m}\int_{D_i}A(t,t',s)\,dt\,dt'\,ds.
\]

We first show that $\int_{D_0}A(t,t',s)\,dt\,dt'\,ds$ is super-exponentially small. We have

\[
\begin{split}
\int_{D_0}A(t,t',s)\,dt\,dt'\,ds &\le \int_{D_0}dt\,dt'\,ds\int_u^\infty dx\int_u^\infty dy\, p_{\nabla X_{|J}(t),\nabla X_{|J'}(t'),\nabla Y_{|L}(s)}(0,0,0)\\
&\quad\times p_{X(t),Y(s)}(x,y\,|\,\nabla X_{|J}(t) = 0, \nabla X_{|J'}(t') = 0, \nabla Y_{|L}(s) = 0)\\
&\quad\times \mathbb{E}\big\{|\det\nabla^2 X_{|J}(t)||\det\nabla^2 X_{|J'}(t')||\det\nabla^2 Y_{|L}(s)| \,\big|\, X(t) = x, Y(s) = y,\\
&\qquad\qquad \nabla X_{|J}(t) = \nabla X_{|J'}(t') = \nabla Y_{|L}(s) = 0\big\}.
\end{split} \tag{9.1.18}
\]
Note that

\[
\begin{split}
&\mathrm{Var}\big(X(t)+Y(s)\,\big|\,\nabla X_{|J}(t),\nabla X_{|J'}(t'),\nabla Y_{|L}(s)\big)\\
&\le \mathrm{Var}\big(X(t)+Y(s)\,\big|\,X_1(t),\ldots,X_m(t),X_1(t'),\ldots,X_m(t')\big)\\
&= \mathrm{Var}\big(X(t)+Y(s)\,\big|\,X_1(t),\ldots,X_m(t),\\
&\qquad X_1(t)+\langle\nabla X_1(t),t'-t\rangle+\|t'-t\|^{1+\eta}Y^{(1)}_{t,t'},\ \ldots,\ X_m(t)+\langle\nabla X_m(t),t'-t\rangle+\|t'-t\|^{1+\eta}Y^{(m)}_{t,t'}\big)\\
&= \mathrm{Var}\big(X(t)+Y(s)\,\big|\,X_1(t),\ldots,X_m(t),\\
&\qquad \langle\nabla X_1(t),e_{t,t'}\rangle+\|t'-t\|^{\eta}Y^{(1)}_{t,t'},\ \ldots,\ \langle\nabla X_m(t),e_{t,t'}\rangle+\|t'-t\|^{\eta}Y^{(m)}_{t,t'}\big)\\
&\le \mathrm{Var}\big(X(t)+Y(s)\,\big|\,\langle\nabla X_1(t),e_{t,t'}\rangle+\|t'-t\|^{\eta}Y^{(1)}_{t,t'},\ \ldots,\ \langle\nabla X_m(t),e_{t,t'}\rangle+\|t'-t\|^{\eta}Y^{(m)}_{t,t'}\big)\\
&= \mathrm{Var}\big(X(t)+Y(s)\,\big|\,\langle\nabla X_1(t),e_{t,t'}\rangle,\ldots,\langle\nabla X_m(t),e_{t,t'}\rangle\big) + o(1),
\end{split} \tag{9.1.19}
\]

where the $Y^{(i)}_{t,t'}$ arise from the Taylor expansions of the $X_i(t')$ around $t$. Hence there exist constants $C_2 > 0$ and $\varepsilon_0 > 0$ such that for $\|t'-t\|$ sufficiently small,

\[
\mathrm{Var}\big((X(t)+Y(s))/2\,\big|\,\nabla X_{|J}(t),\nabla X_{|J'}(t'),\nabla Y_{|L}(s)\big)
\le \frac{\rho(T,S)+1}{2} - C_2\sum_{i=1}^m\alpha_i^2(t,t',s) + o(1) < \frac{\rho(T,S)+1}{2} - \varepsilon_0, \tag{9.1.20}
\]

where the last inequality is due to the fact that $(t,t',s)\in D_0$, together with the Cauchy–Schwarz inequality, implies

\[
\sum_{i=1}^m\alpha_i^2(t,t',s) \ge \sum_{i=1}^m\alpha_i^2(t,t',s)|\langle e_i,e_{t,t'}\rangle|^2
\ge \frac{1}{m}\Big(\sum_{i=1}^m\alpha_i(t,t',s)\langle e_{t,t'},e_i\rangle\Big)^2 \ge \frac{\beta_0^2}{m}.
\]

Similarly, we can use the techniques in the proof of Theorem 2.3.8 to show that each $\int_{D_i}A(t,t',s)\,dt\,dt'\,ds$, $i = m+1,\ldots,k+k'-m$, is super-exponentially small, completing the proof.

Now, the approximation obtained so far still contains the absolute values of determinants, which are hard to compute. However, we will show that removing these absolute values causes only an exponentially small difference, which yields the approximation based on the mean Euler characteristic of the excursion set.

Proposition 9.1.8 Let $\{(X(t),Y(s)) : t\in T, s\in S\}$ be an $\mathbb{R}^2$-valued, centered, unit-variance Gaussian random field satisfying (C1), (C2) and (C3). Then there exists $\alpha > 0$ such that for any $J\in\partial_k T$, $L\in\partial_l S$, as $u\to\infty$,

\[
\begin{split}
\mathbb{E}\{M_u^E(X,J)M_u^E(Y,L)\} &= (-1)^{k+l}\int_J\int_L\mathbb{E}\big\{\det\nabla^2 X_{|J}(t)\det\nabla^2 Y_{|L}(s)\,\mathbb{1}_{\{X(t)\ge u,\ \varepsilon_j^*X_j(t)\ge 0\ \forall j\notin\sigma(J)\}}\\
&\qquad\times\mathbb{1}_{\{Y(s)\ge u,\ \varepsilon_j^*Y_j(s)\ge 0\ \forall j\notin\sigma(L)\}}\,\big|\,\nabla X_{|J}(t) = \nabla Y_{|L}(s) = 0\big\}\\
&\qquad\times p_{\nabla X_{|J}(t),\nabla Y_{|L}(s)}(0,0)\,dt\,ds + o\Big(\exp\Big\{-\frac{u^2}{1+\rho(T,S)} - \alpha u^2\Big\}\Big),
\end{split} \tag{9.1.21}
\]

where $\rho(T,S) = \sup_{t\in T,\, s\in S}\rho(t,s)$.

Proof To simplify the exposition, let us consider the case $k = l = N$; the proof for the general case is similar. By the Kac–Rice formula,

\[
\begin{split}
&\mathbb{E}\{M_u^E(X,J)M_u^E(Y,L)\}\\
&= \int_J\int_L\mathbb{E}\big\{|\det\nabla^2 X(t)||\det\nabla^2 Y(s)|\,\mathbb{1}_{\{X(t)\ge u,\ \mathrm{index}(\nabla^2 X(t))=N\}}\mathbb{1}_{\{Y(s)\ge u,\ \mathrm{index}(\nabla^2 Y(s))=N\}}\,\big|\,\nabla X(t) = \nabla Y(s) = 0\big\}\\
&\qquad\times p_{\nabla X(t),\nabla Y(s)}(0,0)\,dt\,ds\\
&= (-1)^{N+N}\int_J\int_L p_{\nabla X(t),\nabla Y(s)}(0,0)\,dt\,ds\int_u^\infty\int_u^\infty\mathbb{E}\big\{\det\nabla^2 X(t)\det\nabla^2 Y(s)\,\mathbb{1}_{\{\mathrm{index}(\nabla^2 X(t))=N\}}\mathbb{1}_{\{\mathrm{index}(\nabla^2 Y(s))=N\}}\,\big|\\
&\qquad X(t) = x, Y(s) = y, \nabla X(t) = \nabla Y(s) = 0\big\}\,p_{X(t),Y(s)}(x,y\,|\,\nabla X(t) = \nabla Y(s) = 0)\,dx\,dy\\
&=: \int_J\int_L p_{\nabla X(t),\nabla Y(s)}(0,0)\,dt\,ds\int_u^\infty\int_u^\infty A(x,y,t,s)\,dx\,dy.
\end{split}
\]

Similarly to the proof of Lemma 3.1.5, define

\[
\begin{split}
O(J,L) &= \{(t,s)\in\bar J\times\bar L : \rho(t,s) = \rho(T,S),\ \mathbb{E}\{X(t)\nabla Y(s)\} = \mathbb{E}\{Y(s)\nabla X(t)\} = 0\},\\
U_\delta(J,L) &= \{(t,s)\in J\times L : d((t,s),O(J,L)) < \delta\},
\end{split}
\]

where $\delta$ is a small positive number to be specified. Then, similarly, we only need to estimate

\[
\int_{U_\delta(J,L)}p_{\nabla X(t),\nabla Y(s)}(0,0)\,dt\,ds\int_u^\infty\int_u^\infty A(x,y,t,s)\,dx\,dy.
\]

Due to (C3), we may choose $\delta$ small enough such that $\mathbb{E}\{[X(t)+Y(s)]\nabla^2 Y(s)\}$ and $\mathbb{E}\{[X(t)+Y(s)]\nabla^2 X(t)\}$ are negative definite for all $(t,s)\in U_\delta(J,L)$. Also note that as $\delta\to 0$, both $\mathbb{E}\{X(t)\nabla Y(s)\}$ and $\mathbb{E}\{Y(s)\nabla X(t)\}$ tend to 0; therefore,

\[
\begin{split}
\mathbb{E}\{X_{ij}(t)\,|\,X(t) = x, Y(s) = y, \nabla X(t) = \nabla Y(s) = 0\}
&= (1+o(1))\big(\mathbb{E}\{X_{ij}(t)X(t)\}, \mathbb{E}\{X_{ij}(t)Y(s)\}\big)
\begin{pmatrix}1 & \rho(T,S)\\ \rho(T,S) & 1\end{pmatrix}^{-1}
\begin{pmatrix}x\\ y\end{pmatrix}\\
&= (1+o(1))\frac{1}{1-\rho(T,S)^2}\big(\mathbb{E}\{X_{ij}(t)X(t)\}, \mathbb{E}\{X_{ij}(t)Y(s)\}\big)
\begin{pmatrix}x-\rho(T,S)y\\ y-\rho(T,S)x\end{pmatrix},
\end{split}
\]

and similarly,

\[
\mathbb{E}\{Y_{ij}(s)\,|\,X(t) = x, Y(s) = y, \nabla X(t) = \nabla Y(s) = 0\}
= (1+o(1))\frac{1}{1-\rho(T,S)^2}\big(\mathbb{E}\{Y_{ij}(s)X(t)\}, \mathbb{E}\{Y_{ij}(s)Y(s)\}\big)
\begin{pmatrix}x-\rho(T,S)y\\ y-\rho(T,S)x\end{pmatrix}.
\]
Note that $\mathbb{E}\{X(t)\nabla^2 X(t)\}$ and $\mathbb{E}\{Y(s)\nabla^2 Y(s)\}$ are both negative definite, while $\mathbb{E}\{X(t)\nabla^2 Y(s)\}$ and $\mathbb{E}\{Y(s)\nabla^2 X(t)\}$ are both negative semidefinite. Hence there exists $\varepsilon_0\in(0, 1-\rho(T,S))$ such that for $\delta$ small enough and all $(x,y)\in[u,\infty)^2$ with $(\varepsilon_0+\rho(T,S))y < x < (\varepsilon_0+\rho(T,S))^{-1}y$, the matrices

\[
\begin{split}
\Sigma_1(x,y) &:= \mathbb{E}\{\nabla^2 X(t)\,|\,X(t) = x, Y(s) = y, \nabla X(t) = \nabla Y(s) = 0\},\\
\Sigma_2(x,y) &:= \mathbb{E}\{\nabla^2 Y(s)\,|\,X(t) = x, Y(s) = y, \nabla X(t) = \nabla Y(s) = 0\}
\end{split}
\]

are both negative definite. Define

\[
\Delta_1(x,y) = \nabla^2 X(t) - \Sigma_1(x,y), \qquad \Delta_2(x,y) = \nabla^2 Y(s) - \Sigma_2(x,y).
\]

Then, conditioning on $(X(t) = x, Y(s) = y, \nabla X(t) = \nabla Y(s) = 0)$, $\Delta_1(x,y)$ and $\Delta_2(x,y)$ are both centered Gaussian random matrices. We write

\[
\int_u^\infty\int_u^\infty A(x,y,t,s)\,dx\,dy
= \int_u^\infty dy\int_{(\varepsilon_0+\rho(T,S))y}^{(\varepsilon_0+\rho(T,S))^{-1}y}A\,dx
+ \int_u^\infty dy\int_{(\varepsilon_0+\rho(T,S))^{-1}y}^\infty A\,dx
+ \int_u^\infty dx\int_{(\varepsilon_0+\rho(T,S))^{-1}x}^\infty A\,dy.
\]

Since $(\varepsilon_0+\rho(T,S))^{-1} > 1$, the last two integrals above are super-exponentially small. Meanwhile,

\[
\begin{split}
&\int_u^\infty dy\int_{(\varepsilon_0+\rho(T,S))y}^{(\varepsilon_0+\rho(T,S))^{-1}y}A(x,y,t,s)\,dx\\
&= \int_u^\infty dy\int_{(\varepsilon_0+\rho(T,S))y}^{(\varepsilon_0+\rho(T,S))^{-1}y}\mathbb{E}\big\{\det(\Delta_1(x,y)+\Sigma_1(x,y))\det(\Delta_2(x,y)+\Sigma_2(x,y))\\
&\qquad\times\mathbb{1}_{\{\mathrm{index}(\Delta_1(x,y)+\Sigma_1(x,y))=N\}}\mathbb{1}_{\{\mathrm{index}(\Delta_2(x,y)+\Sigma_2(x,y))=N\}}\,\big|\,X(t) = x, Y(s) = y, \nabla X(t) = \nabla Y(s) = 0\big\}\\
&\qquad\times p_{X(t),Y(s)}(x,y\,|\,\nabla X(t) = \nabla Y(s) = 0)\,dx.
\end{split}
\]

Using the same arguments as in the proof of Lemma 2.3.2, we see that removing the two indicator functions above causes only a super-exponentially small difference. Therefore, there exists $\alpha > 0$ such that for $u$ large enough,

\[
\begin{split}
\mathbb{E}\{M_u^E(X,J)M_u^E(Y,L)\} &= \int_J\int_L p_{\nabla X(t),\nabla Y(s)}(0,0)\,dt\,ds\int_u^\infty\int_u^\infty p_{X(t),Y(s)}(x,y\,|\,\nabla X(t) = \nabla Y(s) = 0)\\
&\qquad\times\mathbb{E}\{\det\nabla^2 X(t)\det\nabla^2 Y(s)\,|\,X(t) = x, Y(s) = y, \nabla X(t) = \nabla Y(s) = 0\}\,dx\,dy\\
&\quad + o\Big(\exp\Big\{-\frac{u^2}{1+\rho(T,S)} - \alpha u^2\Big\}\Big).
\end{split}
\]

Define the excursion sets

\[
A_u(X,T) = \{t\in T : X(t)\ge u\}, \qquad A_u(Y,S) = \{s\in S : Y(s)\ge u\},
\]

so that $A_u(X,T)\times A_u(Y,S) = \{(t,s)\in T\times S : X(t)\ge u, Y(s)\ge u\}$. Let

\[
\begin{split}
\mu_i(X,J) &:= \#\{t\in J : X(t)\ge u,\ \nabla X_{|J}(t) = 0,\ \mathrm{index}(\nabla^2 X_{|J}(t)) = i,\ \varepsilon_j^*X_j(t)\ge 0\ \text{for all } j\notin\sigma(J)\},\\
\mu_i(Y,L) &:= \#\{s\in L : Y(s)\ge u,\ \nabla Y_{|L}(s) = 0,\ \mathrm{index}(\nabla^2 Y_{|L}(s)) = i,\ \varepsilon_j^*Y_j(s)\ge 0\ \text{for all } j\notin\sigma(L)\},
\end{split}
\]

where $\varepsilon_j^* = 2\varepsilon_j - 1$ and the index of a matrix is defined as the number of its negative eigenvalues. Then it follows from the Morse theorem that the Euler characteristics of the excursion sets can be represented as

\[
\varphi(A_u(X,T)) = \sum_{k=0}^N(-1)^k\sum_{J\in\partial_k T}\sum_{i=0}^k(-1)^i\mu_i(X,J), \qquad
\varphi(A_u(Y,S)) = \sum_{l=0}^N(-1)^l\sum_{L\in\partial_l S}\sum_{i=0}^l(-1)^i\mu_i(Y,L).
\]

Since $\varphi(D_1\times D_2) = \varphi(D_1)\varphi(D_2)$ for two sets $D_1$ and $D_2$, we have

\[
\varphi(A_u(X,T)\times A_u(Y,S)) = \varphi(A_u(X,T))\,\varphi(A_u(Y,S))
= \sum_{k,l=0}^N(-1)^{k+l}\sum_{J\in\partial_k T,\, L\in\partial_l S}\Big(\sum_{i=0}^k(-1)^i\mu_i(X,J)\Big)\Big(\sum_{j=0}^l(-1)^j\mu_j(Y,L)\Big). \tag{9.1.22}
\]

Now we can state our result as follows.

Theorem 9.1.9 Let $\{(X(t),Y(s)) : t\in T, s\in S\}$ be an $\mathbb{R}^2$-valued, centered, unit-variance Gaussian random field satisfying (C1), (C2) and (C3). Then there exists $\alpha > 0$ such that as $u\to\infty$,

\[
\begin{split}
\mathbb{P}\Big\{\sup_{t\in T}X(t)\ge u,\ \sup_{s\in S}Y(s)\ge u\Big\}
&= \sum_{k,l=0}^N\sum_{J\in\partial_k T,\, L\in\partial_l S}\mathbb{E}\{M_u^E(X,J)M_u^E(Y,L)\} + o\Big(\exp\Big\{-\frac{u^2}{1+\rho(T,S)} - \alpha u^2\Big\}\Big)\\
&= \mathbb{E}\{\varphi(A_u(X,T)\times A_u(Y,S))\} + o\Big(\exp\Big\{-\frac{u^2}{1+\rho(T,S)} - \alpha u^2\Big\}\Big),
\end{split} \tag{9.1.23}
\]

where $\rho(T,S) = \sup_{t\in T,\, s\in S}\rho(t,s)$.

Proof The first equality in (9.1.23) follows by combining (9.1.1), (9.1.2), Lemma 9.1.6 and Lemma 9.1.7. The second equality in (9.1.23) follows from Proposition 9.1.8 and (9.1.22).
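As a numerical sanity check on Theorem 9.1.9, the following hedged Monte Carlo sketch estimates the joint excursion probability for the illustrative pair $X = W_1$, $Y = rW_1 + \sqrt{1-r^2}\,W_2$, with $W_1, W_2$ i.i.d. smooth stationary Gaussian processes on $[0,1]$, so that $\rho(T,S) = r$ is attained on the diagonal. All parameters (covariance, $r$, grid, sample size) are assumptions of the sketch; the observed decay of $\log\mathbb{P}$ is compared against the exponential term $-u^2/(1+r)$.

import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 101)
cov = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.3) ** 2)
chol = np.linalg.cholesky(cov + 1e-10 * np.eye(t.size))

r, n_paths = 0.5, 20000
for u in [1.5, 2.0, 2.5]:
    w1 = chol @ rng.standard_normal((t.size, n_paths))
    w2 = chol @ rng.standard_normal((t.size, n_paths))
    x, y = w1, r * w1 + np.sqrt(1.0 - r * r) * w2
    # joint excursion: both suprema exceed u (not necessarily at the same point)
    p = np.mean((x.max(axis=0) >= u) & (y.max(axis=0) >= u))
    print(f"u = {u}: log P ~ {np.log(p):+.2f}, -u^2/(1+r) = {-u * u / (1.0 + r):+.2f}")

The gap between the two columns reflects the polynomial prefactor that the full expected Euler characteristic in (9.1.23) supplies; only the exponential order is matched by $-u^2/(1+\rho(T,S))$.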
Example 9.1.10 Let $T = S = [0,1]$. Then

\[
\begin{split}
&\mathbb{P}\Big\{\sup_{t\in T}X(t)\ge u,\ \sup_{s\in S}Y(s)\ge u\Big\}\\
&= \mathbb{P}\{X(0)\ge u, Y(0)\ge u, X'(0)<0, Y'(0)<0\} + \mathbb{P}\{X(0)\ge u, Y(1)\ge u, X'(0)<0, Y'(1)>0\}\\
&\quad + \mathbb{P}\{X(1)\ge u, Y(0)\ge u, X'(1)>0, Y'(0)<0\} + \mathbb{P}\{X(1)\ge u, Y(1)\ge u, X'(1)>0, Y'(1)>0\}\\
&\quad + (-1)\int_0^1 p_{X'(t)}(0)\,dt\int_u^\infty\int_u^\infty\int_{-\infty}^0 p_{X(t),Y(0),Y'(0)}(x,y,z\,|\,X'(t)=0)\\
&\qquad\qquad\times\mathbb{E}\{X''(t)\,|\,X(t)=x, Y(0)=y, Y'(0)=z, X'(t)=0\}\,dx\,dy\,dz\\
&\quad + (-1)\int_0^1 p_{X'(t)}(0)\,dt\int_u^\infty\int_u^\infty\int_0^\infty p_{X(t),Y(1),Y'(1)}(x,y,z\,|\,X'(t)=0)\\
&\qquad\qquad\times\mathbb{E}\{X''(t)\,|\,X(t)=x, Y(1)=y, Y'(1)=z, X'(t)=0\}\,dx\,dy\,dz\\
&\quad + (-1)\int_0^1 p_{Y'(s)}(0)\,ds\int_u^\infty\int_u^\infty\int_{-\infty}^0 p_{X(0),Y(s),X'(0)}(x,y,z\,|\,Y'(s)=0)\\
&\qquad\qquad\times\mathbb{E}\{Y''(s)\,|\,X(0)=x, Y(s)=y, X'(0)=z, Y'(s)=0\}\,dx\,dy\,dz\\
&\quad + (-1)\int_0^1 p_{Y'(s)}(0)\,ds\int_u^\infty\int_u^\infty\int_0^\infty p_{X(1),Y(s),X'(1)}(x,y,z\,|\,Y'(s)=0)\\
&\qquad\qquad\times\mathbb{E}\{Y''(s)\,|\,X(1)=x, Y(s)=y, X'(1)=z, Y'(s)=0\}\,dx\,dy\,dz\\
&\quad + \int_0^1\int_0^1 p_{X'(t),Y'(s)}(0,0)\,dt\,ds\int_u^\infty\int_u^\infty p_{X(t),Y(s)}(x,y\,|\,X'(t)=Y'(s)=0)\\
&\qquad\qquad\times\mathbb{E}\{X''(t)Y''(s)\,|\,X(t)=x, Y(s)=y, X'(t)=Y'(s)=0\}\,dx\,dy\\
&\quad + o\Big(\exp\Big\{-\frac{u^2}{1+\rho(T,S)} - \alpha u^2\Big\}\Big).
\end{split}
\]

9.2 Vector-valued Gaussian Processes

Let $\{(X(t),Y(t)) : t\in T\}$ be an $\mathbb{R}^2$-valued, centered, unit-variance Gaussian process, where $T = [a,b]$ is a finite interval in $\mathbb{R}$. We want to estimate the probability

\[
\mathbb{P}\{\exists\, t\in T \text{ such that } X(t)\ge u, Y(t)\ge u\}.
\]

Let

\[
\rho(t) = \mathbb{E}\{X(t)Y(t)\}, \qquad \rho(T) = \sup_{t\in T}\rho(t).
\]

We will make use of the following conditions.

(D1). $X, Y\in C^2(\mathbb{R})$ almost surely, and for each pair $(t,s)\in T^2$ with $t\ne s$,

\[
(X(t), X'(t), X''(t), Y(t), Y'(t), Y''(t), X(s), X'(s), Y(s), Y'(s))
\]

is non-degenerate.

(D2). For all $t\in T$ such that $\rho(t) = \rho(T)$ (hence $\mathbb{E}\{X'(t)Y(t)\} + \mathbb{E}\{X(t)Y'(t)\} = 0$),

\[
\mathbb{E}\{X'(t)Y(t)\} = -\mathbb{E}\{X(t)Y'(t)\} \ne 0.
\]

Define

\[
\begin{split}
\mu(X,\mathring T) &= \#\{t\in\mathring T : Y(t) > X(t)\ge u,\ X'(t) = 0,\ X''(t) < 0\},\\
\mu(Y,\mathring T) &= \#\{t\in\mathring T : X(t) > Y(t)\ge u,\ Y'(t) = 0,\ Y''(t) < 0\},\\
\mu(X=Y,\mathring T) &= \#\{t\in\mathring T : X(t) = Y(t)\ge u,\ X'(t)Y'(t) < 0\},\\
\mu(X,a) &= \mathbb{1}_{\{Y(a)>X(a)\ge u,\ X'(a)<0\}}, \qquad \mu(Y,a) = \mathbb{1}_{\{X(a)>Y(a)\ge u,\ Y'(a)<0\}},\\
\mu(X,b) &= \mathbb{1}_{\{Y(b)>X(b)\ge u,\ X'(b)>0\}}, \qquad \mu(Y,b) = \mathbb{1}_{\{X(b)>Y(b)\ge u,\ Y'(b)>0\}}.
\end{split} \tag{9.2.1}
\]

Lemma 9.2.1 Under (D1), for each $u > 0$, we have

\[
\begin{split}
\{\exists\, t\in T \text{ such that } X(t)\ge u, Y(t)\ge u\}
&= \{\mu(X,\mathring T)\ge 1\}\cup\{\mu(Y,\mathring T)\ge 1\}\cup\{\mu(X=Y,\mathring T)\ge 1\}\\
&\quad \cup\{\mu(X,a)\ge 1\}\cup\{\mu(Y,a)\ge 1\}\cup\{\mu(X,b)\ge 1\}\cup\{\mu(Y,b)\ge 1\} \quad \text{a.s.}
\end{split}
\]

Proof Note that

\[
\{\exists\, t\in T \text{ such that } X(t)\ge u, Y(t)\ge u\} = \{\exists\, t\in T \text{ such that } (X\wedge Y)(t)\ge u\}.
\]

The result then follows similarly to Lemma 2.3.1.

Lemma 9.2.2 Under (D1) and (D2), there exists some constant $\alpha > 0$ such that as $u\to\infty$,

\[
\mathbb{E}\mu(X,\mathring T) = o\Big(\exp\Big\{-\frac{u^2}{1+\rho(T)} - \alpha u^2\Big\}\Big), \qquad
\mathbb{E}\mu(Y,\mathring T) = o\Big(\exp\Big\{-\frac{u^2}{1+\rho(T)} - \alpha u^2\Big\}\Big).
\]

Proof We only prove the bound for $\mathbb{E}\mu(X,\mathring T)$, since the proof for $\mathbb{E}\mu(Y,\mathring T)$ is the same. By the Kac–Rice formula,

\[
\mathbb{E}\mu(X,\mathring T) = \int_a^b p_{X'(t)}(0)\,dt\int_u^\infty dx\int_x^\infty dy\, p_{X(t),Y(t)}(x,y\,|\,X'(t)=0)\,
\mathbb{E}\{|X''(t)|\mathbb{1}_{\{X''(t)<0\}}\,|\,X(t)=x, Y(t)=y, X'(t)=0\}.
\]

We only need to show that $\mathbb{P}\{Y(t) > X(t)\ge u\,|\,X'(t)=0\}$ is super-exponentially small. But

\[
\mathbb{P}\{Y(t) > X(t)\ge u\,|\,X'(t)=0\} \le \mathbb{P}\{(X(t)+Y(t))/2\ge u\,|\,X'(t)=0\},
\]

and due to (D2), for each $t\in T$ such that $\rho(t) = \rho(T)$,

\[
\mathrm{Var}((X(t)+Y(t))/2\,|\,X'(t)=0) < \mathrm{Var}((X(t)+Y(t))/2) = (1+\rho(T))/2.
\]

By continuity, we obtain

\[
\sup_{t\in T}\mathrm{Var}((X(t)+Y(t))/2\,|\,X'(t)=0) < (1+\rho(T))/2,
\]

completing the proof.
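The counts in (9.2.1) are directly computable from discretized sample paths. The sketch below (a hedged illustration: finite differences replace derivatives, so all checks are approximate by construction, and the helper counts_921 is hypothetical) tallies them for simulated pairs and checks Lemma 9.2.1 empirically: the event $\{\exists\, t : X(t)\ge u, Y(t)\ge u\}$ should coincide with at least one count being positive.

import numpy as np

def counts_921(x, y, u, dt):
    """Discretized total of the seven counts in (9.2.1); approximate."""
    xp, yp = np.gradient(x, dt), np.gradient(y, dt)
    peak = lambda z: (z[1:-1] > z[:-2]) & (z[1:-1] > z[2:])
    mu_x = int((peak(x) & (y[1:-1] > x[1:-1]) & (x[1:-1] >= u)).sum())
    mu_y = int((peak(y) & (x[1:-1] > y[1:-1]) & (y[1:-1] >= u)).sum())
    d = x - y
    # sign changes of X - Y above u with slopes of opposite sign
    cross = (d[:-1] * d[1:] < 0) & (np.minimum(x, y)[:-1] >= u) & (xp[:-1] * yp[:-1] < 0)
    mu_xy = int(cross.sum())
    boundary = (int(y[0] > x[0] >= u and xp[0] < 0) + int(x[0] > y[0] >= u and yp[0] < 0)
                + int(y[-1] > x[-1] >= u and xp[-1] > 0) + int(x[-1] > y[-1] >= u and yp[-1] > 0))
    return mu_x + mu_y + mu_xy + boundary

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 401)
cov = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.25) ** 2)
chol = np.linalg.cholesky(cov + 1e-10 * np.eye(t.size))
r, u, agree, n = 0.6, 1.0, 0, 2000
for _ in range(n):
    w1, w2 = chol @ rng.standard_normal(t.size), chol @ rng.standard_normal(t.size)
    x, y = w1, r * w1 + np.sqrt(1.0 - r * r) * w2
    event = np.minimum(x, y).max() >= u
    agree += (event == (counts_921(x, y, u, t[1] - t[0]) >= 1))
print("agreement between the event and the counts:", agree / n)

On a fine enough grid the agreement fraction is close to 1, with the residual disagreements attributable to discretization, mirroring the almost-sure equality asserted by Lemma 9.2.1.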
Lemma 9.2.3 Under (D1) and (D2), there exists some constant $\alpha > 0$ such that as $u\to\infty$,

\[
\mathbb{E}\{\mu(X=Y,\mathring T)[\mu(X=Y,\mathring T)-1]\} = o\Big(\exp\Big\{-\frac{u^2}{1+\rho(T)} - \alpha u^2\Big\}\Big).
\]

Proof By the Kac–Rice formula,

\[
\begin{split}
&\mathbb{E}\{\mu(X=Y,\mathring T)[\mu(X=Y,\mathring T)-1]\}\\
&= \int_a^b\int_a^b p_{X(t)-Y(t),X(s)-Y(s)}(0,0)\,dt\,ds\int_u^\infty\int_u^\infty dx\,dy\, p_{X(t),X(s)}(x,y\,|\,X(t)-Y(t)=0, X(s)-Y(s)=0)\\
&\quad\times\mathbb{E}\big\{|X'(t)-Y'(t)||X'(s)-Y'(s)|\mathbb{1}_{\{X'(t)Y'(t)<0\}}\mathbb{1}_{\{X'(s)Y'(s)<0\}}\,\big|\,X(t)=x, X(s)=y, X(t)-Y(t)=0, X(s)-Y(s)=0\big\}.
\end{split}
\]

Similarly to the proof of Lemma 3 in Piterbarg (1996b), we write $T$ as the union of several small intervals; it then suffices to prove that there exists $\alpha > 0$ such that

\[
\mathrm{Var}(X(t)\,|\,X(t)-Y(t), X(s)-Y(s)) < \frac{1+\rho(T)}{2} - \alpha, \qquad \forall\,|t-s| < \delta, \tag{9.2.2}
\]

and

\[
\lim_{u\to\infty}u^{-2}\log\mathbb{P}\{X(t)\ge u, Y(s)\ge u\,|\,X(t)-Y(t)=0, X(s)-Y(s)=0\} < -\frac{1}{1+\rho(T)} - \alpha, \qquad \forall\,|t-s|\ge\delta, \tag{9.2.3}
\]

where $\delta$ is a small positive number to be specified.

Note that for all $t\in T$,

\[
\mathrm{Var}(X(t)\,|\,X(t)-Y(t)) = 1 - \frac{(1-\rho(t))^2}{2(1-\rho(t))} = \frac{1+\rho(t)}{2} \le \frac{1+\rho(T)}{2}.
\]

But for those $t$ such that $\rho(t) = \rho(T)$, we have $\mathbb{E}\{X(t)(X'(t)-Y'(t))\} = -\mathbb{E}\{X(t)Y'(t)\} \ne 0$ by (D2), and it then follows from continuity that

\[
\sup_{t\in T}\mathrm{Var}(X(t)\,|\,X(t)-Y(t), X'(t)-Y'(t)) < \frac{1+\rho(T)}{2}. \tag{9.2.4}
\]

Therefore (9.2.2) follows by noting that, as $|t-s|\to 0$,

\[
\mathrm{Var}(X(t)\,|\,X(t)-Y(t), X(s)-Y(s)) = (1+o(1))\,\mathrm{Var}(X(t)\,|\,X(t)-Y(t), X'(t)-Y'(t)).
\]

Now we turn to proving (9.2.3). By continuity, it suffices to show that there is no pair $(t,s)$ with $|t-s|\ge\delta$ such that

\[
\lim_{u\to\infty}u^{-2}\log\mathbb{P}\{X(t)\ge u, Y(s)\ge u\,|\,X(t)-Y(t)=0, X(s)-Y(s)=0\} = -\frac{1}{1+\rho(T)}. \tag{9.2.5}
\]

By (9.2.4) and the evident inequality

\[
\begin{split}
&\mathbb{P}\{X(t)\ge u, Y(s)\ge u\,|\,X(t)-Y(t)=0, X(s)-Y(s)=0\}\\
&\le \min\big(\mathbb{P}\{X(t)\ge u\,|\,X(t)-Y(t)=0, X(s)-Y(s)=0\},\ \mathbb{P}\{Y(s)\ge u\,|\,X(t)-Y(t)=0, X(s)-Y(s)=0\},\\
&\qquad\quad \mathbb{P}\{[X(t)+Y(s)]/2\ge u\,|\,X(t)-Y(t)=0, X(s)-Y(s)=0\}\big),
\end{split}
\]

if (9.2.5) holds, then we must have

\[
\begin{split}
&\rho(t) = \rho(s) = \rho(T),\\
&\mathbb{E}\{X(t)[X(s)-Y(s)]\} = \mathbb{E}\{Y(t)[X(s)-Y(s)]\} = 0,\\
&\mathbb{E}\{X(s)[X(t)-Y(t)]\} = \mathbb{E}\{Y(s)[X(t)-Y(t)]\} = 0,
\end{split} \tag{9.2.6}
\]

and

\[
\mathrm{Var}([X(t)+Y(s)]/2\,|\,X(t)-Y(t)=0, X(s)-Y(s)=0) = \frac{1+\rho(T)}{2}. \tag{9.2.7}
\]

But by the conditional formula for Gaussian variables, (9.2.6) implies

\[
\mathrm{Var}([X(t)+Y(s)]/2\,|\,X(t)-Y(t)=0, X(s)-Y(s)=0) = \frac{1+\mathbb{E}\{X(t)X(s)\}}{2} - \frac{1-\rho(T)}{4} < \frac{1+\rho(T)}{2},
\]

where the strict inequality holds because (9.2.6) forces the four cross-covariances $\mathbb{E}\{X(t)X(s)\}$, $\mathbb{E}\{X(t)Y(s)\}$, $\mathbb{E}\{Y(t)X(s)\}$, $\mathbb{E}\{Y(t)Y(s)\}$ to coincide, so that $\mathrm{Var}(X(t)+Y(t)-X(s)-Y(s)) = 4(1+\rho(T)) - 8\mathbb{E}\{X(t)X(s)\} > 0$ by the non-degeneracy in (D1), i.e., $\mathbb{E}\{X(t)X(s)\} < (1+\rho(T))/2$. This contradicts (9.2.7). Thus there is no pair $(t,s)$ with $|t-s|\ge\delta$ such that (9.2.5) holds, and hence (9.2.3) is true, completing the proof.
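The workhorse of the proof above (and of the next lemma) is the Gaussian conditioning formula $\mathrm{Var}(X\,|\,V) = \mathrm{Var}(X) - \mathrm{Cov}(X,V)\,\mathrm{Var}(V)^{-1}\mathrm{Cov}(V,X)$. The short numeric check below verifies the identity $\mathrm{Var}(X(t)\,|\,X(t)-Y(t)) = (1+\rho(t))/2$ used in (9.2.2)–(9.2.4) for a single unit-variance pair; cond_var is a hypothetical helper written for this illustration only.

import numpy as np

def cond_var(Sigma, i, cond_idx):
    """Conditional variance of component i of a centered Gaussian vector
    with covariance Sigma, given the components in cond_idx."""
    S = Sigma[np.ix_(cond_idx, cond_idx)]
    c = Sigma[i, cond_idx]
    return Sigma[i, i] - c @ np.linalg.solve(S, c)

for rho in [-0.3, 0.0, 0.5, 0.9]:
    # covariance of (X(t), X(t) - Y(t)) built from Var X = Var Y = 1, E{XY} = rho
    Sigma = np.array([[1.0, 1.0 - rho],
                      [1.0 - rho, 2.0 * (1.0 - rho)]])
    print(rho, cond_var(Sigma, 0, [1]), (1.0 + rho) / 2.0)

For every $\rho$ the two printed values agree, confirming that conditioning on the difference $X(t)-Y(t)$ alone can never push the variance below $(1+\rho(T))/2$; the strict gap needed in (9.2.4) comes only from the extra derivative information supplied by (D2).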
Lemma 9.2.4 Under (D1) and (D2), there exists some constant $\alpha > 0$ such that as $u\to\infty$,

\[
\max\big(\mathbb{E}\{\mu(X=Y,\mathring T)\mu(X,a)\},\ \mathbb{E}\{\mu(X=Y,\mathring T)\mu(Y,a)\},\ \mathbb{E}\{\mu(X=Y,\mathring T)\mu(X,b)\},\ \mathbb{E}\{\mu(X=Y,\mathring T)\mu(Y,b)\}\big)
= o\Big(\exp\Big\{-\frac{u^2}{1+\rho(T)} - \alpha u^2\Big\}\Big).
\]

Proof We only prove the bound for $\mathbb{E}\{\mu(X=Y,\mathring T)\mu(X,b)\}$, since the other terms can be treated similarly. By the Kac–Rice metatheorem,

\[
\mathbb{E}\{\mu(X=Y,\mathring T)\mu(X,b)\} = \int_a^b\mathbb{E}\big\{|X'(t)-Y'(t)|\mathbb{1}_{\{X(t)\ge u,\ X'(t)Y'(t)<0\}}\mathbb{1}_{\{Y(b)>X(b)\ge u,\ X'(b)>0\}}\,\big|\,X(t)-Y(t)=0\big\}\,p_{X(t)-Y(t)}(0)\,dt.
\]

We only need to show that there exists $\alpha > 0$ such that for $\delta$ small enough,

\[
\lim_{u\to\infty}u^{-2}\log\mathbb{P}\{X(t)\ge u,\ X(b)-Y(b)<0,\ X'(b)>0\,|\,X(t)-Y(t)=0\} < -\frac{1}{1+\rho(T)} - \alpha, \qquad \forall\,|t-b| < \delta, \tag{9.2.8}
\]

and

\[
\lim_{u\to\infty}u^{-2}\log\mathbb{P}\{X(t)\ge u,\ X(b)\ge u,\ Y(b)\ge u\,|\,X(t)-Y(t)=0\} < -\frac{1}{1+\rho(T)} - \alpha, \qquad \forall\,|t-b|\ge\delta. \tag{9.2.9}
\]

We show (9.2.8) first. Note that we only need to consider the case $\rho(b) = \rho(T)$. In this situation, by (D2), either $\mathbb{E}\{X'(b)Y(b)\} < 0$ or $\mathbb{E}\{X(b)Y'(b)\} < 0$. If $\mathbb{E}\{X'(b)Y(b)\} < 0$, then as $|t-b|\to 0$,

\[
\mathbb{E}\{X(t)X'(b)\,|\,X(t)-Y(t)=0\} = \mathbb{E}\{X(t)X'(b)\} - \frac{(1-\rho(t))\,\mathbb{E}\{X'(b)[X(t)-Y(t)]\}}{2(1-\rho(t))} = (1+o(1))\,\frac{\mathbb{E}\{X'(b)Y(b)\}}{2}.
\]

It then follows from Lemma 2.3.10 that

\[
\lim_{u\to\infty}u^{-2}\log\mathbb{P}\{X(t)\ge u,\ X'(b)>0\,|\,X(t)-Y(t)=0\} < -\frac{1}{1+\rho(T)} - \alpha, \qquad \forall\,|t-b| < \delta. \tag{9.2.10}
\]

For the alternative case, $\mathbb{E}\{X(b)Y'(b)\} < 0$, consider the Taylor expansion

\[
X(b)-Y(b) = X(t)-Y(t) + (b-t)(X'(t)-Y'(t)) + (b-t)^{1+\eta}Z_{t,b},
\]

where $\eta > 0$ and $Z_{t,b}$ is a Gaussian random field with uniformly bounded variance. Then as $|t-b|\to 0$,

\[
\mathbb{E}\{X(t)[X(b)-Y(b)]\,|\,X(t)-Y(t)=0\}
= \mathbb{E}\{X(t)[X(b)-Y(b)]\} - \frac{(1-\rho(t))\,\mathbb{E}\{[X(t)-Y(t)][X(b)-Y(b)]\}}{2(1-\rho(t))}
= -(1+o(1))(b-t)\,\mathbb{E}\{X(b)Y'(b)\},
\]

and also $\mathrm{Var}(X(b)-Y(b)\,|\,X(t)-Y(t))\le C_1(t-b)^2$ for some positive constant $C_1$. Therefore,

\[
\frac{\mathbb{E}\{X(t)[X(b)-Y(b)]\,|\,X(t)-Y(t)=0\}}{\sqrt{\mathrm{Var}(X(t)\,|\,X(t)-Y(t))\,\mathrm{Var}(X(b)-Y(b)\,|\,X(t)-Y(t))}} \ge -C_2\,\mathbb{E}\{X(b)Y'(b)\} > 0
\]

for some positive constant $C_2$. It then follows from Lemma 2.3.10 that

\[
\lim_{u\to\infty}u^{-2}\log\mathbb{P}\{X(t)\ge u,\ X(b)-Y(b)<0\,|\,X(t)-Y(t)=0\} < -\frac{1}{1+\rho(T)} - \alpha, \qquad \forall\,|t-b| < \delta. \tag{9.2.11}
\]

Now, (9.2.10) and (9.2.11) imply (9.2.8).

We turn to proving (9.2.9). Note that

\[
\mathbb{P}\{X(t)\ge u, X(b)\ge u, Y(b)\ge u\,|\,X(t)-Y(t)=0\} \le \mathbb{P}\{X(t)\ge u,\ [X(b)+Y(b)]/2\ge u\,|\,X(t)-Y(t)=0\},
\]

and

\[
\mathrm{Var}(X(t)\,|\,X(t)-Y(t)) \le \frac{1+\rho(T)}{2}, \qquad \mathrm{Var}([X(b)+Y(b)]/2\,|\,X(t)-Y(t)) \le \frac{1+\rho(T)}{2}.
\]

Due to the regularity condition (D1), we obtain

\[
\lim_{u\to\infty}u^{-2}\log\mathbb{P}\{X(t)\ge u,\ [X(b)+Y(b)]/2\ge u\,|\,X(t)-Y(t)=0\} < -\frac{1}{1+\rho(T)} - \alpha, \qquad \forall\,|t-b|\ge\delta,
\]

and then (9.2.9) follows.

Define the excursion set $A_u(T, X\wedge Y) = \{t\in T : (X\wedge Y)(t)\ge u\}$. Then the Morse theorem gives

\[
\varphi(A_u(T, X\wedge Y)) = \mu(X,\mathring T) - \mu'(X,\mathring T) + \mu(Y,\mathring T) - \mu'(Y,\mathring T) + \mu(X=Y,\mathring T) + \mu(X,a) + \mu(Y,a) + \mu(X,b) + \mu(Y,b), \tag{9.2.12}
\]

where

\[
\begin{split}
\mu'(X,\mathring T) &= \#\{t\in\mathring T : Y(t) > X(t)\ge u,\ X'(t) = 0,\ X''(t) > 0\},\\
\mu'(Y,\mathring T) &= \#\{t\in\mathring T : X(t) > Y(t)\ge u,\ Y'(t) = 0,\ Y''(t) > 0\},
\end{split}
\]

and the remaining terms on the right-hand side of (9.2.12) are defined in (9.2.1). Now we obtain the following mean Euler characteristic approximation.

Theorem 9.2.5 Let $\{(X(t),Y(t)) : t\in\mathbb{R}\}$ be an $\mathbb{R}^2$-valued, centered, unit-variance Gaussian process satisfying (D1) and (D2), and let $T = [a,b]$ be a closed finite interval in $\mathbb{R}$. Then there exists some constant $\alpha > 0$ such that as $u\to\infty$,

\[
\begin{split}
\mathbb{P}\{\exists\, t\in T \text{ such that } X(t)\ge u, Y(t)\ge u\}
&= \mathbb{E}\{\mu(X=Y,\mathring T)\} + \mathbb{E}\{\mu(X,a)\} + \mathbb{E}\{\mu(Y,a)\} + \mathbb{E}\{\mu(X,b)\} + \mathbb{E}\{\mu(Y,b)\}\\
&\quad + o\Big(\exp\Big\{-\frac{u^2}{1+\rho(T)} - \alpha u^2\Big\}\Big)\\
&= \mathbb{E}\{\varphi(A_u(T, X\wedge Y))\} + o\Big(\exp\Big\{-\frac{u^2}{1+\rho(T)} - \alpha u^2\Big\}\Big),
\end{split} \tag{9.2.13}
\]

where $\rho(T) = \sup_{t\in T}\rho(t)$.

Proof By Lemma 9.2.1, we can find upper and lower bounds for the excursion probability, similar to (2.3.1) and (2.3.2) respectively. Applying Lemma 9.2.2, Lemma 9.2.3 and Lemma 9.2.4 then yields the first equality in (9.2.13). The last line of (9.2.13) follows from (9.2.12).

Remark 9.2.6 Based on the proofs of Lemma 9.2.3 and Lemma 9.2.4, the term $\mathbb{E}\{\mu(X=Y,\mathring T)\}$ in the approximation (9.2.13) can be replaced by the simpler quantity $\mathbb{E}\{\mu^\circ(X=Y,\mathring T)\}$, where

\[
\mu^\circ(X=Y,\mathring T) = \#\{t\in\mathring T : X(t) = Y(t)\ge u\}.
\]

It follows from the Kac–Rice metatheorem that

\[
\mathbb{E}\{\mu^\circ(X=Y,\mathring T)\} = \int_a^b\mathbb{E}\{|X'(t)-Y'(t)|\mathbb{1}_{\{X(t)\ge u\}}\,|\,X(t)-Y(t)=0\}\,p_{X(t)-Y(t)}(0)\,dt.
\]
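As a numerical illustration of Theorem 9.2.5 and Remark 9.2.6, the hedged Monte Carlo sketch below estimates both sides of (9.2.13) from the same simulated paths: on a grid, the excursion set of $X\wedge Y$ is a union of intervals, so its Euler characteristic is simply the number of runs of grid points above $u$. The model $X = W_1$, $Y = rW_1 + \sqrt{1-r^2}\,W_2$ and all parameters are assumptions of the sketch.

import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 201)
cov = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.25) ** 2)
chol = np.linalg.cholesky(cov + 1e-10 * np.eye(t.size))
r, n_paths = 0.6, 20000

for u in [1.0, 1.5, 2.0]:
    w1 = chol @ rng.standard_normal((t.size, n_paths))
    w2 = chol @ rng.standard_normal((t.size, n_paths))
    z = np.minimum(w1, r * w1 + np.sqrt(1.0 - r * r) * w2)  # the process X ^ Y
    p_hat = np.mean(z.max(axis=0) >= u)
    above = z >= u
    # Euler characteristic on a grid = number of runs of consecutive points above u
    n_runs = above[0, :].astype(int) + (above[1:, :] & ~above[:-1, :]).sum(axis=0)
    print(f"u = {u}: P ~ {p_hat:.4f},  E(EC) ~ {n_runs.mean():.4f}")

Consistent with (9.2.13), the two estimates move together as $u$ grows, with the gap shrinking on the scale predicted by the super-exponentially small error term.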
BIBLIOGRAPHY

[1] R. J. Adler (1981), The Geometry of Random Fields. Wiley, New York.

[2] R. J. Adler (2000), On excursion sets, tube formulas and maxima of random fields. Ann. Appl. Probab. 10, 1–74.

[3] R. J. Adler and J. E. Taylor (2007), Random Fields and Geometry. Springer, New York.

[4] R. J. Adler and J. E. Taylor (2011), Topological Complexity of Smooth Random Functions. Lecture Notes in Mathematics 2019, École d'Été de Probabilités de Saint-Flour, Springer, Heidelberg.

[5] R. J. Adler, J. E. Taylor and K. J. Worsley (2012), Applications of Random Fields and Geometry: Foundations and Case Studies. In preparation.

[6] A. B. Anshin (2006), On the probability of simultaneous extremes of two Gaussian nonstationary processes. Theory Probab. Appl. 50, 353–366.

[7] M. Arendarczyk and K. Dębicki (2011), Asymptotics of supremum distribution of a Gaussian process over a Weibullian time. Bernoulli 17, 194–210.

[8] M. Arendarczyk and K. Dębicki (2012), Exact asymptotics of supremum of a stationary Gaussian process over a random interval. Statist. Probab. Lett. 82, 645–652.

[9] A. Auffinger (2011), Random Matrices, Complexity of Spin Glasses and Heavy Tailed Processes. Ph.D. Thesis, New York University.

[10] J.-M. Azaïs, J.-M. Bardet and M. Wschebor (2002), On the tails of the distribution of the maximum of a smooth stationary Gaussian process. ESAIM Probab. Statist. 6, 177–184.

[11] J.-M. Azaïs and C. Delmas (2002), Asymptotic expansions for the distribution of the maximum of Gaussian random fields. Extremes 5, 181–212.

[12] J.-M. Azaïs and M. Wschebor (2005), On the distribution of the maximum of a Gaussian field with d parameters. Ann. Appl. Probab. 15, 254–278.

[13] J.-M. Azaïs and M. Wschebor (2008), A general expression for the distribution of the maximum of a Gaussian field and the approximation of the tail. Stoch. Process. Appl. 118, 1190–1218.

[14] J.-M. Azaïs and M. Wschebor (2009), Level Sets and Extrema of Random Processes and Fields. John Wiley & Sons, Hoboken, NJ.

[15] K. Bartz, S. C. Kou and R. J. Adler (2011), Estimating thresholding levels for random fields via Euler characteristics. Preprint.

[16] C. Borell (1975), The Brunn–Minkowski inequality in Gauss space. Invent. Math. 30, 205–216.

[17] N. Chamandy, K. J. Worsley, J. E. Taylor and F. Gosselin (2008), Tilted Euler characteristic densities for central limit random fields, with application to "bubbles". Ann. Statist. 36, 2471–2507.

[18] H. P. Chan and T. L. Lai (2006), Maxima of asymptotically Gaussian random fields and moderate deviation approximations to boundary crossing probabilities of sums of random variables with multidimensional indices. Ann. Probab. 34, 80–121.

[19] D. Cheng and Y. Xiao (2012), The mean Euler characteristic and excursion probability of Gaussian random fields with stationary increments. arXiv:1211.6693.

[20] K. Dębicki (2002), Ruin probability for Gaussian integrated processes. Stoch. Process. Appl. 98, 151–174.

[21] K. Dębicki, K. M. Kosiński, M. Mandjes and T. Rolski (2010), Extremes of multidimensional Gaussian processes. Stoch. Process. Appl. 120, 2289–2301.

[22] A. B. Dieker (2006), Extremes of Gaussian processes over an infinite horizon. Stoch. Process. Appl. 115, 207–248.

[23] T. Gneiting (2012), Strictly and non-strictly positive definite functions on spheres. arXiv:1111.7077v4.

[24] J. Hüsler and V. I. Piterbarg (1999), Extremes of a certain class of Gaussian processes. Stoch. Process. Appl. 83, 257–271.

[25] S. G. Kobelkov (2005), The ruin problem for the stationary Gaussian process. Theory Probab. Appl. 49, 155–163.

[26] T. J. Kozubowski, M. M. Meerschaert, F. J. Molz and S. Lu (2004), Fractional Laplace model for hydraulic conductivity. Geophysical Res. Lett. 31, L08501.

[27] A. Ladneva and V. Piterbarg (2000), On double extremes of Gaussian stationary processes. Available at http://ladneva.stormpages.com/article.pdf.

[28] T. J. Kozubowski, M. M. Meerschaert and K. Podgórski (2006), Fractional Laplace motion. Adv. Appl. Probab. 38, 451–464.

[29] H. Landau and L. A. Shepp (1970), On the supremum of a Gaussian process. Sankhyā 32, 369–378.

[30] J. Liu (2012), Tail approximations of integrals of Gaussian random fields. Ann. Probab. 40, 1069–1104.

[31] M. B. Marcus and L. A. Shepp (1970), Sample behaviour of Gaussian processes. Proceedings of the 6th Berkeley Symposium on Mathematics, Statistics and Probability, Vol. 2, University of California Press, Berkeley, CA, 423–442.

[32] D. Marinucci and G. Peccati (2011), Random Fields on the Sphere: Representation, Limit Theorems and Cosmological Applications. Cambridge University Press.

[33] D. Marinucci and S. Vadlamani (2013), High-frequency asymptotics for Lipschitz–Killing curvatures of excursion sets on the sphere. arXiv:1303.2456.

[34] G. Matheron (1973), The intrinsic random functions and their applications. Adv. Appl. Probab. 5, 439–468.

[35] C. Mueller and R. Tribe (2002), Hitting properties of a random string. Electron. J. Probab. 7, 1–29.

[36] Y. Nardi (2006), Gaussian, and Asymptotically Gaussian, Random Fields and Their Maxima. Ph.D. dissertation, Hebrew Univ. of Jerusalem.

[37] Y. Nardi, D. O. Siegmund and B. Yakir (2008), The distribution of maxima of approximately Gaussian random fields. Ann. Statist. 36, 1375–1403.

[38] J. Pickands III (1969a), Asymptotic properties of the maximum in a stationary Gaussian process. Trans. Amer. Math. Soc. 145, 75–86.

[39] J. Pickands III (1969b), Upcrossing probabilities for stationary Gaussian processes. Trans. Amer. Math. Soc. 145, 51–73.

[40] V. I. Piterbarg (1996a), Asymptotic Methods in the Theory of Gaussian Processes and Fields. Translations of Mathematical Monographs, Vol. 148, American Mathematical Society, Providence, RI.

[41] V. I. Piterbarg (1996b), Rice's method for large excursions of Gaussian random fields. Technical Report No. 478, Center for Stochastic Processes, Univ. North Carolina.

[42] V. I. Piterbarg and S. Stamatovic (2001), On maximum of Gaussian non-centered fields indexed on smooth manifolds. In: Asymptotic Methods in Probability and Statistics with Applications (St. Petersburg, 1998), 189–203, Stat. Ind. Technol., Birkhäuser Boston, Boston, MA.

[43] V. I. Piterbarg and S. Stamatovic (2005), Crude asymptotics of the probability of simultaneous high extrema of two Gaussian processes: the dual action function. Russ. Math. Surv. 60, 167–168.

[44] J. Potthoff (2010), Sample properties of random fields III: differentiability. Commun. Stoch. Anal. 4, 335–353.

[45] C. Qualls and H. Watanabe (1973), Asymptotic properties of Gaussian random fields. Trans. Amer. Math. Soc. 177, 155–171.

[46] I. J. Schoenberg (1942), Positive definite functions on spheres. Duke Math. J. 9, 96–108.

[47] K. Shafie, B. Sigal, D. O. Siegmund and K. J. Worsley (2003), Rotation space random fields with an application to fMRI data. Ann. Statist. 31, 1732–1771.

[48] M. L. Stein (1999), Interpolation of Spatial Data: Some Theory for Kriging. Springer, New York.

[49] M. L. Stein (2012), On a class of space-time intrinsic random functions. Bernoulli, to appear.

[50] J. Sun (1993), Tail probabilities of the maxima of Gaussian random fields. Ann. Probab. 21, 34–71.

[51] G. Szegő (1975), Orthogonal Polynomials. American Mathematical Society, Providence, RI.

[52] A. Takemura and S. Kuriki (2002), On the equivalence of the tube and Euler characteristic methods for the distribution of the maximum of Gaussian fields over piecewise smooth domains. Ann. Appl. Probab. 12, 768–796.

[53] J. E. Taylor and R. J. Adler (2003), Euler characteristics for Gaussian fields on manifolds. Ann. Probab. 31, 533–563.

[54] J. E. Taylor, A. Takemura and R. J. Adler (2005), Validity of the expected Euler characteristic heuristic. Ann. Probab. 33, 1362–1396.

[55] J. E. Taylor and K. J. Worsley (2007), Detecting sparse signals in random fields, with an application to brain mapping. J. Amer. Statist. Assoc. 102, 913–928.

[56] J. E. Taylor and K. J. Worsley (2008), Random fields of multivariate test statistics, with applications to shape analysis. Ann. Statist. 36, 1–27.

[57] B. S. Tsirelson, I. A. Ibragimov and V. N. Sudakov (1976), Norms of Gaussian sample functions. Proceedings of the 3rd Japan–USSR Symposium on Probability Theory (Tashkent, 1975), Lecture Notes in Mathematics, Vol. 550, Springer-Verlag, Berlin, 20–41.

[58] R. Wong (2001), Asymptotic Approximations of Integrals. SIAM, Philadelphia, PA.

[59] K. J. Worsley (1994), Local maxima and the expected Euler characteristic of excursion sets of χ², F, and t fields. Adv. Appl. Probab. 26, 13–42.

[60] K. J. Worsley (1996), The geometry of random images. Chance 9, 27–39.

[61] Y. Xiao (2009), Sample path properties of anisotropic Gaussian random fields. In: A Minicourse on Stochastic Partial Differential Equations (D. Khoshnevisan and F. Rassoul-Agha, editors), Lecture Notes in Math. 1962, 145–212, Springer, New York.

[62] Y. Xue and Y. Xiao (2011), Fractal and smoothness properties of space-time Gaussian models. Frontiers Math. China 6, 1217–1246.

[63] A. M. Yaglom (1957), Some classes of random fields in n-dimensional space, related to stationary random processes. Theory Probab. Appl. 2, 273–320.