MULTIVARIATE GAUSSIAN RANDOM FIELDS: EXTREME VALUES, PARAMETER ESTIMATION AND PREDICTION

By

Yuzhen Zhou

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

Statistics – Doctor of Philosophy

2015

ABSTRACT

MULTIVARIATE GAUSSIAN RANDOM FIELDS: EXTREME VALUES, PARAMETER ESTIMATION AND PREDICTION

By

Yuzhen Zhou

Motivated by the wide applications of multivariate Gaussian random fields in spatial modeling, we study the tail probability of the extremes, the inference of fractal indices, and large covariance modeling of multivariate Gaussian random fields. First, we establish the precise asymptotics for the extremes of bivariate Gaussian random fields by applying the double sum method. The main results can be applied to bivariate Matérn fields. Second, we study the joint asymptotic properties of estimating the fractal indices of bivariate Gaussian random processes under infill asymptotics, which indicates that the estimators are asymptotically independent of the cross correlation in most cases. Third, we propose a framework to couple high-dimensional and spatially indexed LiDAR signals with forest variables using a fully Bayesian functional spatial data analysis, which is able to capture within and among LiDAR signal/forest variable associations within and across locations. The proposed modeling framework is illustrated by a simulation study and by analyzing LiDAR and spatially coinciding forest inventory data collected on the Penobscot Experimental Forest, Maine.

Copyright by
YUZHEN ZHOU
2015

I dedicate this dissertation to my family.

ACKNOWLEDGMENTS

It is a pleasure to express my sincere gratitude to my advisor, Professor Yimin Xiao, for his great mentoring and continuous support. I am fully indebted to him for his understanding, patience and encouragement. He is always helpful and tries his best to create opportunities for young researchers. He is open-minded and has great enthusiasm for pursuing excellence in research, which inspires me to pursue a career in academia.
I would like to thank Professor Andrew Finley, who is the mentor of the last chapter of the thesis. Andy brought me to a very interesting applied topic. He always shows his enthusiasm in research and gave me many valuable suggestions during the project. I really appreciate all his efforts in helping me make progress.

I also wish to thank Professor Pingshou Zhong. We have been working on a project since the early years of my PhD. His knowledge, experience and encouragement helped me a lot while I was starting to do research.

In addition, I would like to thank Professor Chae Young Lim and Professor Lifeng Luo for serving on my dissertation committee. I am extremely grateful for their assistance and suggestions throughout the thesis work.

Finally, I would like to thank my family and my friends for their love and support. I give my special thanks to my love Yingjie, who was always standing by me in my hard times during this work.

TABLE OF CONTENTS

LIST OF TABLES ... viii
LIST OF FIGURES ... ix

Chapter 1  Introduction ... 1
  1.1 Multivariate Gaussian random fields ... 1
  1.2 Overview ... 2

Chapter 2  Tail asymptotics for the Extremes of Bivariate Gaussian Random Fields ... 5
  2.1 Introduction ... 5
  2.2 Main Results and Discussions ... 9
  2.3 An example: positively correlated bivariate Matérn fields ... 12
  2.4 Proofs of the main results ... 15
  2.5 Proof of Lemmas ... 26

Chapter 3  Joint asymptotics of estimating the fractal indices of bivariate Gaussian random processes ... 51
  3.1 Introduction ... 51
  3.2 The bivariate Gaussian random processes ... 53
  3.3 The increment-based estimators ... 54
    3.3.1 The dilated discretized processes ... 54
    3.3.2 The GLS estimators for $(\alpha_{11}, \alpha_{22})^\top$ ... 57
  3.4 Asymptotic properties ... 58
    3.4.1 Variance of $Z_n$ and asymptotic normality ... 59
    3.4.2 Linear estimators of $(\alpha_{11}, \alpha_{22})^\top$ ... 61
  3.5 An example: the bivariate Matérn field on $\mathbb{R}$ ... 64
  3.6 Proof of the main results ... 67

Chapter 4  A Bayesian functional data model for coupling high-dimensional LiDAR and forest variables over large geographic domains ... 86
  4.1 Introduction ... 86
  4.2 Preliminary: modified Gaussian predictive process ... 88
  4.3 The model ... 90
    4.3.1 Modified Gaussian predictive model for $z$ ... 90
    4.3.2 Joint model of $y$ and $Z$ ... 92
    4.3.3 Specification of the random effects of $Z$ and $y$ ... 96
  4.4 Bayesian implementation and computational issues ... 97
    4.4.1 Data equation ... 97
    4.4.2 Prior specifications and full conditional sampling ... 99
  4.5 Predictions ... 102
    4.5.1 Predictions for $y$ and $Z$ at new locations ... 102
    4.5.2 Predictions for $y$ given $Z$ are observed ... 103
  4.6 Illustrations ... 104
    4.6.1 Simulation experiments ... 105
    4.6.2 Forest LiDAR and biomass data analysis ... 107
      4.6.2.1 Data description and preparation ... 107
      4.6.2.2 Results ... 108

BIBLIOGRAPHY ... 111

LIST OF TABLES

Table 4.1  Parameter credible intervals, 50% (2.5%, 97.5%), and predictive validation. Entries in italics indicate where the true value is missed. ... 107

Table 4.2  Parameter credible intervals, 50% (2.5%, 97.5%), and predictive validation. ... 109

LIST OF FIGURES

Figure 4.1  Predicted $y$ given $Z$ are known. Top left: $n_z = 300$ (full knots); top right: $n_z = 200$; bottom left: $n_z = 100$; bottom right: $n_z = 50$. Any red point on the blue line represents the case when the predicted $y$ is equal to the true $y$. ... 106

Figure 4.2  Left: interpolated $y$; the small points indicate where the $y$'s are recorded. Right: signal $Z$ measured at the big red disc marked on the left graph. ... 108

Figure 4.3  Predicted $y$ given $Z$ are known. Top left: $n = 339$ (full knots); top right: $n = 200$; bottom left: $n = 100$; bottom right: $n = 50$. Any red point on the blue line represents the case when the predicted biomass is equal to the observed biomass. ... 110

Chapter 1

Introduction

1.1 Multivariate Gaussian random fields

Before presenting the motivation and main work of this thesis, I would like to give the formal definitions of real-valued Gaussian random fields and multivariate Gaussian random fields (see, e.g., [AT07]).

A random field is a stochastic process indexed by a parameter space, which could be a subset of the Euclidean space $\mathbb{R}^N$. Formally, it is defined as follows.

Definition 1.1.1 (Random fields). Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a complete probability space and let $T$ be a topological space. Denote by $\mathbb{R}^T$ the space of all real-valued functions on $T$. Then a measurable mapping $X: \Omega \to \mathbb{R}^T$ is called a real-valued random field. Measurable mappings from $\Omega$ to $(\mathbb{R}^T)^d$, $d > 1$, are called multivariate random fields or vector-valued random fields.

Hence, $X(\omega)$ is a univariate (or multivariate) function and $X(\omega, t)$ is its value at $t$. We usually omit $\omega$ and write the random field at $t \in T$ as $X(t)$.

A real-valued Gaussian random field is a random field $X$ indexed by a parameter space $T$ whose finite-dimensional distributions $(X(t_1), \ldots, X(t_n))^\top$ are multivariate Gaussian for each $n \in \mathbb{N}$ and each $(t_1, \ldots, t_n) \in T^n$. The distribution of $X$ is determined by its mean and covariance functions, that is,
\[
\mu(t) := \mathbb{E}(X(t)), \qquad C(s,t) := \mathrm{Cov}(X(s), X(t)).
\]

Let $X$ be a vector-valued random field taking values in $\mathbb{R}^d$. Write $X = (X_1, \ldots, X_d)^\top$, where $X_i$ is its $i$-th coordinate process.
$X$ is called a multivariate Gaussian random field if, for any vector $\lambda \in \mathbb{R}^d \setminus \{0\}$, $\sum_{i=1}^d \lambda_i X_i(t)$ is a real-valued Gaussian random field. The distribution of $X$ is determined by its vector-valued mean and matrix-valued covariance functions, that is,
\[
\mu(t) := \big(\mathbb{E}(X_1(t)), \mathbb{E}(X_2(t)), \ldots, \mathbb{E}(X_d(t))\big)^\top
\]
and
\[
\mathrm{Cov}(X(s), X(t)) =
\begin{pmatrix}
C_{11}(s,t) & C_{12}(s,t) & \cdots & C_{1d}(s,t)\\
C_{21}(s,t) & C_{22}(s,t) & \cdots & C_{2d}(s,t)\\
\vdots & \vdots & \ddots & \vdots\\
C_{d1}(s,t) & C_{d2}(s,t) & \cdots & C_{dd}(s,t)
\end{pmatrix},
\]
where $C_{ij}(s,t) := \mathrm{Cov}(X_i(s), X_j(t))$, $i, j = 1, 2, \ldots, d$.

1.2 Overview

This work is motivated by the fact that there is an increasing need for analyzing multivariate spatial datasets [GDFG10, Wac03]. There is a rich literature on modeling univariate spatial data [Cre93, Ste99]. However, in the multivariate setting, model specification is more challenging because we also wish to capture cross-covariance among outcomes and sites [GDFG10, CW11, BCG14].

Multivariate Gaussian random fields are a good candidate model to characterize the covariance structure of multivariate spatial datasets. [GKS10] introduced the full bivariate Matérn field $X(t) = (X_1(t), X_2(t))$, which is an $\mathbb{R}^2$-valued, stationary Gaussian random field on $\mathbb{R}^N$ with zero mean and matrix-valued Matérn covariance functions. As a spatially correlated error field, this model was applied to probabilistic weather field forecasting for surface pressure and temperature over the North American Pacific Northwest.

While multivariate Gaussian random fields are widely used in spatial modeling, they raise many interesting problems in both theory and modeling. In this work, we focus on three topics in this area: the tail probability of the extremes, the joint asymptotics of fractal indices under infill asymptotics, and large covariance modeling with Gaussian predictive processes. The remaining chapters present the details of these three problems.
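The matrix-valued covariance function above must of course be a valid (positive semidefinite) one. A quick numerical illustration of this requirement, using a simple separable construction $C_{ij}(s,t) = A_{ij}\,\rho(s-t)$ with $A$ positive semidefinite (this construction and the parameter values are illustrative only, not the models studied in this thesis):

```python
import numpy as np

# A simple valid matrix-valued covariance: C_ij(s,t) = A_ij * rho(s - t),
# where rho is a valid correlation function and A is positive semidefinite.
sites = np.linspace(0.0, 1.0, 25)
R = np.exp(-3.0 * np.abs(sites[:, None] - sites[None, :]))  # exponential correlation
A = np.array([[1.0, 0.6],
              [0.6, 1.0]])                                   # PSD cross-covariance matrix

# The stacked covariance of (X_1(s_1..s_n), X_2(s_1..s_n)) is the
# Kronecker product A (x) R; its eigenvalues are products of eigenvalues
# of A and R, hence nonnegative.
Sigma = np.kron(A, R)
min_eig = np.linalg.eigvalsh(Sigma).min()
```

Any choice of a non-PSD matrix $A$ (e.g., off-diagonal entries exceeding 1) would break positive semidefiniteness, which is exactly why cross-covariance specification in the multivariate setting is delicate.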
In Chapter 2, we study the tail probability of the extremes for a class of bivariate spatial models, i.e.,
\[
\mathbb{P}\Big(\max_{s\in A_1} X_1(s) > u,\ \max_{t\in A_2} X_2(t) > u\Big), \quad \text{as } u \to \infty.
\]
Applying the double sum method [Pit96, Ans06], we establish an explicit form for the tail probability of double extremes for the bivariate field. We find that the area where the cross correlation attains its maximum has the highest chance of causing extreme events. Also, the smoothness of the sample surface of each component affects the extreme probability.

In Chapter 3, we study the joint asymptotics of the fractal indices for bivariate Gaussian random processes. We want to see how the cross dependence structure affects the efficiency of the estimators. The fractal index of each component is estimated separately by the increment-based method [CW00, CW04]. We establish the joint asymptotics of the bivariate estimators under infill asymptotics, which indicates that the estimators are asymptotically independent of the cross correlation in most cases.

In Chapter 4, we propose a framework to couple high-dimensional and spatially indexed LiDAR signals with forest variables using a fully Bayesian functional spatial data analysis. This modeling framework allows us to capture within and among LiDAR signal/forest variable associations within and across locations. However, the computational complexity of such models increases in cubic order with the number of spatial locations, the dimension of the LiDAR signal, and the number of forest variables, a characteristic common to multivariate spatial process models. To address this computational challenge, we propose an approximate model by employing the modified Gaussian predictive processes [BGFS08, FSBG09] twice, both in locations and in heights.
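The predictive process referenced above tames the cubic cost by projecting the spatial process onto a small set of $m \ll n$ knots. As a rough sketch of the underlying (unmodified) predictive-process idea of [BGFS08], under an assumed exponential covariance and illustrative knot choices (this is not the Chapter 4 model itself), the induced covariance is low rank and never overstates the variance; the "modified" version of [FSBG09] repairs exactly this variance deficit:

```python
import numpy as np

def expcov(A, B, phi=3.0, sig2=1.0):
    """Exponential covariance sig2 * exp(-phi * |a - b|) on the line."""
    return sig2 * np.exp(-phi * np.abs(A[:, None] - B[None, :]))

sites = np.linspace(0.0, 1.0, 60)   # n observation sites
knots = np.linspace(0.0, 1.0, 10)   # m << n knots

C_nn = expcov(sites, sites)
C_nm = expcov(sites, knots)
C_mm = expcov(knots, knots)

# Predictive-process covariance C_nm C_mm^{-1} C_mn: rank at most m,
# so downstream linear algebra costs O(n m^2) instead of O(n^3).
C_pp = C_nm @ np.linalg.solve(C_mm, C_nm.T)
rank = np.linalg.matrix_rank(C_pp)

# diag(C_pp) is the variance of E[w(s) | w(knots)], which is at most
# Var w(s); the modified predictive process adds this deficit back.
var_deficit = np.diag(C_nn) - np.diag(C_pp)
```

The variance deficit vanishes at the knots and is largest between them, which is the bias the modified predictive process corrects on the diagonal.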
We end the introduction with some notation. For any $t \in \mathbb{R}^N$, $|t|$ denotes its $l_2$-norm. An integer vector $k \in \mathbb{Z}^N$ is written as $k = (k_1, \ldots, k_N)$. For $k \in \mathbb{Z}^N$ and $T \in \mathbb{R}_+ = [0, \infty)$, we define the cube $[kT, (k+1)T] := \prod_{i=1}^N [k_i T, (k_i + 1) T]$. For any integer $n$, $\mathrm{mes}_n(\cdot)$ denotes the $n$-dimensional Lebesgue measure. An unspecified positive and finite constant will be denoted by $C_0$. More specific constants are numbered as $C_1, C_2, \ldots$.

Chapter 2

Tail asymptotics for the Extremes of Bivariate Gaussian Random Fields

Let $\{X(t) = (X_1(t), X_2(t))^\top, t \in \mathbb{R}^N\}$ be an $\mathbb{R}^2$-valued, continuous, locally stationary Gaussian random field with $\mathbb{E}[X(t)] = \mathbf{0}$. For any compact sets $A_1, A_2 \subset \mathbb{R}^N$, the precise asymptotic behavior of the excursion probability
\[
\mathbb{P}\Big(\max_{s\in A_1} X_1(s) > u,\ \max_{t\in A_2} X_2(t) > u\Big), \quad \text{as } u \to \infty,
\]
is investigated by applying the double sum method. The explicit results depend not only on the smoothness parameters of the coordinate fields $X_1$ and $X_2$, but also on their maximum cross correlation $\rho$.

2.1 Introduction

For a real-valued Gaussian random field $X = \{X(t), t \in T\}$, where $T$ is the parameter set, defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, the excursion probability $\mathbb{P}\{\sup_{t\in T} X(t) > u\}$ has been studied extensively. Extending the seminal work of [Pic69], [Pit96] developed a systematic theory on the asymptotics of the aforementioned excursion probability for a broad class of Gaussian random fields. Their method, which is called the double sum method, has been further extended by [CL06] to non-Gaussian random fields and, recently, by [DHJ14] to a non-stationary Gaussian random field $\{X(s,t), (s,t) \in \mathbb{R}^2\}$ whose variance function attains its maximum on a finite number of disjoint line segments. For smooth Gaussian random fields, more accurate approximation results have been established by using integral and differential-geometric methods (see, e.g., [Adl00], [AT07], [AW09] and the references therein). For Gaussian and asymptotically Gaussian random fields, the change of measure method was developed by [NSY08] and [Yak13]. Many of the results in the aforementioned references have found important applications in statistics and other scientific areas. We refer to [ATW10] and [Yak13] for further information.
However, only a few authors have studied the excursion probability of multivariate random fields. [PS05] and [DKMR10] established large deviation results for the excursion probability in the multivariate case. [Ans06] obtained precise asymptotics for a special class of non-stationary bivariate Gaussian processes, under quite restrictive conditions. [HJ14] recently derived precise asymptotics for the excursion probability of a bivariate fractional Brownian motion with constant cross correlation. The last two papers only consider multivariate processes on the real line $\mathbb{R}$ with specific cross dependence structures. [CX14] established a precise approximation to the excursion probability by using the mean Euler characteristics of the excursion set for a broad class of smooth bivariate Gaussian random fields on $\mathbb{R}^N$. In the present chapter we investigate the asymptotics of the excursion probability of non-smooth bivariate Gaussian random fields on $\mathbb{R}^N$, where the methods are totally different from those for the smooth case.

Our work is also motivated by the recent increasing interest in using multivariate random fields for modeling multivariate measurements obtained at spatial locations (see, e.g., [GDFG10], [Wac03]). Several classes of multivariate spatial models have been introduced by [GKS10], [AGS12] and [KN12]. We will show in Section 2.3 that the main results of this chapter are applicable to bivariate Gaussian random fields with Matérn cross-covariances introduced by [GKS10]. Furthermore, we expect that the excursion probabilities considered in this chapter will have interesting statistical applications.

Let $\{X(t), t \in \mathbb{R}^N\}$ be an $\mathbb{R}^2$-valued (not necessarily stationary) Gaussian random field with $\mathbb{E}[X(t)] = \mathbf{0}$. We write $X(t) = (X_1(t), X_2(t))^\top$ and
\[
r_{ij}(s,t) := \mathbb{E}[X_i(s) X_j(t)], \quad i, j = 1, 2. \tag{2.1.1}
\]
Throughout this chapter, we impose the following assumptions.

i) $r_{ii}(s,t) = 1 - c_i |t-s|^{\alpha_i} + o(|t-s|^{\alpha_i})$, where $\alpha_i \in (0,2)$ and $c_i > 0$ ($i = 1,2$) are constants.

ii) $|r_{ii}(s,t)| < 1$ for all $|t-s| > 0$, $i = 1,2$.

iii) $r_{12}(s,t) = r_{21}(s,t) := r(|t-s|)$. Namely, the cross correlation is isotropic.

iv) The function $r(\cdot): [0,\infty) \to \mathbb{R}$ attains its maximum only at zero with $r(0) = \rho \in (0,1)$, i.e., $|r(t)| < \rho$ for all $t > 0$. Moreover, we assume $r'(0) = 0$, $r''(0) < 0$, and that there exists $\delta > 0$ such that $r''(s)$ exists and is continuous for all $s \in [0, \delta]$.

The isotropy of the cross correlation here is meaningful and common in spatial statistics, where it is usually assumed that the correlation decreases as the distance between two observations increases (see, e.g., [GDFG10], [GKS10]). We only assume that the cross correlation is twice continuously differentiable around the area where the maximum correlation is attained, which is a weaker assumption than that in [CX14], who considered smooth bivariate Gaussian fields.

For any compact sets $A_1, A_2 \subset \mathbb{R}^N$, we investigate the asymptotic behavior of the following excursion probability:
\[
\mathbb{P}\Big(\max_{s\in A_1} X_1(s) > u,\ \max_{t\in A_2} X_2(t) > u\Big), \quad \text{as } u \to \infty. \tag{2.1.2}
\]
The main results of this chapter are Theorems 2.2.1 and 2.2.2 below, which demonstrate that the excursion probability (2.1.2) depends not only on the smoothness parameters of the coordinate fields $X_1$ and $X_2$, but also on their maximum cross correlation $\rho$. The proofs of Theorems 2.2.1 and 2.2.2 are based on the double sum method. Compared with the earlier works of [LP00], [Ans06] and [HJ14], the main difficulty in the present work is that the correlation function of $X_1$ and $X_2$ attains its maximum over the set $D := \{(s,s): s \in A_1 \cap A_2\}$, which may have different geometric configurations. Several non-trivial modifications for carrying out the arguments in the double sum method have to be made.

This work raises several open questions. For example, it would be interesting to study the excursion probabilities when $\{X(t), t \in \mathbb{R}^N\}$ is anisotropic or non-stationary, or takes values in $\mathbb{R}^d$ with $d \ge 3$. In the last problem, the covariance and cross-covariance structures become more complicated. We expect that the pairwise maximum cross correlations and the size (e.g., the Lebesgue measure) of the set where all the pairwise cross correlations attain their maximum values (if not empty) will play an important role.
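Before moving on, note that assumption iv) above is easy to check numerically for a concrete cross-correlation function. The sketch below (assuming NumPy; the choice $r(\tau) = \rho(1+\tau)e^{-\tau}$, i.e., $\rho$ times a Matérn correlation with smoothness $3/2$, is purely an illustration) verifies $r'(0) = 0$ and $r''(0) < 0$ by finite differences, and that $|r(\tau)| < \rho$ away from zero:

```python
import numpy as np

rho = 0.5

def r(tau):
    """Illustrative cross correlation r(tau) = rho * (1 + tau) * exp(-tau).

    Analytically, r'(tau) = -rho * tau * exp(-tau), so r'(0) = 0 and
    r''(0) = -rho < 0, matching assumption iv)."""
    tau = np.asarray(tau, dtype=float)
    return rho * (1.0 + tau) * np.exp(-tau)

h = 1e-4
d1 = (r(h) - r(0.0)) / h                      # one-sided estimate of r'(0)
d2 = (r(2 * h) - 2 * r(h) + r(0.0)) / h**2    # estimate of r''(0)

grid = np.linspace(1e-3, 10.0, 2000)
max_away_from_zero = np.abs(r(grid)).max()    # should stay strictly below rho
```

The same finite-difference check applies to any candidate $r(\cdot)$, e.g., the Matérn cross-covariances of Section 2.3 with $\nu_{12} > 1$.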
The remaining sections of this chapter are organized as follows. Section 2.2 states the main theorems with some discussions. We provide an application of the main theorems to the bivariate Gaussian fields with Matérn cross-covariances introduced by [GKS10] in Section 2.3. We state the key lemmas and provide proofs of our main theorems in Section 2.4. The proofs of the lemmas are given in Section 2.5.

2.2 Main Results and Discussions

We recall the Pickands constant first (see, e.g., [Pic69, Pit96]). Let $\chi = \{\chi(t), t \in \mathbb{R}^N\}$ be a (rescaled) fractional Brownian motion with Hurst index $\alpha/2 \in (0,1)$, which is a centered Gaussian field with covariance function $\mathbb{E}[\chi(t)\chi(s)] = |t|^\alpha + |s|^\alpha - |t-s|^\alpha$.

As in [LP00] and [Ans06], we define, for any compact sets $\mathcal{S}, \mathcal{T} \subset \mathbb{R}^N$,
\[
H_\alpha(\mathcal{S}, \mathcal{T}) := \int_0^\infty e^s\, \mathbb{P}\Big(\sup_{t\in \mathcal{S}} \big(\chi(t) - |t|^\alpha\big) > s,\ \sup_{t\in \mathcal{T}} \big(\chi(t) - |t|^\alpha\big) > s\Big)\, ds. \tag{2.2.1}
\]
Let $H_\alpha(\mathcal{T}) = H_\alpha(\mathcal{T}, \mathcal{T})$. Then the Pickands constant is defined as
\[
H_\alpha := \lim_{T\to\infty} \frac{H_\alpha([0,T]^N)}{T^N}, \tag{2.2.2}
\]
which is positive and finite (cf. [Pit96]).

Before moving to the tail probability of extremes of a bivariate Gaussian random field, let us consider the tail probability of a standard bivariate Gaussian vector $(\xi, \eta)$ with correlation $\rho$. It is known that (see, e.g., [LP00])
\[
\mathbb{P}(\xi > u,\ \eta > u) = \Psi(u, \rho)(1 + o(1)), \quad \text{as } u \to \infty,
\]
where
\[
\Psi(u,\rho) := \frac{(1+\rho)^2}{2\pi u^2 \sqrt{1-\rho^2}} \exp\Big(-\frac{u^2}{1+\rho}\Big).
\]
The exponential part of the tail probability above is determined by the correlation $\rho$. As shown by Theorems 2.2.1 and 2.2.2 below, a similar phenomenon also happens for the tail probability of double extremes of $\{X(t), t \in \mathbb{R}^N\}$, where the exponential part is determined by the maximum cross correlation of the coordinate fields $X_1$ and $X_2$.

We will study double extremes of $X$ on the domain $A_1 \times A_2$, where $A_1, A_2$ are bounded Jordan measurable sets in $\mathbb{R}^N$; that is, the boundaries of $A_1$ and $A_2$ have $N$-dimensional Lebesgue measure 0 (see, e.g., [Pit96], p. 105). We only consider the case $A_1 \cap A_2 \neq \emptyset$, in which the maximum cross correlation $\rho$ can be attained.

If $\mathrm{mes}_N(A_1 \cap A_2) \neq 0$, we have the following theorem.

Theorem 2.2.1.
Let $\{X(t), t \in \mathbb{R}^N\}$ be a bivariate Gaussian random field satisfying the assumptions in Section 2.1. If $\mathrm{mes}_N(A_1 \cap A_2) \neq 0$, then as $u \to \infty$,
\[
\begin{aligned}
\mathbb{P}\Big(\max_{s\in A_1} X_1(s) > u,\ \max_{t\in A_2} X_2(t) > u\Big)
&= (2\pi)^{N/2} (-r''(0))^{-N/2}\, c_1^{N/\alpha_1} c_2^{N/\alpha_2}\, \mathrm{mes}_N(A_1 \cap A_2)\, H_{\alpha_1} H_{\alpha_2}\\
&\quad \times (1+\rho)^{-N(\frac{2}{\alpha_1} + \frac{2}{\alpha_2} - 1)}\, u^{N(\frac{2}{\alpha_1} + \frac{2}{\alpha_2} - 1)}\, \Psi(u,\rho)(1 + o(1)).
\end{aligned} \tag{2.2.3}
\]

If $\mathrm{mes}_N(A_1 \cap A_2) = 0$, the above theorem is not informative. We have not been able to obtain an explicit formula in the general case. Instead, we consider the special cases
\[
A_1 = A_{1,M} \times \prod_{j=M+1}^{N} [S_j, T_j] \quad \text{and} \quad A_2 = A_{2,M} \times \prod_{j=M+1}^{N} [T_j, R_j], \tag{2.2.4}
\]
where $A_{1,M}$ and $A_{2,M}$ are $M$-dimensional Jordan sets with $\mathrm{mes}_M(A_{1,M} \cap A_{2,M}) \neq 0$ and $S_j \le T_j \le R_j$, $j = M+1, \ldots, N$, $0 \le M \le N-1$. For simplicity of notation, let $\mathrm{mes}_0(\cdot) \equiv 1$. Our next theorem shows that the excursion probability is smaller than that in (2.2.3) by a factor of $u^{M-N}$.

Theorem 2.2.2. Let $\{X(t), t \in \mathbb{R}^N\}$ be a bivariate Gaussian random field satisfying the assumptions in Section 2.1, and let $A_1, A_2$ be as in (2.2.4) with $\mathrm{mes}_M(A_{1,M} \cap A_{2,M}) > 0$. Then as $u \to \infty$,
\[
\begin{aligned}
\mathbb{P}\Big(\max_{s\in A_1} X_1(s) > u,\ \max_{t\in A_2} X_2(t) > u\Big)
&= (2\pi)^{M/2} (-r''(0))^{-\frac{2N-M}{2}}\, c_1^{N/\alpha_1} c_2^{N/\alpha_2}\, H_{\alpha_1} H_{\alpha_2}\, \mathrm{mes}_M(A_{1,M} \cap A_{2,M})\\
&\quad \times (1+\rho)^{2N - M - \frac{2N}{\alpha_1} - \frac{2N}{\alpha_2}}\, u^{M + N(\frac{2}{\alpha_1} + \frac{2}{\alpha_2} - 2)}\, \Psi(u,\rho)(1 + o(1)).
\end{aligned} \tag{2.2.5}
\]

Remark 2.2.3. The following are some additional remarks about Theorems 2.2.1 and 2.2.2.

- The excursion probability in (2.1.2) depends on the region where the maximum cross correlation is attained. In our setting, the maximum cross correlation $\rho$ is attained on $D := \{(s,s) \mid s \in A_1 \cap A_2\}$.

- For Theorem 2.2.2, let us consider the extreme case $M = 0$, i.e., $A_1 \cap A_2 = \{(T_1, \ldots, T_N)\}$. The exponential part still reaches $\exp(-\frac{u^2}{1+\rho})$, although the maximum cross correlation $\rho$ is attained only at a single point.
- To compare our results with [Ans06], we consider a centered Gaussian process $\{X(t) = (X_1(t), X_2(t))^\top, t \in \mathbb{R}\}$ and $A_1 = A_2 = [0,T]$. In our setting, the cross correlation attains its maximum on the line $D = \{(s,s) \mid s \in [0,T]\}$, while in [Ans06] it is attained only at a unique point in $[0,T] \times [0,T]$ because of the assumption C2 there. This is the reason why the power of $u$ in our setting is $\frac{2}{\alpha_1} + \frac{2}{\alpha_2} - 3$ instead of $\frac{2}{\alpha_1} + \frac{2}{\alpha_2} - 4$ as in [Ans06].

- Even though Theorem 2.2.2 only deals with a special case of $A_1, A_2$ with $\mathrm{mes}_N(A_1 \cap A_2) = 0$, its method of proof can be applied to more general cases provided further information on $A_1$ and $A_2$ is available. The key step is to re-evaluate the series in Lemma 2.4.5.

2.3 An example: positively correlated bivariate Matérn fields

In this section, we apply Theorems 2.2.1 and 2.2.2 to bivariate Gaussian random fields with the Matérn correlation functions introduced by [GKS10].

The Matérn correlation function $M(h \mid \nu, a)$, where $a > 0$ and $\nu > 0$ are scale and smoothness parameters, is widely used to model covariance structures in spatial statistics. It is defined as
\[
M(h \mid \nu, a) := \frac{2^{1-\nu}}{\Gamma(\nu)}\, (a|h|)^\nu\, K_\nu(a|h|), \tag{2.3.1}
\]
where $K_\nu$ is a modified Bessel function of the second kind. In [GKS10], the authors introduced the full bivariate Matérn field $X(s) = (X_1(s), X_2(s))^\top$, i.e., an $\mathbb{R}^2$-valued Gaussian random field on $\mathbb{R}^N$ with zero mean and matrix-valued covariance functions
\[
C(h) = \begin{pmatrix} C_{11}(h) & C_{12}(h)\\ C_{21}(h) & C_{22}(h) \end{pmatrix}, \tag{2.3.2}
\]
where $C_{ij}(h) := \mathbb{E}[X_i(s+h) X_j(s)]$ are specified by
\[
C_{11}(h) = \sigma_1^2\, M(h \mid \nu_1, a_1), \tag{2.3.3}
\]
\[
C_{22}(h) = \sigma_2^2\, M(h \mid \nu_2, a_2), \tag{2.3.4}
\]
\[
C_{12}(h) = C_{21}(h) = \rho\, \sigma_1 \sigma_2\, M(h \mid \nu_{12}, a_{12}). \tag{2.3.5}
\]
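For concreteness, the Matérn correlation (2.3.1) and the admissible range of $\rho$ in the common-scale validity condition of [GKS10] (stated as condition (2.3.7) below) can be coded directly. The following sketch (assuming NumPy and SciPy; it is an illustration, not part of the thesis) checks the closed forms $M(h \mid 1/2, a) = e^{-a|h|}$ and $M(h \mid 3/2, a) = (1 + a|h|)e^{-a|h|}$, and evaluates the upper bound on $\rho^2$ for $a_1 = a_2 = a_{12}$:

```python
import numpy as np
from scipy.special import gamma, kv

def matern(h, nu, a=1.0):
    """Matern correlation M(h | nu, a) from (2.3.1); equals 1 at h = 0."""
    z = a * abs(h)
    if z == 0.0:
        return 1.0
    return 2.0 ** (1.0 - nu) / gamma(nu) * z ** nu * kv(nu, z)

def rho_sq_bound(nu1, nu2, nu12, N):
    """Upper bound on rho^2 in the common-scale case a1 = a2 = a12 (cf. (2.3.7))."""
    return (gamma(nu1 + N / 2) * gamma(nu2 + N / 2) / (gamma(nu1) * gamma(nu2))
            * gamma(nu12) ** 2 / gamma(nu12 + N / 2) ** 2)
```

When $\nu_1 = \nu_2 = \nu_{12}$ the bound equals 1, so any $\rho \in (-1, 1)$ is permitted, while pushing $\nu_{12}$ above $(\nu_1 + \nu_2)/2$ shrinks the admissible range of $\rho$.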
a 2 = a 12 =1and ˆ> 0.Moreover,weassume 1 ; 2 2 (0 ; 1)and 12 > 1.Inthiscase, thebivariateMatern f X ( t ) ;t 2 R N g theassumptionsinSection 2.1 . Indeed,Assumptioni)inSection 2.1 issince M ( h j i ;a )=1 c i j t j 2 i + o ( j t j 2 i ) ; where c i = i ) 2 2 i i ) (see,e.g.,[ Ste99 ],p.32).Assumptionii)holdsimmediatelyifwe usethefollowingintegralrepresentationof M ( h j ;a )(see,e.g.,[ AS72 ],Section9 : 6) M ( h j ;a )= +1 = 2) p ˇ ) Z 1 0 cos( a j h j r ) (1+ r 2 ) +1 = 2 dr: (2.3.8) Assumptioniii)holdsbytheofcrosscorrelationin( 2.3.5 ).ForAssumptioniv), weonlyneedtocheckthesmoothnessof M ( h j ;a ).Byanotherintegralrepresentationof 13 M ( h j ;a )(see,e.g.,[ AS72 ],Section9 : 6),i.e., M ( h j ;a )= 2 1 2 ( a j h j ) 2 +1 = ) Z 1 1 e a j h j r ( r 2 1) 1 = 2 dr; onecanverifythat M ( h j ;a )istiablewhen j h j6 =0.Meanwhile, M 00 (0 j ;a ) existsandiscontinuouswhen > 1whichcanbeprovenbytakingtwicederivativestothe integralrepresentationin( 2.3.8 )w.r.t. j h j .SoAssumptioniv)holds. ApplyingTheorem 2.2.1 tothedoubleexcursionprobabilityof X ( s )over[0 ; 1] N ,wehave P max s 2 [0 ; 1] N X 1 ( s ) >u; max t 2 [0 ; 1] N X 2 ( t ) >u =(2 ˇ ) N 2 ( C 00 12 (0)) N 2 c N 2 1 1 c N 2 2 2 (1+ ˆ ) N ( 1 1 + 1 2 1) H 2 1 H 2 2 u N ( 1 1 + 1 2 1) u;ˆ )(1+ o (1)) ; as u !1 : Secondly,whenthetwomeasurementsareobservedontworegionswhichonlysharepart ofboundaries,weuseTheorem 2.2.2 toobtaintheexcursionprobability.Forexample,if X 1 ( s )areobservedontheregion[0 ; 1] N and X 2 ( s )on[0 ; 1] N 1 [1 ; 2],thenas u !1 , P max s 2 [0 ; 1] N X 1 ( s ) >u; max t 2 [0 ; 1] N 1 [1 ; 2] X 2 ( t ) >u =(2 ˇ ) N 1 2 ( C 00 12 (0)) N +1 2 c N 2 1 1 c N 2 2 2 (1+ ˆ ) 1 N ( 1 1 + 1 2 1) H 2 1 H 2 2 u N ( 1 1 + 1 2 1) 1 u;ˆ )(1+ o (1)) : 14 2.4Proofsofthemainresults TheproofsofTheorems 2.2.1 and 2.2.2 arebasedonthedoublesummethod[ Pit96 ]and theworkof[ LP00 ].Sincethelatterdealswiththetailprobability P (max t 2 [ T 1 ;T 2 ] X ( t ) > u; max t 2 [ T 3 ;T 4 ] X ( t ) >u )ofaunivariateGaussianprocess f 
$\{X(t), t \in \mathbb{R}\}$, their method is not sufficient for carrying out the double sum method for a bivariate random field.

The lemmas below extend Lemma 1 and Lemma 9 in [LP00] to the bivariate random field $\{(X_1(t), X_2(t))^\top, t \in \mathbb{R}^N\}$. Moreover, we have strengthened the conclusions by showing that the convergence is uniform in a certain sense. This will be useful later for dealing with sums of local approximations around the regions where the maximum cross correlation is attained. The details will be illustrated in the proof of Theorem 2.2.1 (see, e.g., (2.4.10), (2.4.21)). In the following lemmas, $\{X(t), t \in \mathbb{R}^N\}$ is a bivariate Gaussian random field as in Section 2.1.

Lemma 2.4.1. Let $s_u$ and $t_u$ be two $\mathbb{R}^N$-valued functions of $u$ and let $\tau_u := t_u - s_u$. For any compact sets $\mathcal{S}$ and $\mathcal{T}$ in $\mathbb{R}^N$, we have
\[
\begin{aligned}
&\mathbb{P}\Big(\max_{s\in s_u + u^{-2/\alpha_1}\mathcal{S}} X_1(s) > u,\ \max_{t\in t_u + u^{-2/\alpha_2}\mathcal{T}} X_2(t) > u\Big)\\
&\quad = \frac{(1+\rho)^2}{2\pi\sqrt{1-\rho^2}}\, H_{\alpha_1}\bigg(\frac{c_1^{1/\alpha_1}\mathcal{S}}{(1+\rho)^{2/\alpha_1}}\bigg)\, H_{\alpha_2}\bigg(\frac{c_2^{1/\alpha_2}\mathcal{T}}{(1+\rho)^{2/\alpha_2}}\bigg)\, u^{-2} \exp\Big(-\frac{u^2}{1+r(|\tau_u|)}\Big)(1+o(1)),
\end{aligned} \tag{2.4.1}
\]
where $o(1) \to 0$ uniformly w.r.t. $\tau_u$ satisfying $|\tau_u| \le C\sqrt{\log u}/u$ as $u \to \infty$.

Lemma 2.4.2. Let $s_u$, $t_u$ and $\tau_u$ be the same as in Lemma 2.4.1. For all $T > 0$ and $\mathbf{m}, \mathbf{n} \in \mathbb{Z}^N$,
Let= A 1 A 2 ; ( u )= C p log u=u ,where C isaconstantwhose valuewillbedeterminedlater.Let D = ( s;t ) 2 : j t s j ( u ) : (2.4.3) Since P [ ( s;t ) 2D f X 1 ( s ) >u;X 2 ( t ) >u ) g P max s 2 A 1 X 1 ( s ) >u; max t 2 A 2 X 2 ( t ) >u P [ ( s;t ) 2D f X 1 ( s ) >u;X 2 ( t ) >u ) g + P [ ( s;t ) 2 nD f X 1 ( s ) >u;X 2 ( t ) >u ) g ; 16 itisttoprovethat,bychoosingappropriateconstant C ,wehave P [ ( s;t ) 2D f X 1 ( s ) >u;X 2 ( t ) >u ) g =(2 ˇ ) N 2 ( r 00 (0)) N 2 c N 1 1 c N 2 2 (1+ ˆ ) N ( 2 1 + 2 2 1) mes N ( A 1 \ A 2 ) H 1 H 2 u N ( 2 1 + 2 2 1) u;ˆ )(1+ o (1)) ; as u !1 (2.4.4) and lim u !1 P S ( s;t ) 2 nD f X 1 ( s ) >u;X 2 ( t ) >u ) g P S ( s;t ) 2D f X 1 ( s ) >u;X 2 ( t ) >u ) g =0 : (2.4.5) Weprove( 2.4.4 )Forany T> 0and i =1 ; 2,let d i ( u )= Tu 2 i and,forany k =( k 1 ;:::;k N ) 2 Z N , ( i ) k , N Y j =1 [ k j d i ( u ) ; ( k j +1) d i ( u )]=[ k d i ( u ) ; ( k +1) d i ( u )] : (2.4.6) Let C = f ( k ; l ): (1) k (2) l \D6 = ;g and C = f ( k ; l ): (1) k (2) l Dg : (2.4.7) Itiseasytoseethat [ ( k ; l ) 2C (1) k (2) l D [ ( k ; l ) 2C (1) k (2) l : 17 ThustheLHSof( 2.4.4 )isboundedaboveby P [ ( s;t ) 2D f X 1 ( s ) >u;X 2 ( t ) >u ) g X ( k ; l ) 2C P max s 2 (1) k X 1 ( s ) >u; max t 2 (2) l X 2 ( t ) >u = X ( k ; l ) 2C P max s 2 k d 1 ( u (1) 0 X 1 ( s ) >u; max t 2 l d 2 ( u (2) 0 X 2 ( t ) >u : (2.4.8) Let ˝ kl := l d 2 ( u ) k d 1 ( u ) =( l 1 d 2 ( u ) k 1 d 1 ( u ) ;:::;l N d 2 ( u ) k N d 1 ( u )) : (2.4.9) For( k ; l ) 2C , j ˝ kl j ( u )+ p N ( d 1 ( u )+ d 2 ( u )) 2 ( u )forall u largeenough,since d 1 ( u )= o ( ( u ))and d 2 ( u )= o ( ( u )),as u !1 .Hence,byapplyingLemma 2.4.1 tothe RHSof( 2.4.8 ),weobtain P [ ( s;t ) 2D f X 1 ( s ) >u;X 2 ( t ) >u ) g (1+ ˆ ) 2 (1+ ( u )) 2 ˇ p 1 ˆ 2 u 2 H 1 0 @ c 1 1 1 [0 ;T ] N (1+ ˆ ) 2 1 1 A H 2 0 @ c 1 2 2 [0 ;T ] N (1+ ˆ ) 2 2 1 A X ( k ; l ) 2C exp u 2 1+ r ( j ˝ kl j ) = H 1 0 @ c 1 1 1 [0 ;T ] N (1+ ˆ ) 2 1 1 A H 2 0 @ c 1 2 2 [0 ;T ] N (1+ ˆ ) 2 2 1 A u;ˆ )(1+ ( u )) X ( k ; l ) 2C exp 
ˆ u 2 1 1+ r ( j ˝ kl j ) 1 1+ ˆ ; (2.4.10) 18 wheretheglobalerrorfunction ( u ) ! 0,as u !1 .Theuniformconvergenceof( 2.4.1 )in Lemma 2.4.1 guaranteesthatthelocalerrorterm o (1)foreachpair( k ; l ) 2C isuniformly boundedby ( u ). Theseriesinthelastequalityof( 2.4.10 )isdealtbythefollowingkeylemma,whichgives thepowerofthethreshold u in( 2.4.4 ). Lemma2.4.3. Recalltheset C din ( 2.4.7 ) .Let h ( u ):= X ( k ; l ) 2C exp ˆ u 2 1 1+ r ( j ˝ kl j ) 1 1+ ˆ : (2.4.11) Then,undertheassumptionsofTheorem 2.2.1 ,wehave h ( u )=(2 ˇ ) N= 2 ( r 00 (0)) N= 2 (1+ ˆ ) N T 2 N mes N ( A 1 \ A 2 ) u N ( 2 1 + 2 2 1) (1+ o (1)) ; as u !1 : (2.4.12) Moreover,ifwereplace C in( 2.4.11 )by C din ( 2.4.7 ) ,then( 2.4.12 )stillholds. WedefertheproofofLemma 2.4.3 toSection 2.5 andcontinuewiththeproofofTheorem 2.2.1 .Applying( 2.4.12 )to( 2.4.10 ),weobtain P [ ( s;t ) 2D f X 1 ( s ) >u;X 2 ( t ) >u ) g (2 ˇ ) N 2 ( r 00 (0)) N 2 (1+ ˆ ) N T 2 N mes N ( A 1 \ A 2 ) H 1 0 @ c 1 1 1 [0 ;T ] N (1+ ˆ ) 2 1 1 A H 2 0 @ c 1 2 2 [0 ;T ] N (1+ ˆ ) 2 2 1 A u N ( 2 1 + 2 2 1) u;ˆ )(1+ 1 ( u )) ; (2.4.13) 19 where 1 ( u ) ! 
Hence,
\[
\begin{aligned}
\limsup_{u\to\infty} \frac{\mathbb{P}\big(\bigcup_{(s,t)\in D}\{X_1(s)>u,\ X_2(t)>u\}\big)}{u^{N(\frac{2}{\alpha_1}+\frac{2}{\alpha_2}-1)}\,\Psi(u,\rho)}
&\le (2\pi)^{N/2}(-r''(0))^{-N/2}(1+\rho)^N\, \mathrm{mes}_N(A_1\cap A_2)\, T^{-2N}\\
&\quad \times H_{\alpha_1}\bigg(\frac{c_1^{1/\alpha_1}[0,T]^N}{(1+\rho)^{2/\alpha_1}}\bigg) H_{\alpha_2}\bigg(\frac{c_2^{1/\alpha_2}[0,T]^N}{(1+\rho)^{2/\alpha_2}}\bigg). 
\end{aligned}\tag{2.4.14}
\]
The above inequality holds for every $T > 0$. Therefore, letting $T \to \infty$, we have
\[
\limsup_{u\to\infty} \frac{\mathbb{P}\big(\bigcup_{(s,t)\in D}\{X_1(s)>u,\ X_2(t)>u\}\big)}{u^{N(\frac{2}{\alpha_1}+\frac{2}{\alpha_2}-1)}\,\Psi(u,\rho)}
\le (2\pi)^{N/2}(-r''(0))^{-N/2}\, c_1^{N/\alpha_1} c_2^{N/\alpha_2}\, (1+\rho)^{-N(\frac{2}{\alpha_1}+\frac{2}{\alpha_2}-1)}\, \mathrm{mes}_N(A_1\cap A_2)\, H_{\alpha_1} H_{\alpha_2}. \tag{2.4.15}
\]

On the other hand, the lower bound for the LHS of (2.4.4) can be derived as follows. Define
\[
B = \{(\mathbf{k},\mathbf{l},\mathbf{k}',\mathbf{l}') : (\mathbf{k},\mathbf{l}) \neq (\mathbf{k}',\mathbf{l}'),\ (\mathbf{k},\mathbf{l}), (\mathbf{k}',\mathbf{l}') \in \bar{\mathcal{C}}\}. \tag{2.4.16}
\]
By Bonferroni's inequality and the symmetry property of $B$, the LHS of (2.4.4) is bounded below by
\[
\begin{aligned}
\mathbb{P}\Big(\bigcup_{(s,t)\in D}\{X_1(s)>u,\ X_2(t)>u\}\Big)
&\ge \sum_{(\mathbf{k},\mathbf{l})\in\bar{\mathcal{C}}} \mathbb{P}\Big(\max_{s\in\Delta^{(1)}_{\mathbf{k}}} X_1(s)>u,\ \max_{t\in\Delta^{(2)}_{\mathbf{l}}} X_2(t)>u\Big)\\
&\quad - \frac{1}{2}\sum_{(\mathbf{k},\mathbf{l},\mathbf{k}',\mathbf{l}')\in B} \mathbb{P}\Big(\max_{s\in\Delta^{(1)}_{\mathbf{k}}} X_1(s)>u,\ \max_{t\in\Delta^{(2)}_{\mathbf{l}}} X_2(t)>u,\\
&\qquad\qquad \max_{s\in\Delta^{(1)}_{\mathbf{k}'}} X_1(s)>u,\ \max_{t\in\Delta^{(2)}_{\mathbf{l}'}} X_2(t)>u\Big)
=: \Sigma_1 - \Sigma_2. 
\end{aligned}\tag{2.4.17}
\]
Since $\mathcal{C}$ and $\bar{\mathcal{C}}$ are almost the same, a similar argument as in (2.4.10)–(2.4.15) shows that $\Sigma_1$ is bounded from below by
\[
\begin{aligned}
\Sigma_1 &\ge (2\pi)^{N/2}(-r''(0))^{-N/2}(1+\rho)^N\, \mathrm{mes}_N(A_1\cap A_2)\, T^{-2N}\\
&\quad \times H_{\alpha_1}\bigg(\frac{c_1^{1/\alpha_1}[0,T]^N}{(1+\rho)^{2/\alpha_1}}\bigg) H_{\alpha_2}\bigg(\frac{c_2^{1/\alpha_2}[0,T]^N}{(1+\rho)^{2/\alpha_2}}\bigg)\, u^{N(\frac{2}{\alpha_1}+\frac{2}{\alpha_2}-1)}\, \Psi(u,\rho)(1-\varepsilon_2(u)),
\end{aligned} \tag{2.4.18}
\]
where $\varepsilon_2(u) \to 0$ as $u \to \infty$. Hence, letting $T \to \infty$, we have
\[
\liminf_{u\to\infty} \frac{\Sigma_1}{u^{N(\frac{2}{\alpha_1}+\frac{2}{\alpha_2}-1)}\,\Psi(u,\rho)}
\ge (2\pi)^{N/2}(-r''(0))^{-N/2}\, c_1^{N/\alpha_1} c_2^{N/\alpha_2}\, (1+\rho)^{-N(\frac{2}{\alpha_1}+\frac{2}{\alpha_2}-1)}\, \mathrm{mes}_N(A_1\cap A_2)\, H_{\alpha_1} H_{\alpha_2}. \tag{2.4.19}
\]

Next, we consider $\Sigma_2$ in (2.4.17). To simplify the notation, we let
\[
I(\mathbf{k},\mathbf{l},\mathbf{k}',\mathbf{l}') := \mathbb{P}\Big(\max_{s\in\Delta^{(1)}_{\mathbf{k}}} X_1(s)>u,\ \max_{t\in\Delta^{(2)}_{\mathbf{l}}} X_2(t)>u,\ \max_{s\in\Delta^{(1)}_{\mathbf{k}'}} X_1(s)>u,\ \max_{t\in\Delta^{(2)}_{\mathbf{l}'}} X_2(t)>u\Big).
\]
For $\mathbf{m} = (m_1,\ldots,m_N) \in \mathbb{Z}^N$, let
\[
H_{\alpha,c}(\mathbf{m}) := H_\alpha\bigg(\frac{c^{1/\alpha}[0,T]^N}{(1+\rho)^{2/\alpha}},\ \frac{c^{1/\alpha}[\mathbf{m}T,(\mathbf{m}+1)T]}{(1+\rho)^{2/\alpha}}\bigg). \tag{2.4.20}
\]
Rewriting $\Sigma_2$ and applying Lemma 2.4.2, we obtain
\[
\begin{aligned}
\Sigma_2 &= \frac{1}{2}\sum_{(\mathbf{k},\mathbf{l})\in\bar{\mathcal{C}}} \Bigg(\sum_{\substack{(\mathbf{k}',\mathbf{l}')\in\bar{\mathcal{C}}\\ \mathbf{k}'=\mathbf{k},\ \mathbf{l}'\neq\mathbf{l}}} + \sum_{\substack{(\mathbf{k}',\mathbf{l}')\in\bar{\mathcal{C}}\\ \mathbf{k}'\neq\mathbf{k},\ \mathbf{l}'=\mathbf{l}}} + \sum_{\substack{(\mathbf{k}',\mathbf{l}')\in\bar{\mathcal{C}}\\ \mathbf{k}'\neq\mathbf{k},\ \mathbf{l}'\neq\mathbf{l}}}\Bigg)\, I(\mathbf{k},\mathbf{l},\mathbf{k}',\mathbf{l}')\\
&= \frac{(1+\rho)^2(1+\varepsilon_3(u))}{4\pi\sqrt{1-\rho^2}}\, u^{-2} \sum_{(\mathbf{k},\mathbf{l})\in\bar{\mathcal{C}}} e^{-\frac{u^2}{1+r(|\tau_{\mathbf{k}\mathbf{l}}|)}} \Bigg[ H_{\alpha_1,c_1}(\mathbf{0})\sum_{\substack{(\mathbf{k}',\mathbf{l}')\in\bar{\mathcal{C}}\\ \mathbf{k}'=\mathbf{k},\ \mathbf{l}'\neq\mathbf{l}}} H_{\alpha_2,c_2}(\mathbf{l}'-\mathbf{l})\\
&\qquad + H_{\alpha_2,c_2}(\mathbf{0})\sum_{\substack{(\mathbf{k}',\mathbf{l}')\in\bar{\mathcal{C}}\\ \mathbf{k}'\neq\mathbf{k},\ \mathbf{l}'=\mathbf{l}}} H_{\alpha_1,c_1}(\mathbf{k}'-\mathbf{k}) + \sum_{\substack{(\mathbf{k}',\mathbf{l}')\in\bar{\mathcal{C}}\\ \mathbf{k}'\neq\mathbf{k},\ \mathbf{l}'\neq\mathbf{l}}} H_{\alpha_1,c_1}(\mathbf{k}'-\mathbf{k})\, H_{\alpha_2,c_2}(\mathbf{l}'-\mathbf{l})\Bigg]\\
&\le \frac{(1+\rho)^2(1+\varepsilon_3(u))}{4\pi\sqrt{1-\rho^2}}\, u^{-2} \sum_{(\mathbf{k},\mathbf{l})\in\bar{\mathcal{C}}} e^{-\frac{u^2}{1+r(|\tau_{\mathbf{k}\mathbf{l}}|)}} \Bigg[ H_{\alpha_1,c_1}(\mathbf{0})\sum_{\mathbf{n}\neq\mathbf{0}} H_{\alpha_2,c_2}(\mathbf{n}) + H_{\alpha_2,c_2}(\mathbf{0})\sum_{\mathbf{m}\neq\mathbf{0}} H_{\alpha_1,c_1}(\mathbf{m})\\
&\qquad + \sum_{\mathbf{m}\neq\mathbf{0},\,\mathbf{n}\neq\mathbf{0}} H_{\alpha_1,c_1}(\mathbf{m})\, H_{\alpha_2,c_2}(\mathbf{n})\Bigg],
\end{aligned} \tag{2.4.21}
\]
where $\varepsilon_3(u) \to 0$ as $u \to \infty$. According to the uniform convergence of (2.4.2), the local error term $o(1)$ for each pair $(\mathbf{k}',\mathbf{l}') \in \bar{\mathcal{C}}$ is bounded above by $\varepsilon_3(u)$. To estimate $H_{\alpha,c}(\cdot)$, we make use of the following lemma, whose proof is again postponed to Section 2.5.

Lemma 2.4.4. Recall $H_{\alpha,c}(\cdot)$ defined in (2.4.20). Let $i_0 = \mathrm{argmax}_{1\le i\le N} |m_i|$. Then there exist positive constants $C_1$ and $T_0$ such that for all $T \ge T_0$,
\[
H_{\alpha,c}(\mathbf{0}) \le C_1 T^N, \tag{2.4.22}
\]
\[
H_{\alpha,c}(\mathbf{m}) \le C_1 T^{N-\frac{1}{2}}, \quad \text{when } |m_{i_0}| = 1, \tag{2.4.23}
\]
\[
H_{\alpha,c}(\mathbf{m}) \le C_1 T^{2N} \exp\bigg(-\frac{c\,\big((|m_{i_0}|-1)T\big)^{\alpha}}{8(1+\rho)^2}\bigg), \quad \text{when } |m_{i_0}| \ge 2. \tag{2.4.24}
\]
Consequently,
\[
\sum_{\mathbf{m}\in\mathbb{Z}^N\setminus\{\mathbf{0}\}} H_{\alpha,c}(\mathbf{m}) \le C_1 T^{N-\frac{1}{2}}. \tag{2.4.25}
\]

Applying Lemmas 2.4.3 and 2.4.4 to the RHS of (2.4.21), we obtain
\[
\begin{aligned}
\Sigma_2 &\le C_0\, \frac{(1+\rho)^2(1+\varepsilon_3(u))}{4\pi\sqrt{1-\rho^2}}\, u^{-2}\, T^{2N-\frac{1}{2}} \sum_{(\mathbf{k},\mathbf{l})\in\bar{\mathcal{C}}} \exp\Big(-\frac{u^2}{1+r(|\tau_{\mathbf{k}\mathbf{l}}|)}\Big)\\
&\le C_0\, (2\pi)^{N/2}(-r''(0))^{-N/2}(1+\rho)^N\, \mathrm{mes}_N(A_1\cap A_2)\, T^{-\frac{1}{2}}\, u^{N(\frac{2}{\alpha_1}+\frac{2}{\alpha_2}-1)}\, \Psi(u,\rho)(1+\varepsilon_4(u)),
\end{aligned} \tag{2.4.26}
\]
where $\varepsilon_4(u) \to 0$ as $u \to \infty$.
0,as u !1 .Byletting u !1 and T !1 successively,wehave limsup u !1 2 u N ( 2 1 + 2 2 1) u;ˆ ) =0 : (2.4.27) Bycombining( 2.4.17 ),( 2.4.19 )and( 2.4.27 ),wehave liminf u !1 P S ( s;t ) 2D f X 1 ( s ) >u;X 2 ( t ) >u ) g u N ( 2 1 + 2 2 1) u;ˆ ) liminf u !1 1 u N ( 2 1 + 2 2 1) u;ˆ ) limsup u !1 2 u N ( 2 1 + 2 2 1) u;ˆ ) (2.4.28) (2 ˇ ) N 2 ( r 00 (0)) N 2 c N 1 1 c N 2 2 (1+ ˆ ) N ( 2 1 + 2 2 1) mes N ( A 1 \ A 2 ) H 1 H 2 : Itisnowclearthat( 2.4.4 )followsfrom( 2.4.15 )and( 2.4.28 ). Nowweprove( 2.4.5 ). Y ( s;t ):= X 1 ( s )+ X 2 ( t ) ; for( s;t ) 2 nD : (2.4.29) 23 For x =( s 1 ;t 1 ) ;y =( s 2 ;t 2 ) 2 nD ,let j x y j = p j s 1 s 2 j 2 + j t 1 t 2 j 2 .Thenwecan verifythat E j Y ( x ) Y ( y ) j 2 C 0 j x y j min( 1 2 ) ; 8 x;y 2 nD : (2.4.30) ByapplyingTheorem8 : 1in[ Pit96 ],weobtainthatthenumeratorof( 2.4.5 )isbounded aboveby P [ ( s;t ) 2 nD f X 1 ( s ) >u;X 2 ( t ) >u ) g P max ( s;t ) 2 nD Y ( s;t ) > 2 u ! C 0 u 1+ 2 N min( 1 2 ) exp u 2 1+max ( s;t ) 2 nD r ( j t s j ) ! : (2.4.31) Since r ( j t s j )= ˆ + 1 2 r 00 (0) j t s j 2 (1+ o (1))and r ( )attainsmaximumonlyatzero,we have max ( s;t ) 2 nD r ( j t s j ) ˆ 1 3 ( r 00 (0)) 2 ( u ) (2.4.32) for u largeenough.So( 2.4.31 )isatmost C 0 u 1+ 2 N min( 1 2 ) exp u 2 1+ ˆ 1 3 ( r 00 (0)) 2 ( u ) ! C 0 u 1+ 2 N min( 1 2 ) exp u 2 1+ ˆ exp 1 3 ( r 00 (0)) 2 ( u ) u 2 (1+ ˆ ) 2 ! = 2 ˇ p 1 ˆ 2 C 0 (1+ ˆ ) 2 u 1+ 2 N min( 1 2 ) r 00 (0) 3(1+ ˆ ) 2 C 2 u;ˆ ) ; (2.4.33) wheretheinequalityholdssince 1 x y 1 x + y x 2 ; 8 x>y .Compare( 2.4.33 )with( 2.4.4 ),itis 24 easytosee( 2.4.5 )holdsifandonlyif 1+ 2 N min( 1 ; 2 ) r 00 (0) 3(1+ ˆ ) 2 C 2 3(1+ ˆ ) 2 r 00 (0) N 2 min( 1 ; 2 ) +1 2 1 2 2 +1 + 1 2 ; (2.4.35) weconclude( 2.4.5 ). ProofofTheorem 2.2.2 . 
From the proof of Theorem 2.2.1, we see that the exponential decay rate of the excursion probability is determined only by the region where the maximum cross correlation is attained. In the case mes_N(A₁ ∩ A₂) = 0 but A₁ ∩ A₂ ≠ ∅, the exponential part, e^{−u²/(1+ρ)}, remains the same. Yet the dimension reduction of A₁ ∩ A₂ does affect the polynomial power of the excursion probability, which is determined by the quantity

h(u) = Σ_{(k,l)∈C} exp{ −u² ( 1/(1 + r(|τ_kl|)) − 1/(1+ρ) ) }

in Lemma 2.4.3. Under the assumptions of Theorem 2.2.2, the set C and the behavior of h(u) change. We will make use of the following lemma, which plays the role of Lemma 2.4.3.

Lemma 2.4.5. Under the assumptions of Theorem 2.2.2, we have

h(u) = (2π)^{M/2} (−r″(0))^{M/2−N} (1+ρ)^{2N−M} T^{−2N} mes_M(A_{1,M} ∩ A_{2,M}) u^{M+N(2/α₁+2/α₂−2)} (1+o(1)), as u → ∞. (2.4.36)

Moreover, if we replace C with C̄ defined in (2.4.7), then the above statement still holds.

The rest of the proof of Theorem 2.2.2 is the same as that of Theorem 2.2.1 and is omitted here.

2.5 Proof of Lemmas

For proving Lemma 2.4.1, we will make use of the following

Lemma 2.5.1. Let s_u and t_u be two R^N-valued functions of u and let τ_u := t_u − s_u. For any compact rectangles S and T in R^N, define

ξ_u(s) := u( X₁(s_u + u^{−2/α₁} s) − u ) + x, ∀ s ∈ S;
η_u(t) := u( X₂(t_u + u^{−2/α₂} t) − u ) + y, ∀ t ∈ T, (2.5.1)

and for any t ∈ R^N, let

ξ(t) := √c₁ χ₁(t) − c₁|t|^{α₁}/(1+ρ), η(t) := √c₂ χ₂(t) − c₂|t|^{α₂}/(1+ρ), (2.5.2)

where χ₁(t), χ₂(t) are two independent fractional Brownian motions with indices α₁/2 and α₂/2, respectively. Then the finite-dimensional distributions (abbr. f.d.d.) of (ξ_u(·), η_u(·)), given X₁(s_u) = u − x/u, X₂(t_u) = u − y/u, converge uniformly to the f.d.d. of (ξ(·), η(·)) for all s_u and t_u that satisfy |τ_u| ≤ C√(log u)/u. Furthermore, as u → ∞,

P( max_{s∈S} ξ_u(s) > x, max_{t∈T} η_u(t) > y | X₁(s_u) = u − x/u, X₂(t_u) = u − y/u ) → P( max_{s∈S} ξ(s) > x, max_{t∈T} η(t) > y ), (2.5.3)

where the convergence is uniform for all s_u and t_u that satisfy |τ_u| ≤ C√(log u)/u.

Proof.
First, we prove the uniform convergence of the finite-dimensional distributions. Given X₁(s_u) = u − x/u, X₂(t_u) = u − y/u, the distribution of the bivariate random field (ξ_u(·), η_u(·)) is still Gaussian. Thanks to the following lemma (whose proof will be given at the end of this section), it suffices to prove the uniform convergence of the conditional mean and conditional variance.

Lemma 2.5.2. Let X(u, τ_u) = (X₁(u,τ_u), …, X_n(u,τ_u))^⊤ be a Gaussian random vector with mean μ(u,τ_u) = (μ₁(u,τ_u), …, μ_n(u,τ_u))^⊤ and covariance matrix Σ(u,τ_u) with entries σ_ij(u,τ_u) = Cov(X_i(u,τ_u), X_j(u,τ_u)), i, j = 1, 2, …, n. Similarly, let X = (X₁, …, X_n)^⊤ be a Gaussian random vector with mean μ = (μ₁, …, μ_n) and covariance matrix Σ = (σ_ij)_{i,j=1}^n. Let F_u(·) and F(·) be the distribution functions of X(u,τ_u) and X, respectively. If

lim_{u→∞} max_{τ_u} |μ_j(u,τ_u) − μ_j| = 0, lim_{u→∞} max_{τ_u} |σ_ij(u,τ_u) − σ_ij| = 0, i, j = 1, 2, …, n, (2.5.4)

then for any x ∈ R^n,

lim_{u→∞} max_{τ_u} |F_u(x) − F(x)| = 0. (2.5.5)

We continue with the proof of Lemma 2.5.1 and postpone the proof of Lemma 2.5.2 to the end of this section. Recall that, for two random vectors X, Y ∈ R^m, their covariance is defined as Cov(X,Y) := E[(X − EX)(Y − EY)^⊤] and the variance matrix of X is defined as Var(X) := Cov(X,X). The conditional mean of (ξ_u(t), η_u(t))^⊤, given X₁(s_u) = u − x/u, X₂(t_u) = u − y/u, is

E[ (ξ_u(t), η_u(t))^⊤ | X₁(s_u) = u − x/u, X₂(t_u) = u − y/u ]
= E[ (ξ_u(t), η_u(t))^⊤ ] + Cov( (ξ_u(t), η_u(t))^⊤, (X₁(s_u), X₂(t_u))^⊤ ) [ Var( (X₁(s_u), X₂(t_u))^⊤ ) ]^{−1} ( u − x/u, u − y/u )^⊤
= ( −u² + x, −u² + y )^⊤ + [u/(1 − r²(|τ_u|))] ( r₁₁(s_u + u^{−2/α₁} t, s_u)  r(|τ_u − u^{−2/α₁} t|) ; r(|τ_u + u^{−2/α₂} t|)  r₂₂(t_u + u^{−2/α₂} t, t_u) ) ( 1  −r(|τ_u|) ; −r(|τ_u|)  1 ) ( u − x/u, u − y/u )^⊤
=: ( a₁(u), a₂(u) )^⊤, (2.5.6)

where

a₁(u) = −[ u²(1 − r₁₁(s_u + u^{−2/α₁} t, s_u)) − u²( r(|τ_u − u^{−2/α₁} t|) − r(|τ_u|) ) ] / (1 + r(|τ_u|))
+ ( x − y r(|τ_u|) )( 1 − r₁₁(s_u + u^{−2/α₁} t, s_u) ) / (1 − r²(|τ_u|))
+ ( y − x r(|τ_u|) )( r(|τ_u|) − r(|τ_u − u^{−2/α₁} t|) ) / (1 − r²(|τ_u|)) (2.5.7)

and

a₂(u) = −[ u²(1 − r₂₂(t_u + u^{−2/α₂} t, t_u)) − u²( r(|τ_u + u^{−2/α₂} t|) − r(|τ_u|) ) ] / (1 + r(|τ_u|))
+ ( y − x r(|τ_u|) )( 1 − r₂₂(t_u + u^{−2/α₂} t, t_u) ) / (1 − r²(|τ_u|))
+ ( x − y r(|τ_u|) )( r(|τ_u|) − r(|τ_u + u^{−2/α₂} t|) ) / (1 − r²(|τ_u|)). (2.5.8)

Applying the mean value theorem twice, we see that for u large enough and i = 1, 2,

| r(|τ_u ± u^{−2/α_i} t|) − r(|τ_u|) | ≤ |u^{−2/α_i} t| max_{s between |τ_u| and |τ_u ± u^{−2/α_i} t|} |r′(s)|
≤ |u^{−2/α_i} t| max_{|s| ≤ 2C√(log u)/u} |r′(s)| ≤ |u^{−2/α_i} t| max_{|s| ≤ 2C√(log u)/u} |s| max_{|t′| ≤ |s|} |r″(t′)|
≤ 2C|t| (√(log u)/u) u^{−2/α_i} max_{|t′| ≤ 2C√(log u)/u} |r″(t′)| ≤ 4C |r″(0)| |t| √(log u) u^{−1−2/α_i}, (2.5.9)

where the second inequality holds because u^{−2/α_i} = o(√(log u)/u) as u → ∞, and the last inequality holds since r″(·) is continuous in a neighborhood of zero. Thus (2.5.9) implies that, as u → ∞,

u² | r(|τ_u ± u^{−2/α_i} t|) − r(|τ_u|) | ≤ 4C |r″(0)| |t| √(log u) u^{1−2/α_i} → 0, (2.5.10)

where the convergence is uniform for all s_u and t_u that satisfy |τ_u| ≤ C√(log u)/u. We also notice that for i = 1, 2 and all s ∈ R^N,

1 − r_ii(s + u^{−2/α_i} t, s) = c_i u^{−2} |t|^{α_i} + o(u^{−2}), as u → ∞. (2.5.11)

By (2.5.6), (2.5.10) and (2.5.11), we conclude that, as u → ∞,

E[ (ξ_u(t), η_u(t))^⊤ | X₁(s_u) = u − x/u, X₂(t_u) = u − y/u ] → ( −c₁|t|^{α₁}/(1+ρ), −c₂|t|^{α₂}/(1+ρ) )^⊤, (2.5.12)

where the convergence is uniform w.r.t. s_u and t_u satisfying |τ_u| ≤ C√(log u)/u.

Next, we consider the conditional covariance matrix of (ξ_u(t) − ξ_u(s), η_u(t) − η_u(s))^⊤.
Var( (ξ_u(t) − ξ_u(s), η_u(t) − η_u(s))^⊤ | X₁(s_u), X₂(t_u) )
= Var( (ξ_u(t) − ξ_u(s), η_u(t) − η_u(s))^⊤ )
− Cov( (ξ_u(t) − ξ_u(s), η_u(t) − η_u(s))^⊤, (X₁(s_u), X₂(t_u))^⊤ ) [ Var( (X₁(s_u), X₂(t_u))^⊤ ) ]^{−1} Cov( (ξ_u(t) − ξ_u(s), η_u(t) − η_u(s))^⊤, (X₁(s_u), X₂(t_u))^⊤ )^⊤. (2.5.13)

Let h_u(t, s) := r(|τ_u + u^{−2/α₂} t − u^{−2/α₁} s|). Applying (2.5.10) and (2.5.11), we obtain

Var( (ξ_u(t) − ξ_u(s), η_u(t) − η_u(s))^⊤ )
= ( 2u²(1 − r₁₁(s_u + u^{−2/α₁} s, s_u + u^{−2/α₁} t))   u²( h_u(t,t) − h_u(s,t) − h_u(t,s) + h_u(s,s) ) ; u²( h_u(t,t) − h_u(s,t) − h_u(t,s) + h_u(s,s) )   2u²(1 − r₂₂(t_u + u^{−2/α₂} s, t_u + u^{−2/α₂} t)) )
= ( 2c₁|t−s|^{α₁}(1+o(1))   o(1) ; o(1)   2c₂|t−s|^{α₂}(1+o(1)) ), (2.5.14)

where o(1) converges to zero uniformly w.r.t. τ_u satisfying |τ_u| ≤ C√(log u)/u, as u → ∞. Also, we have

Cov( (ξ_u(t) − ξ_u(s), η_u(t) − η_u(s))^⊤, (X₁(s_u), X₂(t_u))^⊤ )
= ( u( r₁₁(s_u + u^{−2/α₁} t, s_u) − r₁₁(s_u + u^{−2/α₁} s, s_u) )   u( r(|τ_u − u^{−2/α₁} t|) − r(|τ_u − u^{−2/α₁} s|) ) ; u( r(|τ_u + u^{−2/α₂} t|) − r(|τ_u + u^{−2/α₂} s|) )   u( r₂₂(t_u + u^{−2/α₂} t, t_u) − r₂₂(t_u + u^{−2/α₂} s, t_u) ) )
= ( o(1)  o(1) ; o(1)  o(1) ), (2.5.15)

as u → ∞; and

[ Var( (X₁(s_u), X₂(t_u))^⊤ ) ]^{−1} = [1/(1 − r²(|τ_u|))] ( 1  −r(|τ_u|) ; −r(|τ_u|)  1 ). (2.5.16)

By (2.5.13)–(2.5.16), we conclude that, as u → ∞,

Var( (ξ_u(t) − ξ_u(s), η_u(t) − η_u(s))^⊤ | X₁(s_u), X₂(t_u) ) → ( 2c₁|t−s|^{α₁}  0 ; 0  2c₂|t−s|^{α₂} ), (2.5.17)

where the convergence is uniform w.r.t. τ_u satisfying |τ_u| ≤ C√(log u)/u. Hence, the uniform convergence of the f.d.d. in Lemma 2.5.1 follows from (2.5.12), (2.5.17) and Lemma 2.5.2.
Now we prove the second part of Lemma 2.5.1. The continuous mapping theorem (see, e.g., [Bil68], p.30) can be used to prove that (2.5.3) holds when s_u and t_u are fixed. Since we need to prove uniform convergence w.r.t. s_u and t_u, we use a discretization method instead. Let

f(u, x, y) := P( max_{s∈S} ξ_u(s) > x, max_{t∈T} η_u(t) > y | X₁(s_u) = u − x/u, X₂(t_u) = u − y/u ) (2.5.18)

and

f(x, y) := P( max_{s∈S} ξ(s) > x, max_{t∈T} η(t) > y ). (2.5.19)

Without loss of generality, suppose that S = [a, b]^N and T = [c, d]^N, where a < b and c < d. For m, n ≥ 1, let S_m ⊂ S and T_n ⊂ T be finite grids whose meshes tend to zero as m → ∞ and n → ∞, respectively. Then f(u, x, y) is bounded from below by

f_{m,n}(u, x, y) := P( max_{s ∈ S_m} ξ_u(s) > x, max_{t ∈ T_n} η_u(t) > y | X₁(s_u) = u − x/u, X₂(t_u) = u − y/u ) (2.5.21)

and is bounded from above by g_{m,n}(u, x, y), which is defined as

P( max_{s ∈ S_m} ξ_u(s) > x − ε, max_{t ∈ T_n} η_u(t) > y − ε | X₁(s_u) = u − x/u, X₂(t_u) = u − y/u )
+ P( max_{s ∈ S} ξ_u(s) > x, max_{s ∈ S_m} ξ_u(s) ≤ x − ε | X₁(s_u) = u − x/u, X₂(t_u) = u − y/u )
+ P( max_{t ∈ T} η_u(t) > y, max_{t ∈ T_n} η_u(t) ≤ y − ε | X₁(s_u) = u − x/u, X₂(t_u) = u − y/u )
=: f_{m,n}(u, x − ε, y − ε) + s_{m,n}(u, x, y) + t_{m,n}(u, x, y), (2.5.22)

where ε > 0 is any small constant. Let

f_{m,n}(x, y) := P( max_{s ∈ S_m} ξ(s) > x, max_{t ∈ T_n} η(t) > y ). (2.5.23)

Since the finite-dimensional distributions of (ξ_u(·), η_u(·)) converge uniformly to those of (ξ(·), η(·)), we have

lim_{u→∞} max_{|τ_u| ≤ C√(log u)/u} | f_{m,n}(u, x, y) − f_{m,n}(x, y) | = 0. (2.5.24)

The continuity of the trajectories of (ξ(·), η(·)) yields

lim_{m→∞, n→∞} f_{m,n}(x, y) = f(x, y). (2.5.25)

By (2.5.24) and (2.5.25), we conclude

lim_{m→∞, n→∞} lim_{u→∞} max_{|τ_u| ≤ C√(log u)/u} | f_{m,n}(u, x, y) − f(x, y) | = 0. (2.5.26)

Let us consider the conditional probability s_{m,n}(u, x, y) in (2.5.22). Denoting by δ the mesh of S_m, we have

s_{m,n}(u, x, y) ≤ P( max_{|s−t| ≤ δ} | ξ_u(s) − ξ_u(t) | > ε | X₁(s_u) = u − x/u, X₂(t_u) = u − y/u )
≤ (1/ε) E[ max_{|s−t| ≤ δ} | ξ_u(s) − ξ_u(t) | | X₁(s_u) = u − x/u, X₂(t_u) = u − y/u ]
= (1/ε) E^{P_u} max_{|s−t| ≤ δ} | x(s) − x(t) |, (2.5.27)

where P_u is the probability measure on (C(S), B(C(S))) defined as

P_u(A) := P( ξ_u(·) ∈ A | X₁(s_u) = u − x/u, X₂(t_u) = u − y/u ), for all A ∈ B(C(S)),

and x(·) is the coordinate random element on (C(S), B(C(S)), P_u), i.e., x(t, ω) = ω(t), ∀ ω ∈ C(S) and t ∈ S. Consider the canonical metric

d_u(s, t) := [ E^{P_u} | x(s) − x(t) |² ]^{1/2} = [ E( | ξ_u(s) − ξ_u(t) |² | X₁(s_u) = u − x/u, X₂(t_u) = u − y/u ) ]^{1/2}.

By (2.5.17), we have

d_u(s, t) ≤ 2√c₁ |s − t|^{α₁/2} (2.5.28)

for u large enough and all s_u, t_u such that |τ_u| ≤ C√(log u)/u. If we choose δ = ( ε / (2√c₁) )^{2/α₁}, then d_u(s, t) < ε for all t, s ∈ S with |t − s| < δ. Hence

N_{d_u}(S, ε) ≤ C₂ ε^{−2N/α₁}, (2.5.29)

where N_{d_u}(S, ε) denotes the minimum number of d_u-balls with radius ε that are needed to cover S. By Dudley's theorem (see, e.g., Theorem 1.3.3 in [AT07]) and (2.5.28), we have

E^{P_u} max_{|s−t| ≤ δ} | x(s) − x(t) | ≤ K ∫_0^{2√c₁ δ^{α₁/2}} √( log N_{d_u}(S, ε) ) dε, (2.5.30)

where K < ∞ is a constant (which does not depend on δ) and, thanks to (2.5.29), the last integral goes to 0 as δ → 0 (or, equivalently, as m → ∞, n → ∞). By (2.5.27) and (2.5.30), we conclude that

lim_{m→∞, n→∞} limsup_{u→∞} max_{|τ_u| ≤ C√(log u)/u} | s_{m,n}(u, x, y) | = 0. (2.5.31)

A similar argument shows that

lim_{m→∞, n→∞} limsup_{u→∞} max_{|τ_u| ≤ C√(log u)/u} | t_{m,n}(u, x, y) | = 0. (2.5.32)

Since

| f(u, x, y) − f(x, y) | ≤ | f_{m,n}(u, x, y) − f(x, y) | + | g_{m,n}(u, x, y) − f(x, y) |
≤ | f_{m,n}(u, x, y) − f(x, y) | + | f_{m,n}(u, x − ε, y − ε) − f(x − ε, y − ε) |
+ | f(x − ε, y − ε) − f(x, y) | + | s_{m,n}(u, x, y) | + | t_{m,n}(u, x, y) |, (2.5.33)

we combine (2.5.26), (2.5.31) and (2.5.32) to obtain

limsup_{u→∞} max_{|τ_u| ≤ C√(log u)/u} | f(u, x, y) − f(x, y) | ≤ | f(x − ε, y − ε) − f(x, y) |
+ lim_{m→∞, n→∞} limsup_{u→∞} max_{|τ_u| ≤ C√(log u)/u} [ | f_{m,n}(u, x, y) − f(x, y) | + | s_{m,n}(u, x, y) | + | t_{m,n}(u, x, y) | + | f_{m,n}(u, x − ε, y − ε) − f(x − ε, y − ε) | ]
= | f(x − ε, y − ε) − f(x, y) |.

Since the last term → 0 as ε ↓ 0, we have completed the proof of the second part of the lemma.

Now we are ready to prove the main lemmas in Section 2.4.

Proof of Lemma 2.4.1.
Let φ(a, b) be the density of (X₁(s_u), X₂(t_u))^⊤, i.e.,

φ(a, b) = [1/(2π √(1 − r²(|τ_u|)))] exp{ −(1/2) ( a² − 2 r(|τ_u|) ab + b² ) / ( 1 − r²(|τ_u|) ) }. (2.5.34)

By conditioning and a change of variables, the LHS of (2.4.1) becomes

P( max_{s ∈ s_u + u^{−2/α₁} S} X₁(s) > u, max_{t ∈ t_u + u^{−2/α₂} T} X₂(t) > u )
= ∫_{R²} P( max_{s ∈ s_u + u^{−2/α₁} S} X₁(s) > u, max_{t ∈ t_u + u^{−2/α₂} T} X₂(t) > u | X₁(s_u) = u − x/u, X₂(t_u) = u − y/u ) φ( u − x/u, u − y/u ) u^{−2} dx dy
= [1/(2π √(1 − r²(|τ_u|)))] u^{−2} exp( −u²/(1 + r(|τ_u|)) ) ∫_{R²} f(u, x, y) φ̃(u, x, y) dx dy, (2.5.35)

where f(u, x, y) is defined in (2.5.18) with ξ_u(·), η_u(·) defined in (2.5.1), and where

φ̃(u, x, y) := exp{ −[1/(2(1 − r²(|τ_u|)))] [ (x² + y²)/u² − 2(1 − r(|τ_u|))(x + y) − 2 r(|τ_u|) x y / u² ] }.

Since max_{|τ_u| ≤ C√(log u)/u} | r(|τ_u|) − ρ | → 0 as u → ∞, it is easy to check that

max_{|τ_u| ≤ C√(log u)/u} | φ̃(u, x, y) − e^{(x+y)/(1+ρ)} | → 0, as u → ∞. (2.5.36)

Recall H_α(·) defined in (2.2.1) and f(x, y) defined in (2.5.19). Since ξ(·) and η(·) are independent, and

{ ξ(t), t ∈ R^N } =_d { (1+ρ) [ χ₁( c₁^{1/α₁} (1+ρ)^{−2/α₁} t ) − | c₁^{1/α₁} (1+ρ)^{−2/α₁} t |^{α₁} ], t ∈ R^N },
{ η(t), t ∈ R^N } =_d { (1+ρ) [ χ₂( c₂^{1/α₂} (1+ρ)^{−2/α₂} t ) − | c₂^{1/α₂} (1+ρ)^{−2/α₂} t |^{α₂} ], t ∈ R^N },

where =_d means equality of all finite-dimensional distributions, we have

∫_{R²} f(x, y) e^{(x+y)/(1+ρ)} dx dy = ∫_R e^{x/(1+ρ)} P( max_{s∈S} ξ(s) > x ) dx ∫_R e^{y/(1+ρ)} P( max_{t∈T} η(t) > y ) dy
= (1+ρ)² H_{α₁}( c₁^{1/α₁} S / (1+ρ)^{2/α₁} ) H_{α₂}( c₂^{1/α₂} T / (1+ρ)^{2/α₂} ). (2.5.37)

By (2.5.35) and (2.5.37), to conclude the lemma, it suffices to prove

lim_{u→∞} ∫_{R²} max_{|τ_u| ≤ C√(log u)/u} | f(u, x, y) φ̃(u, x, y) − f(x, y) e^{(x+y)/(1+ρ)} | dx dy = 0. (2.5.38)

Firstly, applying Lemma 2.5.1 together with (2.5.36), we have

max_{|τ_u| ≤ C√(log u)/u} | f(u, x, y) φ̃(u, x, y) − f(x, y) e^{(x+y)/(1+ρ)} | → 0, as u → ∞. (2.5.39)

Secondly, as in [LP00], we can find an integrable dominating function g ∈ L¹(R²) such that, for u large enough,

max_{|τ_u| ≤ C√(log u)/u} | f(u, x, y) φ̃(u, x, y) − f(x, y) e^{(x+y)/(1+ρ)} | ≤ g(x, y). (2.5.40)

Therefore, (2.5.38) follows from the dominated convergence theorem. This finishes the proof.
Proof of Lemma 2.4.2. We claim that for any compact sets S and T, the identity

H_α(S) + H_α(T) − H_α(S ∪ T) = H_α(S; T) (2.5.41)

holds. Indeed, if we let X = sup_{t∈S}( χ(t) − |t|^α ) and Y = sup_{t∈T}( χ(t) − |t|^α ), then

H_α(S) + H_α(T) − H_α(S ∪ T) = E e^X + E e^Y − E e^{max(X,Y)} = E[ e^Y 1{X ≥ Y} ] + E[ e^X 1{X < Y} ] = E e^{min(X,Y)} = H_α(S; T).

Now define the events

A = { max_{s ∈ s_u + u^{−2/α₁} T₁} X₁(s) > u }, B = { max_{s ∈ s_u + u^{−2/α₁} T₂} X₁(s) > u },
C = { max_{t ∈ t_u + u^{−2/α₂} T₁} X₂(t) > u }, D = { max_{t ∈ t_u + u^{−2/α₂} T₃} X₂(t) > u }.

It is easy to check that the LHS of (2.4.2) is equal to

P(A ∩ B ∩ C ∩ D) = [ P(A ∩ C) + P(B ∩ C) − P((A ∪ B) ∩ C) ]
+ [ P(A ∩ D) + P(B ∩ D) − P((A ∪ B) ∩ D) ]
− [ P(A ∩ (C ∪ D)) + P(B ∩ (C ∪ D)) − P((A ∪ B) ∩ (C ∪ D)) ]. (2.5.42)

Let

R(u) = [ (1+ρ)² / ( 2π √(1−ρ²) ) ] u^{−2} exp( −u² / (1 + r(|τ_u|)) ) and q_{α,c} = (1+ρ)^{2/α} c^{−1/α}.

By Lemma 2.4.1, we have

P(A ∩ C) = R(u) H_{α₁}( T₁ / q_{α₁,c₁} ) H_{α₂}( T₁ / q_{α₂,c₂} ) (1 + ε₁(u)),
P(B ∩ C) = R(u) H_{α₁}( T₂ / q_{α₁,c₁} ) H_{α₂}( T₁ / q_{α₂,c₂} ) (1 + ε₂(u)),
P((A ∪ B) ∩ C) = R(u) H_{α₁}( (T₁ ∪ T₂) / q_{α₁,c₁} ) H_{α₂}( T₁ / q_{α₂,c₂} ) (1 + ε₃(u)),

where, for i = 1, 2, 3, ε_i(u) → 0 uniformly w.r.t. τ_u satisfying |τ_u| ≤ C√(log u)/u, as u → ∞. These, together with (2.5.41), imply

P(A ∩ C) + P(B ∩ C) − P((A ∪ B) ∩ C) = R(u) H_{α₂}( T₁ / q_{α₂,c₂} ) H_{α₁}( T₁ / q_{α₁,c₁} ; T₂ / q_{α₁,c₁} ) (1 + o(1)). (2.5.43)

Similarly, we have

P(A ∩ D) + P(B ∩ D) − P((A ∪ B) ∩ D) = R(u) H_{α₂}( T₃ / q_{α₂,c₂} ) H_{α₁}( T₁ / q_{α₁,c₁} ; T₂ / q_{α₁,c₁} ) (1 + o(1)) (2.5.44)

and

P(A ∩ (C ∪ D)) + P(B ∩ (C ∪ D)) − P((A ∪ B) ∩ (C ∪ D)) = R(u) H_{α₂}( (T₁ ∪ T₃) / q_{α₂,c₂} ) H_{α₁}( T₁ / q_{α₁,c₁} ; T₂ / q_{α₁,c₁} ) (1 + o(1)). (2.5.45)

By (2.5.42)–(2.5.45), we have

P(A ∩ B ∩ C ∩ D) = R(u) H_{α₁}( T₁ / q_{α₁,c₁} ; T₂ / q_{α₁,c₁} ) H_{α₂}( T₁ / q_{α₂,c₂} ; T₃ / q_{α₂,c₂} ) (1 + o(1)),

which concludes the lemma.

Proof of Lemma 2.4.3.
Let f(|t|) = 1/(1 + r(|t|)). Recall τ_kl defined in (2.4.9) and that |τ_kl| ≤ 2δ(u) when u is large. By Taylor's expansion,

f(|τ_kl|) = f(0) + (1/2) f″(0) |τ_kl|² (1 + ε_kl(u)),

where f(0) = 1/(1+ρ), f″(0) = −r″(0)/(1+ρ)² and, as u → ∞, ε_kl(u) converges to zero uniformly w.r.t. all (k, l) ∈ C. Therefore, for any ε > 0, we have

Σ_{(k,l)∈C} e^{−(1/2) f″(0)(1+ε) u² |τ_kl|²} ≤ h(u) ≤ Σ_{(k,l)∈C} e^{−(1/2) f″(0)(1−ε) u² |τ_kl|²} (2.5.46)

when u is large enough. For a > 0, let

h(u, a) := Σ_{(k,l)∈C} e^{−a u² |τ_kl|²}. (2.5.47)

In order to prove (2.4.12), it suffices to prove that

lim_{u→∞} u^N d₁^N(u) d₂^N(u) h(u, a) = (π/a)^{N/2} mes_N(A₁ ∩ A₂). (2.5.48)

To this end, we write

u^N d₁^N(u) d₂^N(u) h(u, a) = (1/u^N) Σ_{(k,l)∈C} e^{−a Σ_{j=1}^N ( l_j u d₂(u) − k_j u d₁(u) )²} ( u d₁(u) )^N ( u d₂(u) )^N. (2.5.49)

Let

p(u) := (1/u^N) Σ_{(k,l)∈C} min_{(s,t) ∈ uΔ^{(1)}_k × uΔ^{(2)}_l} e^{−a|t−s|²} ( u d₁(u) )^N ( u d₂(u) )^N,
q(u) := (1/u^N) Σ_{(k,l)∈C} max_{(s,t) ∈ uΔ^{(1)}_k × uΔ^{(2)}_l} e^{−a|t−s|²} ( u d₁(u) )^N ( u d₂(u) )^N.

It follows from (2.5.49) that

p(u) ≤ u^N d₁^N(u) d₂^N(u) h(u, a) ≤ q(u), (2.5.50)

and

p(u) ≤ (1/u^N) ∫∫_{s ∈ uA₁, t ∈ uA₂, |t−s| ≤ C√(log u)} e^{−a|t−s|²} dt ds ≤ q(u). (2.5.51)

Observe that

(1/u^N) ∫∫_{s ∈ uA₁, t ∈ uA₂, |t−s| ≤ C√(log u)} e^{−a|t−s|²} dt ds
= (1/u^N) ∫∫_{y ∈ uA₁, x+y ∈ uA₂, |x| ≤ C√(log u)} e^{−a|x|²} dx dy
= (1/u^N) ∫_{|x| ≤ C√(log u)} e^{−a|x|²} dx ∫_{R^N} 1{ y ∈ uA₁ ∩ (uA₂ − x) } dy
= ∫_{|x| ≤ C√(log u)} e^{−a|x|²} dx ∫_{R^N} 1{ z ∈ A₁ ∩ (A₂ − x/u) } dz
→ mes_N(A₁ ∩ A₂) ∫_{R^N} e^{−a|x|²} dx = (π/a)^{N/2} mes_N(A₁ ∩ A₂), (2.5.52)

as u → ∞, where the convergence holds by the dominated convergence theorem. Indeed, ∫_{R^N} 1{ z ∈ A₁ ∩ (A₂ − x/u) } dz is bounded by max_{|ε| ≤ 1} mes_N(A₁ ∩ (A₂ − ε)) uniformly for |x| ≤ C√(log u) when u is large enough.

It follows from (2.5.50)–(2.5.51) that, for concluding (2.5.48), it remains to verify

D(u) := q(u) − p(u) → 0, as u → ∞. (2.5.53)

Define

D̂ := { (s, t) ∈ A₁ × A₂ : |t − s| ≤ δ(u) + √N d₁(u) + √N d₂(u) }. (2.5.54)

By the definition of C in (2.4.7), we see that ∪_{(k,l)∈C} Δ^{(1)}_k × Δ^{(2)}_l ⊂ D̂. Since d₁(u) = o(δ(u)) and d₂(u) = o(δ(u)) as u → ∞, the set D̂ is a subset of D̃ := { (s, t) ∈ A₁ × A₂ : |t − s| ≤ 2δ(u) } when u is large.

Write D(u) in (2.5.53) as a sum over (k, l) ∈ C. To estimate the cardinality of C, we notice that

mes_{2N}(D̃) = ∫∫_{s ∈ A₁, t ∈ A₂} 1{ |t − s| ≤ 2δ(u) } ds dt (2.5.55)
= ∫_{|x| ≤ 2δ(u)} ∫_{y ∈ A₁ ∩ (A₂ − x)} dy dx ≤ K δ(u)^N (2.5.56)

for all u large enough, where K = 2^{N+1} π^{N/2} Γ^{−1}(N/2) max_{|ε| ≤ 1} mes_N(A₁ ∩ (A₂ − ε)). Hence, for large u, the number of summands in (2.5.49) is bounded by

#{ (k, l) : (k, l) ∈ C } ≤ mes_{2N}(D̃) / mes_{2N}( Δ^{(1)}_k × Δ^{(2)}_l ) ≤ K δ(u)^N d₁^{−N}(u) d₂^{−N}(u). (2.5.57)

Next, by applying the inequality e^{−x} − e^{−y} ≤ y − x for y ≥ x > 0 to each summand in D(u), we obtain

max_{(s,t) ∈ uΔ^{(1)}_k × uΔ^{(2)}_l} e^{−a|t−s|²} − min_{(s,t) ∈ uΔ^{(1)}_k × uΔ^{(2)}_l} e^{−a|t−s|²}
≤ a ( max_{(s,t) ∈ uΔ^{(1)}_k × uΔ^{(2)}_l} |t−s|² − min_{(s,t) ∈ uΔ^{(1)}_k × uΔ^{(2)}_l} |t−s|² )
= a max ( |t−s| + |t₁−s₁| )( |t−s| − |t₁−s₁| ), (2.5.58)

where the last maximum is taken over (s, t, s₁, t₁) ∈ uΔ^{(1)}_k × uΔ^{(2)}_l × uΔ^{(1)}_k × uΔ^{(2)}_l. Since |t−s| ≤ 2uδ(u) for all (t, s) ∈ uΔ^{(1)}_k × uΔ^{(2)}_l when u is large, the inequality |t−s| − |t₁−s₁| ≤ |t−t₁| + |s−s₁| implies that (2.5.58) is at most

4a√N u² δ(u) ( d₁(u) + d₂(u) ) (2.5.59)

when u is large enough. By (2.5.59) and (2.5.57), we can verify that

D(u) ≤ (1/u^N) K δ(u)^N d₁^{−N}(u) d₂^{−N}(u) · 4a√N u² δ(u) ( d₁(u) + d₂(u) ) ( u d₁(u) )^N ( u d₂(u) )^N
≤ C₀ (log u)^{(N+1)/2} ( u^{1−2/α₁} + u^{1−2/α₂} ) → 0, as u → ∞.

Therefore (2.5.48) holds. Similarly, we can check that the same statement holds when the set C is changed to C̄.

Proof of Lemma 2.4.4.
Inequality (2.4.22) holds immediately by Lemma 6.2 in [Pit96]. Hence we only consider the case m ≠ 0. Suppose that { X(t), t ∈ R^N } is a real-valued continuous Gaussian field with E[X(t)] = 0 and covariance function r(t) satisfying r(t) = 1 − |t|^α + o(|t|^α) for a constant α ∈ (0, 2). Applying Lemma 6.1 in [Pit96], we see that for any S > 0,

P( max_{t ∈ u^{−2/α}[0,S]^N} X(t) > u, max_{t ∈ u^{−2/α}[mS,(m+1)S]} X(t) > u )
= P( max_{t ∈ u^{−2/α}[0,S]^N} X(t) > u ) + P( max_{t ∈ u^{−2/α}[mS,(m+1)S]} X(t) > u ) − P( max_{t ∈ u^{−2/α}([0,S]^N ∪ [mS,(m+1)S])} X(t) > u )
= [ H_α([0,S]^N) + H_α([mS,(m+1)S]) − H_α([0,S]^N ∪ [mS,(m+1)S]) ] (1/(√(2π) u)) e^{−u²/2} (1+o(1))
= H_α( [0,S]^N ; [mS,(m+1)S] ) (1/(√(2π) u)) e^{−u²/2} (1+o(1)), as u → ∞, (2.5.60)

where the last equality holds thanks to (2.5.41).

On the other hand, by applying Lemma 6.3 in [Pit96] and the inequality inf_{s ∈ [0,1]^N, t ∈ [m, m+1]} |s − t| ≥ |m_{i₀}| − 1 (recall that i₀ is defined in Lemma 2.4.4), we have

P( max_{t ∈ u^{−2/α}[0,S]^N} X(t) > u, max_{t ∈ u^{−2/α}[mS,(m+1)S]} X(t) > u ) ≤ C₀ S^{2N} (1/(√(2π) u)) e^{−u²/2} exp( −(1/8)(|m_{i₀}| − 1)^α S^α ) (2.5.61)

for all u large enough. It follows from (2.5.60) and (2.5.61) that

H_α( [0,S]^N ; [mS,(m+1)S] ) ≤ C₀ S^{2N} exp( −(1/8)(|m_{i₀}| − 1)^α S^α ), (2.5.62)

which implies (2.4.24) by letting S = c^{1/α} T (1+ρ)^{−2/α}.
When |m_{i₀}| = 1, the above upper bound is not sharp. Instead, we derive (2.4.23) in Lemma 2.4.4 as follows. For definiteness, suppose that i₀ = N and m_N = 1. By applying Lemmas 6.1–6.3 in [Pit96], we have

P( max_{t ∈ u^{−2/α}[0,S]^N} X(t) > u, max_{t ∈ u^{−2/α}[mS,(m+1)S]} X(t) > u )
≤ P( max_{t ∈ u^{−2/α}( ∏_{j=1}^{N−1} [m_j S, (m_j+1)S] × [S, S+√S] )} X(t) > u )
+ P( max_{t ∈ u^{−2/α}[0,S]^N} X(t) > u, max_{t ∈ u^{−2/α}( ∏_{j=1}^{N−1} [m_j S, (m_j+1)S] × [S+√S, 2S+√S] )} X(t) > u )
≤ C₀ S^{N−1/2} (1/(√(2π) u)) e^{−u²/2} + C₀ S^{2N} (1/(√(2π) u)) e^{−u²/2} e^{−(1/8) S^{α/2}}
≤ 2C₀ S^{N−1/2} (1/(√(2π) u)) e^{−u²/2} (2.5.63)

for u and S large. Hence, when |m_{i₀}| = 1, we have

H_α( [0,S]^N ; [mS,(m+1)S] ) ≤ C₀ S^{N−1/2} (2.5.64)

for large S. This implies (2.4.23) by letting S = c^{1/α} T (1+ρ)^{−2/α}.

Notice that

#{ m ∈ Z^N : max_{1≤i≤N} |m_i| = k } = (2k+1)^N − (2k−1)^N, k = 1, 2, …. (2.5.65)

By (2.4.23), (2.4.24) and the fact that ∫_T^∞ x^N e^{−ax} dx ∼ a^{−1} T^N e^{−aT} as T → ∞, we have

Σ_{m ≠ 0} H_{α,c}(m) = Σ_{k=1}^∞ Σ_{|m_{i₀}| = k} H_{α,c}(m)
≤ C₀ (3^N − 1) T^{N−1/2} + C₀ Σ_{k=2}^∞ [ (2k+1)^N − (2k−1)^N ] T^{2N} e^{−(c/(8(1+ρ)²)) (k−1)^α T^α}
≤ C₀ (3^N − 1) T^{N−1/2} + C₀ T^{2N} ∫_1^∞ x^N e^{−(c/(8(1+ρ)²)) x^α T^α} dx ≤ C₀ T^{N−1/2}

for T large enough. This completes the proof of Lemma 2.4.4.

Proof of Lemma 2.4.5. The proof is similar to that of Lemma 2.4.3. Indeed, we only need to modify (2.5.49) and (2.5.55) in the proof of Lemma 2.4.3. For any y = (y₁, …, y_N) ∈ R^N and 1 ≤ i ≤ j ≤ N, let y_{i:j} = (y_i, …, y_j). On one hand, with a different scaling, h(u, a) in (2.5.49) has the following asymptotics:

u^{2N−M} d₁^N(u) d₂^N(u) h(u, a)
≈ (1/u^M) ∫∫_{y ∈ uA₁, x+y ∈ uA₂, |x| ≤ C√(log u)} e^{−a|x|²} dx dy
= (1/u^M) ∫_{|x| ≤ C√(log u)} e^{−a|x|²} [ ∫_{R^M} 1{ y_{1:M} ∈ uA_{1,M} ∩ (uA_{2,M} − x_{1:M}) } dy_{1:M} ∏_{j=M+1}^N ∫_R 1{ y_j ∈ [uS_j, uT_j] ∩ [uT_j − x_j, uR_j − x_j] } dy_j ] dx
= ∫_{|x| ≤ C√(log u)} e^{−a|x|²} ∏_{j=M+1}^N x_j 1{ x_j > 0 } ∫_{R^M} 1{ z_{1:M} ∈ A_{1,M} ∩ (A_{2,M} − x_{1:M}/u) } dz_{1:M} dx
→
mes_M(A_{1,M} ∩ A_{2,M}) ∫_{R^M} e^{−a|x_{1:M}|²} dx_{1:M} ∏_{j=M+1}^N ∫_0^∞ x_j e^{−a x_j²} dx_j = 2^{M−N} π^{M/2} a^{M/2−N} mes_M(A_{1,M} ∩ A_{2,M}), (2.5.66)

as u → ∞. On the other hand, when u is large enough, mes_{2N}(D̃) in (2.5.55) can be bounded from above by

mes_{2N}(D̃) = ∫∫_{s ∈ A₁, t ∈ A₂} 1{ |t − s| ≤ 2δ(u) } ds dt
= ∫_{|x| ≤ 2δ(u)} ∫ 1{ y_{1:M} ∈ A_{1,M} ∩ (A_{2,M} − x_{1:M}) } dy_{1:M} ∏_{j=M+1}^N x_j 1{ x_j > 0 } dx
= δ(u)^{2N−M} ∫_{|z| ≤ 2} ∫ 1{ y_{1:M} ∈ A_{1,M} ∩ (A_{2,M} − z_{1:M} δ(u)) } dy_{1:M} ∏_{j=M+1}^N z_j 1{ z_j > 0 } dz
≤ K δ(u)^{2N−M}, (2.5.67)

where K = max_{|ε| ≤ 1} mes_M( A_{1,M} ∩ (A_{2,M} − ε) ) ∫_{|z| ≤ 2} ∏_{j=M+1}^N z_j 1{ z_j > 0 } dz.

By (2.5.66) and (2.5.67), (2.4.36) can be obtained through the same argument as in the proof of Lemma 2.4.3. We omit the details.

We end this section with the proof of Lemma 2.5.2.

Proof of Lemma 2.5.2. Let f_{u,τ_u}(·) and f(·) be the density functions of X(u, τ_u) and X, respectively. It suffices to prove that for all x ∈ R^n,

∫_{{y ≤ x}} f(y) max_{τ_u} | f_{u,τ_u}(y)/f(y) − 1 | dy → 0, as u → ∞, (2.5.68)

where { y ≤ x } = ∏_{i=1}^n (−∞, x_i].

First, we will find an upper bound for max_{τ_u} | f_{u,τ_u}(y)/f(y) − 1 |. For any δ > 0, define

Δ(u, τ_u) = ( Δ_ij(u, τ_u) )_{i,j=1,…,n} := δ^{−1} ( Σ(u, τ_u) − Σ ),
ẽ(u, τ_u) = ( ẽ_i(u, τ_u) )_{i=1,…,n} := δ^{−1} ( μ(u, τ_u) − μ ).

By Assumption (2.5.4), there exists a constant U > 0 such that max_{τ_u} |μ_j(u, τ_u) − μ_j| < δ and max_{τ_u} |σ_ij(u, τ_u) − σ_ij| < δ for all u > U. Let Σ^{−1} = ( v_ij )_{i,j=1,…,n} be the inverse of Σ. When δ is small, the determinant of Σ(u, τ_u) satisfies

|Σ(u, τ_u)| = |Σ + δ Δ(u, τ_u)| = |Σ| ( 1 + δ tr( Σ^{−1} Δ(u, τ_u) ) + O(δ²) ),

where O(δ²)/δ² is uniformly bounded w.r.t. τ_u for large u (see, e.g., [MN07], p.169). Hence we have

| |Σ(u, τ_u)| / |Σ| − 1 | ≤ 2δ | tr( Σ^{−1} Δ(u, τ_u) ) | ≤ 2δ Σ_{i,j} |v_ij|. (2.5.69)

Since |Δ_ij(u, τ_u)| ≤ 1 for all i, j = 1, …, n for large u, as δ →
0, the inverse of Σ(u, τ_u) can be written as

Σ(u, τ_u)^{−1} = Σ^{−1} − δ Σ^{−1} Δ(u, τ_u) Σ^{−1} + O(δ²),

where O(δ²)/δ² is a matrix whose entries are uniformly bounded and independent of τ_u for large u (see, e.g., [Mey00], p.618). Hence,

d_{u,τ_u}(y) := −(1/2) [ ( y − μ(u, τ_u) )^⊤ Σ(u, τ_u)^{−1} ( y − μ(u, τ_u) ) − ( y − μ )^⊤ Σ^{−1} ( y − μ ) ]
= (1/2) ( y − μ )^⊤ [ δ Σ^{−1} Δ(u, τ_u) Σ^{−1} + O(δ²) ] ( y − μ )
+ δ ẽ^⊤(u, τ_u) [ Σ^{−1} − δ Σ^{−1} Δ(u, τ_u) Σ^{−1} + O(δ²) ] ( y − μ )
− (1/2) δ² ẽ^⊤(u, τ_u) [ Σ^{−1} − δ Σ^{−1} Δ(u, τ_u) Σ^{−1} + O(δ²) ] ẽ(u, τ_u).

Since |Δ_ij(u, τ_u)| and |ẽ_i(u, τ_u)| are uniformly bounded by 1 w.r.t. τ_u for all u > U, we derive that for any y ∈ R^n,

max_{τ_u} | d_{u,τ_u}(y) | → 0, as u → ∞. (2.5.70)

By (2.5.69) and (2.5.70), for y ∈ R^n,

max_{τ_u} | f_{u,τ_u}(y)/f(y) − 1 | = max_{τ_u} | e^{d_{u,τ_u}(y)} ( |Σ(u, τ_u)| / |Σ| )^{−1/2} − 1 | → 0, as u → ∞. (2.5.71)

If we could further find an integrable function g(y) on R^n such that

f(y) max_{τ_u} | f_{u,τ_u}(y)/f(y) − 1 | ≤ g(y), (2.5.72)

then (2.5.68) would hold by the dominated convergence theorem.

Given a constant C₀, let A_I := { (a_ij)_{i,j=1}^n ∈ R^{n×n} : max_{i,j} |a_ij| ≤ C₀ } and b_I := { (b_i)_{i=1}^n ∈ R^n : max_i |b_i| ≤ C₀ }. Then there exist constants C₂, C₃ such that

| x^⊤ A x | ≤ C₂ x^⊤ x, | b^⊤ x | ≤ C₃ + x^⊤ x, ∀ x ∈ R^n, ∀ A ∈ A_I, ∀ b ∈ b_I.

Hence, there exists a constant C₄ > 0 such that

| d_{u,τ_u}(y) | ≤ C₄ δ ( y − μ )^⊤ ( y − μ ) + C₄ δ. (2.5.73)

By (2.5.69) and (2.5.73), for small δ and large u, there exists a constant K such that

max_{τ_u} | f_{u,τ_u}(y)/f(y) − 1 | ≤ K ( e^{C₄ δ ( y − μ )^⊤ ( y − μ )} + 1 ).

On the other hand, for all y ∈ R^n,

f(y) ≤ (2π)^{−n/2} |Σ|^{−1/2} e^{−(λ/2) ( y − μ )^⊤ ( y − μ )},

where λ is the minimum eigenvalue of Σ^{−1}. If we choose δ < λ/(2C₄) and

g(y) := (2π)^{−n/2} |Σ|^{−1/2} e^{−(λ/2) ( y − μ )^⊤ ( y − μ )} ( K e^{C₄ δ ( y − μ )^⊤ ( y − μ )} + 1 ),

then (2.5.72) holds and hence we have completed the proof.
Chapter 3

Joint asymptotics of estimating the fractal indices of bivariate Gaussian random processes

Characterizing the dependence structure of multivariate random fields plays a key role in multivariate spatial model settings. Usually, the covariance structure of each component of a multivariate process is highly related to the smoothness of its sample surface. The estimation of smoothness parameters in the univariate model has been studied extensively. Yet, there is little work in the multivariate case. In this chapter, we give a short review of the increment-based estimator introduced by Kent and Wood [KW97] and apply it to estimating the fractal indices (smoothness parameters) of bivariate Gaussian processes. Then, under the infill asymptotics framework, we investigate the joint asymptotics of the estimators and study how the cross dependence structure affects the performance of the estimators.

3.1 Introduction

The fractal index, or fractal dimension, of a random process is a measure of the roughness of its sample path. It is an important parameter in geostatistical modeling. Estimating the fractal dimension of real-valued Gaussian and non-Gaussian processes has been an attractive problem over the last decades. Hall and Wood [HW93] studied the asymptotic properties of the box-counting estimator of the fractal dimension. The variogram method was introduced by Constantine and Hall [CH94]. Kent and Wood [KW97] developed increment-based estimators for stationary Gaussian random processes, which showed improved performance under infill asymptotics (that is, asymptotic properties of statistical procedures as the sampling points grow dense in a fixed domain [CSY00, Cre93]). Chan and Wood extended the method to Gaussian and a class of non-Gaussian random fields on R² (see, e.g., [CW00], [CW04]). Zhu and Stein [ZS02] expanded the work of [CW00] by considering the two-dimensional fractional Brownian surface. We refer to [GSP12] for further information on this topic.
On the other hand, multivariate (vector-valued) Gaussian random fields have become popular in modeling multivariate spatial datasets (see, e.g., [GKS10]). Usually, the fractal dimension differs from one component of the multivariate Gaussian random field to another. It is natural to employ the increment-based methods of Kent and Wood to estimate the fractal dimension of each component. Yet, the joint asymptotic properties of the estimators are non-trivial, since the cross covariance structure, that is, the covariance among components of the multivariate Gaussian random field, might affect the performance of the estimators. In this work, we study the joint asymptotic properties of estimating the fractal indices of bivariate Gaussian random processes under infill asymptotics. The rest of the chapter is organized as follows. We define the bivariate Gaussian random processes in Section 3.2 and introduce the increment-based estimators in Section 3.3. Section 3.4 states the main results on the joint asymptotics of the bivariate estimators. We give an example in Section 3.5. The proofs of our main results are given in Section 3.6.

3.2 The bivariate Gaussian random processes

Let { X(t) := (X₁(t), X₂(t))^⊤, t ∈ R } be a bivariate stationary Gaussian random process with mean EX(t) = 0 and matrix-valued covariance function

C(t) = ( C₁₁(t)  C₁₂(t) ; C₂₁(t)  C₂₂(t) ), where C_ij(t) := E[ X_i(s) X_j(s+t) ], i, j = 1, 2.

Further, we assume that

C₁₁(t) = σ₁² − c₁₁|t|^{α₁₁} + o(|t|^{α₁₁}), C₂₂(t) = σ₂² − c₂₂|t|^{α₂₂} + o(|t|^{α₂₂}),
C₁₂(t) = C₂₁(t) = ρσ₁σ₂( 1 − c₁₂|t|^{α₁₂} + o(|t|^{α₁₂}) ), (3.2.1)

with α₁₁, α₂₂ ∈ (0, 2), σ₁, σ₂ > 0, ρ ∈ (−1, 1) and c₁₁, c₂₂, c₁₂ > 0. The fractal dimensions of X₁ and X₂ are 2 − α₁₁/2 and 2 − α₂₂/2, respectively (see, e.g., [Adl81], Theorem 8.4.1). Hence, we study the estimation and inference of α₁₁ and α₂₂ instead.

Let F₁₁, F₂₂ and F₁₂ be the corresponding spectral measures of C₁₁(·), C₂₂(·) and C₁₂(·).
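To make the local covariance model (3.2.1) concrete, the following Python sketch evaluates the leading-order 2×2 matrix C(t) and enforces the restriction (3.2.2). The function name `local_cov` and the parameter values are ours for illustration only, and the o(|t|^α) remainder terms are dropped, so this captures only the behaviour near t = 0 rather than a fully valid covariance model.

```python
import numpy as np

def local_cov(t, sigma=(1.0, 1.5), rho=0.3,
              c=(1.0, 1.0, 1.0), alpha=(0.8, 1.2, 1.0)):
    """Leading-order C(t) under (3.2.1); alpha = (a11, a22, a12), c likewise."""
    a11, a22, a12 = alpha
    c11, c22, c12 = c
    s1, s2 = sigma
    # Cramer-type validity restriction (3.2.2): a11 + a22 <= 2 * a12
    if a11 + a22 > 2 * a12:
        raise ValueError("need a11 + a22 <= 2 * a12")
    h = abs(t)
    off = rho * s1 * s2 * (1.0 - c12 * h ** a12)
    return np.array([[s1 ** 2 - c11 * h ** a11, off],
                     [off, s2 ** 2 - c22 * h ** a22]])
```

Note how the smoothness parameters α₁₁ and α₂₂ only enter through the local decay of the diagonal entries, which is what the increment-based estimators below exploit.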
By a Tauberian theorem (see, e.g., [Ste99]), we have

F_ij( (x, ∞) ) ∼ C_ij(0) − C_ij(1/x) ∼ |x|^{−α_ij}, i, j = 1, 2.

According to Cramér's theorem ([Yag87], [CD09] and [Wac03]), a valid covariance function for X(t) should satisfy

( F₁₂(B) )² ≤ F₁₁(B) F₂₂(B), ∀ B ∈ B(R).

Hence, it is necessary to add the following restriction on (α₁₁, α₂₂, α₁₂):

α₁₁ + α₂₂ ≤ 2α₁₂. (3.2.2)

3.3 The increment-based estimators

Assume that X is observed regularly on [0, 1]. Specifically, we have n pairs of observations (X₁(1/n), X₂(1/n)), (X₁(2/n), X₂(2/n)), …, (X₁(1), X₂(1)). Kent and Wood [KW97] introduced the increment-based method to estimate the fractal dimension of a real-valued locally self-similar Gaussian process. We apply their methods to estimate the fractal indices of each component of the bivariate Gaussian process (i.e., α₁₁, α₂₂). In Section 3.3.1, we review the definition of the dilated discretized processes and study the asymptotic properties of the covariances of the bivariate dilated discretized processes. In Section 3.3.2, the GLS estimators of the fractal indices (α₁₁, α₂₂)^⊤ are introduced.

3.3.1 The dilated discretized processes

Definition 3.3.1 (Increment of order p). For J ∈ Z⁺ and p ∈ Z⁺ ∪ {0}, a vector a = {a_j}_{j=−J}^{J} is an increment of order p if

Σ_{j=−J}^{J} j^r a_j = 0 for all integers r ∈ [0, p] and Σ_{j=−J}^{J} j^{p+1} a_j ≠ 0. (3.3.1)

Definition 3.3.2 (Dilation of a). For an increment a = {a_j}_{j=−J}^{J}, the vector a^u is called the dilation of a for an integer u ≥ 1 if, for −Ju ≤ j ≤ Ju,

a^u_j = a_{j′} if j = j′u, and a^u_j = 0 otherwise. (3.3.2)

Definition 3.3.3 (Dilated discretized process). For n, m ≥ 1, define the dilated discretized process Y^u_{n,i}(·) by

Y^u_{n,i}(j) := n^{α_ii/2} Σ_{k=−Ju}^{Ju} a^u_k X_i( (j+k)/n ), i = 1, 2, u = 1, 2, …, m, and j = 1, 2, …, n. (3.3.3)

We give two examples in the following.

First-difference increment: p = 0 and J = 1 with a₋₁ = 0, a₀ = −1, a₁ = 1. Then the dilated discretized process is

Y^u_{n,i}(j) := n^{α_ii/2} ( X_i((j+u)/n) − X_i(j/n) ), (3.3.4)

where i = 1, 2, u = 1, 2, …, m, and j = 1, 2, …, n.
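As a concrete illustration of Definitions 3.3.1–3.3.3, the following Python sketch checks the increment property, builds a dilation, and forms the dilated discretized process from a sampled path. The helper names (`is_increment`, `dilate`, `dilated_process`) are ours, not from [KW97], and boundary indices where the dilated filter does not fit inside the sample are simply dropped.

```python
import numpy as np

def is_increment(a, p):
    """Definition 3.3.1: sum_j j^r a_j = 0 for r = 0..p, nonzero for r = p+1."""
    a = np.asarray(a, dtype=float)
    J = len(a) // 2
    j = np.arange(-J, J + 1, dtype=float)
    vanish = all(abs(np.sum(j ** r * a)) < 1e-12 for r in range(p + 1))
    return vanish and abs(np.sum(j ** (p + 1) * a)) > 1e-12

def dilate(a, u):
    """Definition 3.3.2: a^u_j = a_{j'} when j = j'u, else 0."""
    J = len(a) // 2
    au = np.zeros(2 * J * u + 1)
    au[::u] = a          # place a_{j'} at positions j = j'u (offset by Ju)
    return au

def dilated_process(x, a, u, alpha):
    """Definition 3.3.3 (up to boundary terms):
    Y^u_{n,i}(j) = n^{alpha/2} sum_k a^u_k X_i((j+k)/n), x = (X_i(1/n),...,X_i(1))."""
    n = len(x)
    au = dilate(np.asarray(a, dtype=float), u)
    # correlate: output[j] = sum_k x[j + k] * au[k], 'valid' keeps full overlaps
    return n ** (alpha / 2.0) * np.correlate(x, au, mode="valid")
```

For the second-difference increment (p = 1), the filter annihilates constant and linear trends exactly, which is what makes the resulting Y^u_{n,i} depend only on the local fluctuations of X_i.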
Second-order difference increment: $p = 1$ and $J = 1$ with $a_{-1} = 1$, $a_0 = -2$, $a_1 = 1$. Then the dilated discretized process is

$$Y^u_{n,i}(j) := n^{\alpha_{ii}/2}\Big(X_i\Big(\frac{j-u}{n}\Big) - 2X_i\Big(\frac{j}{n}\Big) + X_i\Big(\frac{j+u}{n}\Big)\Big), \quad (3.3.5)$$

where $i = 1, 2$, $u = 1, 2, \dots, m$, and $j = 1, 2, \dots, n$.

Next, we consider the covariances of $Y^u_{n,i}$, $i = 1, 2$. The marginal covariance function (see, e.g., [KW97]) of $Y^u_{n,i}$ is

$$\sigma^{uv}_{n,ii}(h) := E[Y^u_{n,i}(l)\,Y^v_{n,i}(l+h)] = n^{\alpha_{ii}} \sum_{j,k} a^u_j a^v_k\, C_{ii}\Big(\frac{h+k-j}{n}\Big) \to -\sigma_i^2 c_{ii} \sum_{j,k} a^u_j a^v_k\, |h+k-j|^{\alpha_{ii}} =: \sigma^{uv}_{0,ii}(h), \quad \text{as } n \to \infty. \quad (3.3.6)$$

In particular,

$$\mathrm{Var}[Y^u_{n,i}(l)] = \sigma^{uu}_{n,ii}(0) \to \sigma^{uu}_{0,ii}(0) = \mathrm{const}\cdot u^{\alpha_{ii}}. \quad (3.3.7)$$

The cross covariance between $Y^u_{n,1}$ and $Y^v_{n,2}$ is given by

$$\sigma^{uv}_{n,12}(h) := E[Y^u_{n,1}(l)\,Y^v_{n,2}(l+h)] = n^{(\alpha_{11}+\alpha_{22})/2} \sum_{j,k} a^u_j a^v_k\, C_{12}\Big(\frac{h+k-j}{n}\Big)$$
$$= -\rho\sigma_1\sigma_2 c_{12}\, n^{(\alpha_{11}+\alpha_{22})/2} \sum_{j,k} a^u_j a^v_k \Big|\frac{h+k-j}{n}\Big|^{\alpha_{12}} + o\big(n^{(\alpha_{11}+\alpha_{22})/2-\alpha_{12}}\big)$$
$$\to \sigma^{uv}_{0,12}(h) := \begin{cases} 0, & \text{if } \frac{\alpha_{11}+\alpha_{22}}{2} < \alpha_{12}, \\ -\rho\sigma_1\sigma_2 c_{12} \sum_{j,k} a^u_j a^v_k |h+k-j|^{\alpha_{12}}, & \text{if } \frac{\alpha_{11}+\alpha_{22}}{2} = \alpha_{12}. \end{cases} \quad (3.3.8)$$

In particular,

$$\mathrm{Cov}[Y^u_{n,1}(l), Y^u_{n,2}(l)] = \sigma^{uu}_{n,12}(0) \to \sigma^{uu}_{0,12}(0) = \begin{cases} 0, & \text{if } \frac{\alpha_{11}+\alpha_{22}}{2} < \alpha_{12}, \\ \mathrm{const}\cdot u^{(\alpha_{11}+\alpha_{22})/2}, & \text{if } \frac{\alpha_{11}+\alpha_{22}}{2} = \alpha_{12}. \end{cases} \quad (3.3.9)$$

Therefore, if $(\alpha_{11}+\alpha_{22})/2 < \alpha_{12}$, the covariance matrix of $(Y^u_{n,1}(l), Y^v_{n,2}(l+h))^\top$ satisfies

$$\mathrm{Var}\begin{pmatrix} Y^u_{n,1}(l) \\ Y^v_{n,2}(l+h) \end{pmatrix} \to \begin{pmatrix} \sigma^{uv}_{0,11}(h) & 0 \\ 0 & \sigma^{uv}_{0,22}(h) \end{pmatrix}, \quad \text{as } n \to \infty. \quad (3.3.10)$$

If $(\alpha_{11}+\alpha_{22})/2 = \alpha_{12}$,

$$\mathrm{Var}\begin{pmatrix} Y^u_{n,1}(l) \\ Y^v_{n,2}(l+h) \end{pmatrix} \to \begin{pmatrix} \sigma^{uv}_{0,11}(h) & \sigma^{uv}_{0,12}(h) \\ \sigma^{uv}_{0,12}(h) & \sigma^{uv}_{0,22}(h) \end{pmatrix}, \quad \text{as } n \to \infty. \quad (3.3.11)$$

3.3.2 The GLS estimators for $(\alpha_{11}, \alpha_{22})^\top$

Define

$$Z^u_{n,i}(j) := (Y^u_{n,i}(j))^2, \quad j = 1, 2, \dots, n, \quad (3.3.12)$$

and

$$\bar{Z}^u_{n,i} := \frac{1}{n} \sum_{j=1}^n Z^u_{n,i}(j), \quad (3.3.13)$$

where $i = 1, 2$ and $u = 1, 2, \dots, m$. By (3.3.7), it is easy to see that

$$\bar{Z}^u_{n,i} \xrightarrow{p} A_i u^{\alpha_{ii}}, \quad i = 1, 2, \quad (3.3.14)$$

where $A_i$, $i = 1, 2$, are constants, and hence

$$\log \bar{Z}^u_{n,i} \approx \alpha_{ii} \log u + \log A_i. \quad (3.3.15)$$

Recall the GLS estimator in [KW97]. Let

$$U^{(i)} = (\log \bar{Z}^1_{n,i}, \dots, \log \bar{Z}^m_{n,i})^\top, \quad X = (\log 1, \log 2, \dots, \log m)^\top, \quad (3.3.16)$$

and let $\mathbf{1}$ be the $m$-vector of ones. The generalized least squares estimator $\hat{\alpha}_{ii}$, $i = 1, 2$, is determined by minimizing

$$(U^{(i)} - A_i \mathbf{1} - \alpha_{ii} X)^\top W (U^{(i)} - A_i \mathbf{1} - \alpha_{ii} X) \quad (3.3.17)$$

with respect to $\alpha_{ii}$ and $A_i$. Hence, we have

$$\hat{\alpha}_{ii} = \frac{(\mathbf{1}^\top W \mathbf{1})(X^\top W U^{(i)}) - (\mathbf{1}^\top W X)(\mathbf{1}^\top W U^{(i)})}{(\mathbf{1}^\top W \mathbf{1})(X^\top W X) - (\mathbf{1}^\top W X)^2}. \quad (3.3.18)$$

If $W$ is chosen as the identity matrix, the GLS estimator reduces to the ordinary least squares (abbr. OLS) estimator.

A good choice of the weight matrix $W$ in (3.3.17) is the inverse of the covariance matrix of $U^{(i)}$. Let $\Omega^{(i)} = \{\omega^{uv}_i,\ u, v = 1, \dots, m\}$ be the covariance matrix of $U^{(i)}$, which will be specified at the end of Section 3.4.

3.4 Asymptotic properties

In this section, we study the properties of the estimators $(\hat{\alpha}_{11}, \hat{\alpha}_{22})^\top$ under infill asymptotics.

3.4.1 Variance of $\bar{Z}_n$ and asymptotic normality

Let us introduce the notation first. Let

$$\bar{Z}_{n,i} = (\bar{Z}^1_{n,i}, \bar{Z}^2_{n,i}, \dots, \bar{Z}^m_{n,i})^\top, \quad i = 1, 2, \quad (3.4.1)$$

and

$$\bar{Z}_n = (\bar{Z}^\top_{n,1}, \bar{Z}^\top_{n,2})^\top. \quad (3.4.2)$$

We apply the method of derivation in Section 3 of [KW97] to the bivariate processes. Recall that $\mathrm{Cov}(Y^u_{n,i}(l), Y^v_{n,j}(l+h)) = \sigma^{uv}_{n,ij}(h)$, $i, j = 1, 2$. Using the fact that, if $(U, V) \sim \mathrm{Normal}(0, 0, 1, 1, \xi)$, then $\mathrm{Cov}(U^2, V^2) = 2\xi^2$, we obtain

$$\mathrm{Cov}(Z^u_{n,i}(l), Z^v_{n,j}(l+h)) = 2(\sigma^{uv}_{n,ij}(h))^2, \quad i, j = 1, 2. \quad (3.4.3)$$

Further,

$$\phi^{uv}_{n,ij} := \mathrm{Cov}(\bar{Z}^u_{n,i}, \bar{Z}^v_{n,j}) = \frac{1}{n} \sum_{h=-n+1}^{n-1} \Big(1 - \frac{|h|}{n}\Big)\, 2(\sigma^{uv}_{n,ij}(h))^2. \quad (3.4.4)$$

Let $\Phi_{n,ij} = (\phi^{uv}_{n,ij},\ u, v = 1, 2, \dots, m)$ be the $m \times m$ covariance matrix of $\bar{Z}_{n,i}$ and $\bar{Z}_{n,j}$, $i, j = 1, 2$. So the covariance matrix of $\bar{Z}_n$ can be written as

$$\Phi_n := \begin{pmatrix} \Phi_{n,11} & \Phi_{n,12} \\ \Phi_{n,12} & \Phi_{n,22} \end{pmatrix}. \quad (3.4.5)$$

In order to study the asymptotic properties of $\Phi_n$, we begin with the asymptotic behavior of $\sigma^{uv}_{n,ij}(h)$ for all $n > |h|$ as $|h| \to \infty$. First of all, it is necessary to make some mild strengthening of Assumption (3.2.1), denoted here by $(A_0)$. For $q \ge 1$, consider the following regularity conditions on the $q$th derivatives of $C_{ij}(t)$, say $(A_q)$: for all $t \ne 0$,

$$C^{(q)}_{11}(t) = -\mathrm{sgn}(t)^q\, c_{11} \frac{\alpha_{11}!}{(\alpha_{11}-q)!} |t|^{\alpha_{11}-q} + o(|t|^{\alpha_{11}-q}),$$
$$C^{(q)}_{22}(t) = -\mathrm{sgn}(t)^q\, c_{22} \frac{\alpha_{22}!}{(\alpha_{22}-q)!} |t|^{\alpha_{22}-q} + o(|t|^{\alpha_{22}-q}),$$
$$C^{(q)}_{12}(t) = C^{(q)}_{21}(t) = -\mathrm{sgn}(t)^q\, \rho\sigma_1\sigma_2 c_{12} \frac{\alpha_{12}!}{(\alpha_{12}-q)!} |t|^{\alpha_{12}-q} + o(|t|^{\alpha_{12}-q}), \quad (3.4.6)$$

where $\mathrm{sgn}(t) = 1$ if $t > 0$ and $\mathrm{sgn}(t) = -1$ if $t < 0$.

The theorems below extend Theorem 1 and Theorem 2 in [KW97] to the bivariate case.

Theorem 3.4.1. If the increment $a$ has order $p \ge 0$ and the condition $(A_{2p+2})$ holds, then

$$\sigma^{uv}_{n,ii}(h) = O(|h|^{\alpha_{ii}-2p-2}), \quad \text{as } |h| \to \infty, \text{ uniformly for } n > |h|,\ i = 1, 2, \quad (3.4.7)$$

and

$$\sigma^{uv}_{n,12}(h) = n^{\alpha_{11}/2+\alpha_{22}/2-\alpha_{12}}\, O(|h|^{\alpha_{12}-2p-2}) = O(|h|^{(\alpha_{11}+\alpha_{22})/2-2p-2}), \quad \text{as } |h| \to \infty, \text{ uniformly for } n > |h|. \quad (3.4.8)$$

By [KW97], we know that using an increment of order $p = 1$ achieves more efficiency. So we will consider the convergence of the variance when $p = 1$. Let $\phi^{uv}_{0,ij} = 2\sum_{h=-\infty}^{\infty} (\sigma^{uv}_{0,ij}(h))^2$, $\Phi_{0,ij} = (\phi^{uv}_{0,ij},\ u, v = 1, 2, \dots, m)$, and

$$\Phi_0 := \begin{pmatrix} \Phi_{0,11} & \Phi_{0,12} \\ \Phi_{0,12} & \Phi_{0,22} \end{pmatrix}. \quad (3.4.9)$$

Theorem 3.4.2. If the condition $(A_4)$ holds and $0 < \alpha_{11}, \alpha_{22} < 2$, then

$$n\Phi_n \to \Phi_0, \quad \text{as } n \to \infty, \quad (3.4.10)$$

where each entry of $\Phi_0$ is an absolutely convergent series.

Theorem 3.4.3. If the condition $(A_4)$ holds and $0 < \alpha_{11}, \alpha_{22} < 2$, then

$$n^{1/2}(\bar{Z}_n - E[\bar{Z}_n]) \xrightarrow{d} N_{2m}(0, \Phi_0), \quad \text{as } n \to \infty. \quad (3.4.11)$$

3.4.2 Linear estimators of $(\alpha_{11}, \alpha_{22})^\top$

To describe the asymptotic properties of the estimators of $(\alpha_{11}, \alpha_{22})^\top$, it is necessary to specify the remainder term in the assumption (3.2.1). Suppose that, for some $\beta_{11}, \beta_{22}, \beta_{12} > 0$,

$$C_{11}(t) = \sigma_1^2\big(1 - c_{11}|t|^{\alpha_{11}} + O(|t|^{\alpha_{11}+\beta_{11}})\big),$$
$$C_{22}(t) = \sigma_2^2\big(1 - c_{22}|t|^{\alpha_{22}} + O(|t|^{\alpha_{22}+\beta_{22}})\big),$$
$$C_{12}(t) = C_{21}(t) = \rho\sigma_1\sigma_2\big(1 - c_{12}|t|^{\alpha_{12}} + O(|t|^{\alpha_{12}+\beta_{12}})\big). \quad (3.4.12)$$

Here, we study the asymptotic properties of a more general class of estimators (see, e.g., [CW00]), namely

$$\hat{\alpha}_{ii} = \sum_{u=1}^m L_{u,i} \log \bar{Z}^u_{n,i}, \quad i = 1, 2, \quad (3.4.13)$$

where $L_{u,i}$, $u = 1, 2, \dots, m$, $i = 1, 2$, are any numbers such that

$$\sum_{u=1}^m L_{u,i} = 0 \quad \text{and} \quad \sum_{u=1}^m L_{u,i} \log u = 1. \quad (3.4.14)$$

It is easy to check that the GLS estimator (3.3.18) is an example of the above estimators. The three theorems below describe the asymptotic properties of $(\hat{\alpha}_{11}, \hat{\alpha}_{22})^\top$ through the bias, the mean square error matrix and asymptotic normality.

Theorem 3.4.4 (Bias). For the $\hat{\alpha}_{ii}$, $i = 1, 2$, defined above, we have

$$E[\hat{\alpha}_{ii} - \alpha_{ii}] = O(n^{-1}) + O(n^{-\beta_{ii}}), \quad i = 1, 2. \quad (3.4.15)$$

Theorem 3.4.5 (Mean square error matrix). Let $\hat{\alpha} = (\hat{\alpha}_{11}, \hat{\alpha}_{22})^\top$ and $\alpha = (\alpha_{11}, \alpha_{22})^\top$. If $\alpha_{11} + \alpha_{22} = 2\alpha_{12}$, we have

$$E[(\hat{\alpha} - \alpha)(\hat{\alpha} - \alpha)^\top] = \begin{pmatrix} O(n^{-1}) + O(n^{-2\beta_{11}}) & O(n^{-1}) + O(n^{-\beta_{11}-\beta_{22}}) \\ O(n^{-1}) + O(n^{-\beta_{11}-\beta_{22}}) & O(n^{-1}) + O(n^{-2\beta_{22}}) \end{pmatrix}. \quad (3.4.16)$$

If $\alpha_{11} + \alpha_{22} < 2\alpha_{12}$, we have

$$E[(\hat{\alpha} - \alpha)(\hat{\alpha} - \alpha)^\top] = \begin{pmatrix} O(n^{-1}) + O(n^{-2\beta_{11}}) & o(n^{-1}) + O(n^{-1-\beta_{11}}) + O(n^{-1-\beta_{22}}) + O(n^{-\beta_{11}-\beta_{22}}) \\ o(n^{-1}) + O(n^{-1-\beta_{11}}) + O(n^{-1-\beta_{22}}) + O(n^{-\beta_{11}-\beta_{22}}) & O(n^{-1}) + O(n^{-2\beta_{22}}) \end{pmatrix}. \quad (3.4.17)$$

Finally, we show the asymptotic normality of $\hat{\alpha}$. We introduce the notation first. Let

$$T^u_{n,i} = \frac{\bar{Z}^u_{n,i} - E\bar{Z}^u_{n,i}}{E\bar{Z}^u_{n,i}}, \quad T_{n,i} = (T^1_{n,i}, \dots, T^m_{n,i})^\top,$$
$$L_i = (L_{1,i}, \dots, L_{m,i})^\top, \quad \tilde{L}_i = (L_{1,i}/\sigma^{11}_{0,ii}(0), \dots, L_{m,i}/\sigma^{mm}_{0,ii}(0))^\top, \quad i = 1, 2.$$

Theorem 3.4.6 (Asymptotic normality). Assume that $\beta_{11}, \beta_{22} > \frac{1}{2}$. Then $\sqrt{n}(\hat{\alpha} - \alpha)$ satisfies the asymptotic properties below.
$$\sqrt{n}\begin{pmatrix} \hat{\alpha}_{11} - \alpha_{11} \\ \hat{\alpha}_{22} - \alpha_{22} \end{pmatrix} = \begin{pmatrix} \sqrt{n}\, L_1^\top T_{n,1} + o_p(1) + O(n^{-\beta_{11}+\frac12}) \\ \sqrt{n}\, L_2^\top T_{n,2} + o_p(1) + O(n^{-\beta_{22}+\frac12}) \end{pmatrix}, \quad (3.4.18)$$

where

$$\begin{pmatrix} \sqrt{n}\, L_1^\top T_{n,1} \\ \sqrt{n}\, L_2^\top T_{n,2} \end{pmatrix} \xrightarrow{d} N(\mathbf{0}, \Sigma), \quad (3.4.19)$$

with

$$\Sigma = \begin{pmatrix} \tilde{L}_1^\top \Phi_{0,11} \tilde{L}_1 & \tilde{L}_1^\top \Phi_{0,12} \tilde{L}_2 \\ \tilde{L}_2^\top \Phi_{0,21} \tilde{L}_1 & \tilde{L}_2^\top \Phi_{0,22} \tilde{L}_2 \end{pmatrix}. \quad (3.4.20)$$

In particular, if $\frac{\alpha_{11}+\alpha_{22}}{2} < \alpha_{12}$, then $\Phi_{0,12} = \Phi_{0,21} = \mathbf{0}$ and hence $\sqrt{n}\, L_1^\top T_{n,1}$ and $\sqrt{n}\, L_2^\top T_{n,2}$ are asymptotically independent.

Remark: By Theorem 3.4.6, the covariance matrix $\Omega^{(i)}$ of $U^{(i)}$ in Section 3.3.2 can be specified as follows. Note that

$$\omega^{uv}_i = \mathrm{Cov}[\log \bar{Z}^u_{n,i}, \log \bar{Z}^v_{n,i}] \doteq n\phi^{uv}_{n,ii}\big/\big(\sigma^{uu}_{n,ii}(0)\,\sigma^{vv}_{n,ii}(0)\big) \to \phi^{uv}_{0,ii}\big/\big(\sigma^{uu}_{0,ii}(0)\,\sigma^{vv}_{0,ii}(0)\big). \quad (3.4.21)$$

Hence, $\omega^{uv}_i$ is chosen as follows, which is the same as that in Kent and Wood [KW97]:

$$\omega^{uv}_i = 2\sum_{h=-n+1}^{n-1} (1 - |h|/n)\,(\sigma^{uv}_{0,ii}(h))^2 \big/ \big(\sigma^{uu}_{0,ii}(0)\,\sigma^{vv}_{0,ii}(0)\big). \quad (3.4.22)$$

3.5 An example: the bivariate Matérn process on $\mathbb{R}$

Recall the definitions of the bivariate Matérn processes and the Matérn correlation functions introduced in Section 2.3. For the nonsmooth bivariate Matérn process, the fractal indices are determined by the smoothness parameters $\nu_{11}, \nu_{22}$; write $\hat{\nu} = (\hat{\nu}_{11}, \hat{\nu}_{22})^\top$ and $\nu = (\nu_{11}, \nu_{22})^\top$. For the nonsmooth bivariate Matérn process with $0 < \nu_{11}, \nu_{22} < 1$, if $\nu_{11} + \nu_{22} = 2\nu_{12}$, then

$$E[(\hat{\nu} - \nu)(\hat{\nu} - \nu)^\top] = \begin{pmatrix} O(n^{-1}) + O(n^{-4(1-\nu_{11})}) & O(n^{-1}) + O(n^{-4(1-\nu_{12})}) \\ O(n^{-1}) + O(n^{-4(1-\nu_{12})}) & O(n^{-1}) + O(n^{-4(1-\nu_{22})}) \end{pmatrix}. \quad (3.5.5)$$

If $\nu_{11} + \nu_{22} < 2\nu_{12}$, we have

$$E[(\hat{\nu} - \nu)(\hat{\nu} - \nu)^\top] = \begin{pmatrix} O(n^{-1}) + O(n^{-4(1-\nu_{11})}) & o(n^{-1}) + O(n^{-2(2-\nu_{11}-\nu_{22})}) \\ o(n^{-1}) + O(n^{-2(2-\nu_{11}-\nu_{22})}) & O(n^{-1}) + O(n^{-4(1-\nu_{22})}) \end{pmatrix}. \quad (3.5.6)$$

Theorem 3.5.3 (Asymptotic normality). For the nonsmooth bivariate Matérn process with $0 < \nu_{11}, \nu_{22} < \frac{3}{4}$,

$$\sqrt{n}\begin{pmatrix} \hat{\nu}_{11} - \nu_{11} \\ \hat{\nu}_{22} - \nu_{22} \end{pmatrix} \xrightarrow{d} N(\mathbf{0}, \Sigma), \quad (3.5.7)$$

where

$$\Sigma = \begin{pmatrix} \tilde{L}_1^\top \Phi_{0,11} \tilde{L}_1 & \tilde{L}_1^\top \Phi_{0,12} \tilde{L}_2 \\ \tilde{L}_2^\top \Phi_{0,21} \tilde{L}_1 & \tilde{L}_2^\top \Phi_{0,22} \tilde{L}_2 \end{pmatrix}, \quad (3.5.8)$$

and $\tilde{L}_i = (L_{1,i}/\sigma^{11}_{0,ii}(0), \dots, L_{m,i}/\sigma^{mm}_{0,ii}(0))^\top$, $i = 1, 2$. In particular, if $\frac{\nu_{11}+\nu_{22}}{2} < \nu_{12}$, then $\Phi_{0,12} = \Phi_{0,21} = \mathbf{0}$ and hence $\hat{\nu}_{11}$ and $\hat{\nu}_{22}$ are asymptotically independent.

3.6 Proofs of the main results

Proof of Theorem 3.4.1.
(3.4.7) comes directly from the proof of Theorem 1 in [KW97]. We are going to prove (3.4.8). First of all, expand $C_{12}\big(\frac{h+k-j}{n}\big)$ in a Taylor series about $h/n$ to the $(2p+2)$th order to obtain

$$\sigma^{uv}_{n,12}(h) = n^{\frac{\alpha_{11}+\alpha_{22}}{2}} \sum_{j,k} a^u_j a^v_k\, C_{12}\Big(\frac{h+k-j}{n}\Big)$$
$$= n^{\frac{\alpha_{11}+\alpha_{22}}{2}} \sum_{r=0}^{2p+1} \sum_{j,k} a^u_j a^v_k \frac{(k-j)^r}{r!\, n^r} C^{(r)}_{12}\Big(\frac{h}{n}\Big) + n^{\frac{\alpha_{11}+\alpha_{22}}{2}} \sum_{j,k} a^u_j a^v_k \frac{(k-j)^{2p+2}}{(2p+2)!\, n^{2p+2}} C^{(2p+2)}_{12}\Big(\frac{h_{kj}}{n}\Big)$$
$$= n^{\frac{\alpha_{11}+\alpha_{22}}{2}} \sum_{j,k} a^u_j a^v_k \frac{(k-j)^{2p+2}}{(2p+2)!\, n^{2p+2}} C^{(2p+2)}_{12}\Big(\frac{h_{kj}}{n}\Big), \quad (3.6.1)$$

where $h_{kj}$ lies between $h$ and $h+k-j$; the terms with $r \le 2p+1$ vanish because $a$ is an increment of order $p$. Since $|k-j| \le (u+v)J \le 2mJ$, we have $|h_{kj}| \ge |h|/2$ for all $|h| \ge 4mJ$. Combining the condition $(A_{2p+2})$, for all $|h| \ge 4mJ$ and all $n > |h|$ we have

$$|\sigma^{uv}_{n,12}(h)| \le \mathrm{const}\cdot |h|^{\alpha_{12}-2p-2}\, n^{\frac{\alpha_{11}+\alpha_{22}}{2}-\alpha_{12}} \le \mathrm{const}\cdot |h|^{\frac{\alpha_{11}+\alpha_{22}}{2}-2p-2}.$$

Proof of Theorem 3.4.2. The proof is very similar to that in [KW97]. Let

$$d^{uv}_{n,ij}(h) := \begin{cases} \big(1 - \frac{|h|}{n}\big)(\sigma^{uv}_{n,ij}(h))^2, & |h| \le n-1, \\ 0, & |h| \ge n. \end{cases}$$

By Theorem 3.4.1, $d^{uv}_{n,ij}(h)$ is dominated by a summable sequence and converges pointwise to $(\sigma^{uv}_{0,ij}(h))^2$; hence, by dominated convergence, $n\phi^{uv}_{n,ij} = 2\sum_h d^{uv}_{n,ij}(h) \to 2\sum_h (\sigma^{uv}_{0,ij}(h))^2 = \phi^{uv}_{0,ij}$.

Proof of Theorem 3.4.3. By the Cramér-Wold device, it suffices to show that for any $\lambda \in \mathbb{R}^{2m}$,

$$n^{1/2}\lambda^\top(\bar{Z}_n - E[\bar{Z}_n]) \xrightarrow{d} N(0, \lambda^\top \Phi_0 \lambda), \quad \text{as } n \to \infty. \quad (3.6.3)$$

First, we introduce the notation. Let $\lambda_i := (\lambda_{1,i}, \dots, \lambda_{m,i})^\top$, $i = 1, 2$, and

$$\Lambda_n = \mathrm{diag}(\underbrace{\lambda_1^\top, \dots, \lambda_1^\top}_{n \text{ times}}, \underbrace{\lambda_2^\top, \dots, \lambda_2^\top}_{n \text{ times}})^\top. \quad (3.6.4)$$

So $\Lambda_n$ is a $2mn \times 2mn$ diagonal matrix including $n$ copies of $\lambda_1$ and $\lambda_2$ on the diagonal. Let

$$Y_{n,i}(j) := (Y^1_{n,i}(j), Y^2_{n,i}(j), \dots, Y^m_{n,i}(j))^\top, \quad i = 1, 2,\ j = 1, 2, \dots, n, \quad (3.6.5)$$

and

$$W_n = (Y^\top_{n,1}(1), Y^\top_{n,1}(2), \dots, Y^\top_{n,1}(n), Y^\top_{n,2}(1), Y^\top_{n,2}(2), \dots, Y^\top_{n,2}(n))^\top, \quad (3.6.6)$$

where $W_n$ is a $2mn$-dimensional vector.
Then we have

$$S_n := n^{1/2}\lambda^\top(\bar{Z}_n - E[\bar{Z}_n]) = n^{-1/2}\big(W_n^\top \Lambda_n W_n - E[W_n^\top \Lambda_n W_n]\big). \quad (3.6.7)$$

Let

$$V_n = E[W_n W_n^\top] \quad (3.6.8)$$

be the covariance matrix of $W_n$. For $1 \le i_1, i_2 \le 2$, $1 \le j_1, j_2 \le n$, $1 \le k_1, k_2 \le m$, let

$$l_1 = (i_1-1)mn + (j_1-1)m + k_1, \quad l_2 = (i_2-1)mn + (j_2-1)m + k_2.$$

So the $(l_1, l_2)$ entry of $V_n$ is

$$V_n(l_1, l_2) = E[Y^{k_1}_{n,i_1}(j_1)\, Y^{k_2}_{n,i_2}(j_2)] = \sigma^{k_1 k_2}_{n,i_1 i_2}(j_2 - j_1). \quad (3.6.9)$$

Let $\widetilde{W}_n = V_n^{-1/2} W_n$ and $\Gamma_n = 2n^{-1/2} V_n^{1/2} \Lambda_n V_n^{1/2}$. Then we have

$$S_n = \tfrac{1}{2}\big(\widetilde{W}_n^\top \Gamma_n \widetilde{W}_n - E[\widetilde{W}_n^\top \Gamma_n \widetilde{W}_n]\big). \quad (3.6.10)$$

It is easy to see that $\widetilde{W}_n \sim N_{2mn}(0, I)$, where $I$ is the identity matrix. There exists an orthogonal matrix $Q$ such that $Q^\top \Gamma_n Q$ is a diagonal matrix whose diagonal entries are the eigenvalues of $\Gamma_n$, denoted by $\gamma_{n,j}$, $j = 1, 2, \dots, 2mn$. Also, $U := Q^\top \widetilde{W}_n \sim N_{2mn}(0, I)$. Therefore, for $|\theta| < \min_{1 \le j \le 2mn} |\gamma_{n,j}|^{-1}$, the cumulant generating function of $\frac{1}{2}\widetilde{W}_n^\top \Gamma_n \widetilde{W}_n$ is given by

$$\log E\big[e^{\frac{\theta}{2}\widetilde{W}_n^\top \Gamma_n \widetilde{W}_n}\big] = \log E\big[e^{\frac{\theta}{2} U^\top (Q^\top \Gamma_n Q) U}\big] = \log E\big[e^{\frac{\theta}{2}\sum_{j=1}^{2mn} \gamma_{n,j} U_j^2}\big] = -\frac{1}{2}\sum_{j=1}^{2mn} \log(1 - \theta\gamma_{n,j}). \quad (3.6.11)$$

So the cumulant generating function of $S_n$ is given by

$$k_n(\theta) := \log E[e^{\theta S_n}] = -\frac{1}{2}\sum_{j=1}^{2mn} \big(\log(1 - \theta\gamma_{n,j}) + \theta\gamma_{n,j}\big). \quad (3.6.12)$$

As in the proof of Theorem 3.2 in [KW95], it is sufficient to prove that

$$\mathrm{tr}(\Gamma_n^4) = \sum_{j=1}^{2mn} \gamma_{n,j}^4 \to 0, \quad \text{as } n \to \infty. \quad (3.6.13)$$

First of all, let us show why (3.6.13) ensures the asymptotic normality of $S_n$. The argument is very similar to that in [KW95]. Applying Taylor's expansion to $\log(1 - \theta\gamma_{n,j})$ at $\theta = 0$, we obtain

$$k_n(\theta) = \frac{\theta^2}{4}\sum_{j=1}^{2mn} \gamma_{n,j}^2 + \frac{\theta^3}{6}\sum_{j=1}^{2mn} \gamma_{n,j}^3 + \frac{\theta^4}{8}\sum_{j=1}^{2mn} (1 - \bar{\theta}_{n,j}\gamma_{n,j})^{-4}\gamma_{n,j}^4, \quad (3.6.14)$$

where $\bar{\theta}_{n,j}$ is between $0$ and $\theta$. Let us consider the term $\frac{1}{2}\sum_{j=1}^{2mn}\gamma_{n,j}^2$. It follows from (3.6.9) that

$$\frac{1}{2}\sum_{j=1}^{2mn}\gamma_{n,j}^2 = \frac{1}{2}\mathrm{tr}(\Gamma_n^2) = \frac{2}{n}\mathrm{tr}\big((V_n\Lambda_n)^2\big) = \frac{2}{n}\sum_{l_1}\sum_{l_2} (V_n\Lambda_n)(l_1,l_2)\,(V_n\Lambda_n)(l_2,l_1)$$
$$= \frac{2}{n}\sum_{l_1}\sum_{l_2} V_n(l_1,l_2)\Lambda_n(l_2,l_2)\,V_n(l_2,l_1)\Lambda_n(l_1,l_1)$$
$$= \frac{2}{n}\sum_{i_1,i_2=1}^{2}\sum_{k_1,k_2=1}^{m}\sum_{j_1,j_2=1}^{n} \lambda_{k_1,i_1}\lambda_{k_2,i_2}\big(\sigma^{k_1k_2}_{n,i_1i_2}(j_2-j_1)\big)^2, \quad (3.6.15)$$

where $l_r = (i_r-1)mn + (j_r-1)m + k_r$, $1 \le i_r \le 2$, $1 \le j_r \le n$, $1 \le k_r \le m$, $r = 1, 2$. On the other hand,

$$\lambda^\top (n\Phi_n) \lambda = n\,(\lambda_1^\top, \lambda_2^\top)\begin{pmatrix}\Phi_{n,11} & \Phi_{n,12} \\ \Phi_{n,12} & \Phi_{n,22}\end{pmatrix}\begin{pmatrix}\lambda_1 \\ \lambda_2\end{pmatrix} = n\sum_{i_1,i_2=1}^2\sum_{k_1,k_2=1}^m \lambda_{k_1,i_1}\lambda_{k_2,i_2}\,\phi^{k_1k_2}_{n,i_1i_2}$$
$$= \frac{2}{n}\sum_{i_1,i_2=1}^2\sum_{k_1,k_2=1}^m\sum_{j_1,j_2=1}^n \lambda_{k_1,i_1}\lambda_{k_2,i_2}\big(\sigma^{k_1k_2}_{n,i_1i_2}(j_2-j_1)\big)^2. \quad (3.6.16)$$

Hence, by (3.6.15), (3.6.16) and Theorem 3.4.2, we have

$$\frac{1}{2}\sum_{j=1}^{2mn}\gamma_{n,j}^2 = \lambda^\top (n\Phi_n)\lambda \to \lambda^\top \Phi_0 \lambda, \quad \text{as } n \to \infty. \quad (3.6.17)$$

Next, let us consider the second term in (3.6.14). By (3.6.13), we have

$$\max_{1\le j\le 2mn} |\gamma_{n,j}| \le \Big(\sum_{j=1}^{2mn}\gamma_{n,j}^4\Big)^{1/4} \to 0, \quad \text{as } n \to \infty, \quad (3.6.18)$$

which implies

$$\Big|\sum_{j=1}^{2mn}\gamma_{n,j}^3\Big| \le \max_{1\le j\le 2mn}|\gamma_{n,j}| \sum_{j=1}^{2mn}\gamma_{n,j}^2 \to 0, \quad \text{as } n \to \infty. \quad (3.6.19)$$

Finally, let us consider the third term in (3.6.14). By (3.6.18), we know that $\delta := \sup_{n\ge 1}\max_{1\le j\le 2mn}|\gamma_{n,j}|$ is positive and finite. If we restrict attention to $|\theta| \le \frac{1}{2}\delta^{-1}$, we have $(1-\bar{\theta}_{n,j}\gamma_{n,j})^{-4} \le 16$ and hence, for $\theta \in (-\frac{1}{2}\delta^{-1}, \frac{1}{2}\delta^{-1})$,

$$\Big|\sum_{j=1}^{2mn}(1-\bar{\theta}_{n,j}\gamma_{n,j})^{-4}\gamma_{n,j}^4\Big| \le 16\sum_{j=1}^{2mn}\gamma_{n,j}^4 \to 0, \quad \text{as } n \to \infty. \quad (3.6.20)$$

Therefore, by (3.6.17), (3.6.19) and (3.6.20), for all $\theta \in (-\frac{1}{2}\delta^{-1}, \frac{1}{2}\delta^{-1})$, we have

$$k_n(\theta) \to \frac{\theta^2}{2}\lambda^\top \Phi_0 \lambda, \quad (3.6.21)$$

which is sufficient to prove

$$S_n = n^{1/2}\lambda^\top(\bar{Z}_n - E[\bar{Z}_n]) \xrightarrow{d} N(0, \lambda^\top\Phi_0\lambda), \quad \text{as } n \to \infty.$$

Now we only need to prove (3.6.13). We have

$$\mathrm{tr}(\Gamma_n^4) = \frac{16}{n^2}\mathrm{tr}\big((V_n\Lambda_n)^4\big) = \frac{16}{n^2}\sum_{l_1,l_2,l_3,l_4=1}^{2mn}(V_n\Lambda_n)(l_1,l_2)(V_n\Lambda_n)(l_2,l_3)(V_n\Lambda_n)(l_3,l_4)(V_n\Lambda_n)(l_4,l_1)$$
$$= \frac{16}{n^2}\sum_{i_1,\dots,i_4=1}^2\sum_{k_1,\dots,k_4=1}^m \lambda_{k_1,i_1}\lambda_{k_2,i_2}\lambda_{k_3,i_3}\lambda_{k_4,i_4}\sum_{j_1,\dots,j_4=1}^n \sigma^{k_1k_2}_{n,i_1i_2}(j_2-j_1)\,\sigma^{k_2k_3}_{n,i_2i_3}(j_3-j_2)\,\sigma^{k_3k_4}_{n,i_3i_4}(j_4-j_3)\,\sigma^{k_4k_1}_{n,i_4i_1}(j_1-j_4), \quad (3.6.22)$$

where $l_r = (i_r-1)mn + (j_r-1)m + k_r$, $r = 1, \dots, 4$. Let

$$\Psi_n(k_1,\dots,k_4; i_1,\dots,i_4) := \sum_{j_1,\dots,j_4=1}^n \sigma^{k_1k_2}_{n,i_1i_2}(h_1)\,\sigma^{k_2k_3}_{n,i_2i_3}(h_2)\,\sigma^{k_3k_4}_{n,i_3i_4}(h_3)\,\sigma^{k_4k_1}_{n,i_4i_1}(h_1+h_2+h_3), \quad (3.6.23)$$

where $h_i = j_{i+1} - j_i$, $i = 1, 2, 3$. Given $h_1, h_2$ and $h_3$, the cardinality of the corresponding index set satisfies

$$\#\{(j_1, j_2, \dots, j_4) \mid 1 \le j_1, \dots, j_4 \le n\} \le n. \quad (3.6.24)$$

Hence,

$$|\Psi_n(k_1,\dots,k_4;i_1,\dots,i_4)| \le n\sum_{|h_1|,|h_2|,|h_3|\le n-1} \big|\sigma^{k_1k_2}_{n,i_1i_2}(h_1)\,\sigma^{k_2k_3}_{n,i_2i_3}(h_2)\,\sigma^{k_3k_4}_{n,i_3i_4}(h_3)\,\sigma^{k_4k_1}_{n,i_4i_1}(h_1+h_2+h_3)\big|. \quad (3.6.25)$$

Further, by Theorem 3.4.1, we have

$$|\Psi_n(k_1,\dots,k_4;i_1,\dots,i_4)| \le \mathrm{const}\cdot n \prod_{r=1}^3 \sum_{h_r=-n+1}^{n-1} |h_r|^{\frac{\alpha_{i_ri_r}}{2}+\frac{\alpha_{i_{r+1}i_{r+1}}}{2}-4} \le \mathrm{const}\cdot n \prod_{r=1}^3 \sum_{h_r=-\infty}^{\infty} |h_r|^{\frac{\alpha_{i_ri_r}}{2}+\frac{\alpha_{i_{r+1}i_{r+1}}}{2}-4} = O(n). \quad (3.6.26)$$

The last equality holds since $\frac{\alpha_{i_ri_r}}{2}+\frac{\alpha_{i_{r+1}i_{r+1}}}{2}-4 < -2$. Therefore, by (3.6.22) and (3.6.26), we have

$$\mathrm{tr}(\Gamma_n^4) = O(n^{-1}) \to 0, \quad \text{as } n \to \infty.$$

Proof of Theorem 3.4.4.
Recall that

$$T^u_{n,i} = \frac{\bar{Z}^u_{n,i} - E\bar{Z}^u_{n,i}}{E\bar{Z}^u_{n,i}}. \quad (3.6.27)$$

If

$$|T^u_{n,i}| \le \xi < \frac{1}{2}, \quad (3.6.28)$$

then by Taylor expansion we have

$$\log(1 + T^u_{n,i}) = T^u_{n,i} - \frac{1}{2}(T^u_{n,i})^2 + R^u_{n,i}, \quad (3.6.29)$$

where $|R^u_{n,i}| \le \mathrm{const}\cdot\xi\,(T^u_{n,i})^2$. Hence,

$$E[\log(1+T^u_{n,i})] = E\Big[T^u_{n,i} - \frac{1}{2}(T^u_{n,i})^2 + R^u_{n,i}\Big] = -\frac{1}{2}E(T^u_{n,i})^2 + ER^u_{n,i}, \quad (3.6.30)$$

and

$$E[|R^u_{n,i}|; |T^u_{n,i}| \le \xi] \le \mathrm{const}\cdot\xi\, E[(T^u_{n,i})^2; |T^u_{n,i}| \le \xi] = O(n^{-1}). \quad (3.6.31)$$

By Lemma 3.6.1, we obtain

$$\sum_{u=1}^m L_{u,i}\, E[\log(1+T^u_{n,i}); |T^u_{n,i}| > \xi] = o(n^{-1}), \quad (3.6.32)$$

and hence

$$\sum_{u=1}^m |L_{u,i}|\, E[|R^u_{n,i}|; |T^u_{n,i}| > \xi] \le \sum_{u=1}^m |L_{u,i}|\, E\Big[|\log(1+T^u_{n,i})| + |T^u_{n,i}| + \frac{1}{2}(T^u_{n,i})^2; |T^u_{n,i}| > \xi\Big] = O(n^{-1}). \quad (3.6.33)$$

Therefore,

$$\sum_{u=1}^m L_{u,i}\, E[\log(1+T^u_{n,i})] = O(n^{-1}). \quad (3.6.34)$$

On the other hand, by assumption (3.4.12), we have

$$E\bar{Z}^u_{n,i} = \sigma^{uu}_{n,ii}(0) = \mathrm{const}\cdot u^{\alpha_{ii}}\big(1 + O(n^{-\beta_{ii}})\big), \quad (3.6.35)$$

and hence, by (3.4.14),

$$\sum_{u=1}^m L_{u,i}\log E\bar{Z}^u_{n,i} = \alpha_{ii} + O(n^{-\beta_{ii}}). \quad (3.6.36)$$

Therefore, by (3.6.34) and (3.6.36),

$$E[\hat{\alpha}_{ii} - \alpha_{ii}] = E\Big[\sum_{u=1}^m L_{u,i}\big(\log\bar{Z}^u_{n,i} - \alpha_{ii}\log u\big)\Big] = O(n^{-1}) + O(n^{-\beta_{ii}}). \quad (3.6.37)$$

Proof of Theorem 3.4.5.
First, consider $E(\hat{\alpha}_{ii}-\alpha_{ii})^2$:

$$E(\hat{\alpha}_{ii}-\alpha_{ii})^2 = \sum_{u=1}^m\sum_{v=1}^m L_{u,i}L_{v,i}\, E\big[(\log\bar{Z}^u_{n,i}-\alpha_{ii}\log u)(\log\bar{Z}^v_{n,i}-\alpha_{ii}\log v)\big]$$
$$= \sum_{u,v} L_{u,i}L_{v,i}\, E\big[(\log(1+T^u_{n,i}) + \log E\bar{Z}^u_{n,i} - \alpha_{ii}\log u)(\log(1+T^v_{n,i}) + \log E\bar{Z}^v_{n,i} - \alpha_{ii}\log v)\big]$$
$$= \sum_{u,v} L_{u,i}L_{v,i}\, E[\log(1+T^u_{n,i})\log(1+T^v_{n,i})]$$
$$\quad + \sum_{u,v} L_{u,i}L_{v,i}\, E[\log(1+T^u_{n,i})]\,(\log E\bar{Z}^v_{n,i}-\alpha_{ii}\log v)$$
$$\quad + \sum_{u,v} L_{u,i}L_{v,i}\, (\log E\bar{Z}^u_{n,i}-\alpha_{ii}\log u)\,E[\log(1+T^v_{n,i})]$$
$$\quad + \sum_{u,v} L_{u,i}L_{v,i}\, (\log E\bar{Z}^u_{n,i}-\alpha_{ii}\log u)(\log E\bar{Z}^v_{n,i}-\alpha_{ii}\log v). \quad (3.6.38)$$

By (3.6.34) and (3.6.36), we have

$$\sum_{u,v} L_{u,i}L_{v,i}\, E[\log(1+T^u_{n,i})]\,(\log E\bar{Z}^v_{n,i}-\alpha_{ii}\log v) = O(n^{-1-\beta_{ii}}),$$
$$\sum_{u,v} L_{u,i}L_{v,i}\, (\log E\bar{Z}^u_{n,i}-\alpha_{ii}\log u)\,E[\log(1+T^v_{n,i})] = O(n^{-1-\beta_{ii}}),$$
$$\sum_{u,v} L_{u,i}L_{v,i}\, (\log E\bar{Z}^u_{n,i}-\alpha_{ii}\log u)(\log E\bar{Z}^v_{n,i}-\alpha_{ii}\log v) = O(n^{-2\beta_{ii}}). \quad (3.6.39)$$

Next, we study the first term on the right-hand side of (3.6.38). First,

$$E[\log(1+T^u_{n,i})\log(1+T^v_{n,i}); |T^u_{n,i}|<\xi, |T^v_{n,i}|<\xi]$$
$$= E\big[(T^u_{n,i}-\tfrac12(T^u_{n,i})^2+R^u_{n,i})(T^v_{n,i}-\tfrac12(T^v_{n,i})^2+R^v_{n,i}); |T^u_{n,i}|<\xi, |T^v_{n,i}|<\xi\big], \quad (3.6.40)$$

which expands into nine terms: the leading term $E[T^u_{n,i}T^v_{n,i}; |T^u_{n,i}|<\xi, |T^v_{n,i}|<\xi]$ and eight remainder terms involving $(T^u_{n,i})^2$, $R^u_{n,i}$ and their $v$-counterparts. By Theorem 3.4.2, we have

$$E[T^u_{n,i}T^v_{n,i}] = O(n^{-1}). \quad (3.6.41)$$

Moreover,

$$E[T^u_{n,i}(T^v_{n,i})^2] = o(n^{-1}), \quad E[(T^u_{n,i})^2(T^v_{n,i})^2] = o(n^{-1}),$$
$$E[R^u_{n,i}R^v_{n,i}; |T^u_{n,i}|<\xi, |T^v_{n,i}|<\xi] = o(n^{-1}), \quad E[R^u_{n,i}T^v_{n,i}; |T^u_{n,i}|<\xi, |T^v_{n,i}|<\xi] = o(n^{-1}),$$
$$E[R^u_{n,i}(T^v_{n,i})^2; |T^u_{n,i}|<\xi, |T^v_{n,i}|<\xi] = o(n^{-1}). \quad (3.6.42)$$

Therefore,

$$E[\log(1+T^u_{n,i})\log(1+T^v_{n,i}); |T^u_{n,i}|<\xi, |T^v_{n,i}|<\xi] = O(n^{-1}). \quad (3.6.43)$$

Second, by Lemma 3.6.1,

$$\big|E[\log(1+T^u_{n,i})\log(1+T^v_{n,i}); |T^u_{n,i}|>\xi, |T^v_{n,i}|<\xi]\big| \le |\log(1-\xi)|\, E[|\log(1+T^u_{n,i})|; |T^u_{n,i}|>\xi] = o(n^{-1}). \quad (3.6.44)$$

Similarly,

$$\big|E[\log(1+T^u_{n,i})\log(1+T^v_{n,i}); |T^u_{n,i}|<\xi, |T^v_{n,i}|>\xi]\big| \le |\log(1-\xi)|\, E[|\log(1+T^v_{n,i})|; |T^v_{n,i}|>\xi] = o(n^{-1}). \quad (3.6.45)$$

Finally, by Lemma 3.6.1,

$$\big|E[\log(1+T^u_{n,i})\log(1+T^v_{n,i}); |T^u_{n,i}|>\xi, |T^v_{n,i}|>\xi]\big| \le \big(E[\log^2(1+T^u_{n,i}); |T^u_{n,i}|>\xi]\big)^{1/2}\big(E[\log^2(1+T^v_{n,i}); |T^v_{n,i}|>\xi]\big)^{1/2} = o(n^{-1}). \quad (3.6.46)$$

Therefore,

$$E[\log(1+T^u_{n,i})\log(1+T^v_{n,i})] = O(n^{-1}). \quad (3.6.47)$$

By (3.6.39) and (3.6.47), we have

$$E(\hat{\alpha}_{ii}-\alpha_{ii})^2 = O(n^{-1}) + O(n^{-2\beta_{ii}}), \quad i = 1, 2. \quad (3.6.48)$$

Next, we study the cross term $E(\hat{\alpha}_{11}-\alpha_{11})(\hat{\alpha}_{22}-\alpha_{22})$. Similarly to $E(\hat{\alpha}_{ii}-\alpha_{ii})^2$,

$$E(\hat{\alpha}_{11}-\alpha_{11})(\hat{\alpha}_{22}-\alpha_{22}) = \sum_{u=1}^m\sum_{v=1}^m L_{u,1}L_{v,2}\, E\big[(\log\bar{Z}^u_{n,1}-\alpha_{11}\log u)(\log\bar{Z}^v_{n,2}-\alpha_{22}\log v)\big]$$
$$= \sum_{u,v} L_{u,1}L_{v,2}\, E[\log(1+T^u_{n,1})\log(1+T^v_{n,2})] + \sum_{u,v} L_{u,1}L_{v,2}\, E[\log(1+T^u_{n,1})]\,(\log E\bar{Z}^v_{n,2}-\alpha_{22}\log v)$$
$$\quad + \sum_{u,v} L_{u,1}L_{v,2}\, (\log E\bar{Z}^u_{n,1}-\alpha_{11}\log u)\,E[\log(1+T^v_{n,2})] + \sum_{u,v} L_{u,1}L_{v,2}\, (\log E\bar{Z}^u_{n,1}-\alpha_{11}\log u)(\log E\bar{Z}^v_{n,2}-\alpha_{22}\log v). \quad (3.6.49)$$

So we only need to find the order of $\sum_{u,v} L_{u,1}L_{v,2}\, E[\log(1+T^u_{n,1})\log(1+T^v_{n,2})]$.

Case 1: If $\frac{\alpha_{11}+\alpha_{22}}{2} = \alpha_{12}$, by Theorem 3.4.2 and (3.3.11),

$$\sum_{u,v} L_{u,1}L_{v,2}\, E[\log(1+T^u_{n,1})\log(1+T^v_{n,2})] = O(n^{-1}), \quad (3.6.50)$$

and hence

$$E(\hat{\alpha}_{11}-\alpha_{11})(\hat{\alpha}_{22}-\alpha_{22}) = O(n^{-1}) + O(n^{-\beta_{11}-\beta_{22}}). \quad (3.6.51)$$

Case 2: If $\frac{\alpha_{11}+\alpha_{22}}{2} < \alpha_{12}$, by Theorem 3.4.2 and (3.3.10),

$$\sum_{u,v} L_{u,1}L_{v,2}\, E[\log(1+T^u_{n,1})\log(1+T^v_{n,2})] = o(n^{-1}), \quad (3.6.52)$$

and hence

$$E(\hat{\alpha}_{11}-\alpha_{11})(\hat{\alpha}_{22}-\alpha_{22}) = o(n^{-1}) + O(n^{-1-\beta_{11}}) + O(n^{-1-\beta_{22}}) + O(n^{-\beta_{11}-\beta_{22}}). \quad (3.6.53)$$

Proof of Theorem 3.4.6.
We have

$$\sqrt{n}(\hat{\alpha}_{ii}-\alpha_{ii}) = \sqrt{n}\sum_{u=1}^m L_{u,i}\big(\log\bar{Z}^u_{n,i}-\alpha_{ii}\log u\big)$$
$$= \sqrt{n}\sum_{u=1}^m L_{u,i}\log(1+T^u_{n,i}) + \sqrt{n}\sum_{u=1}^m L_{u,i}\big(\log E\bar{Z}^u_{n,i}-\alpha_{ii}\log u\big)$$
$$= \sqrt{n}\sum_{u=1}^m L_{u,i}\,T^u_{n,i}(1+e^u_{n,i}) + O(n^{-\beta_{ii}+\frac12})$$
$$= \sqrt{n}\,L_i^\top T_{n,i} + \sqrt{n}\sum_{u=1}^m L_{u,i}\,T^u_{n,i}e^u_{n,i} + O(n^{-\beta_{ii}+\frac12}), \quad (3.6.54)$$

where $e^u_{n,i} \to 0$ if $T^u_{n,i} \to 0$. By Theorem 3.4.3, we have

$$\sqrt{n}\begin{pmatrix} L_1^\top T_{n,1} \\ L_2^\top T_{n,2}\end{pmatrix} \xrightarrow{d} N(\mathbf{0},\Sigma), \quad (3.6.55)$$

where

$$\Sigma = \begin{pmatrix}\tilde{L}_1^\top\Phi_{0,11}\tilde{L}_1 & \tilde{L}_1^\top\Phi_{0,12}\tilde{L}_2 \\ \tilde{L}_2^\top\Phi_{0,21}\tilde{L}_1 & \tilde{L}_2^\top\Phi_{0,22}\tilde{L}_2\end{pmatrix}. \quad (3.6.56)$$

In particular, if $\frac{\alpha_{11}+\alpha_{22}}{2}<\alpha_{12}$, then $\Phi_{0,12}=\Phi_{0,21}=\mathbf{0}$ and hence $L_1^\top T_{n,1}$ and $L_2^\top T_{n,2}$ are asymptotically independent.

Since $\sqrt{n}\,L_{u,i}T^u_{n,i}$ converges in distribution to a normal law and $e^u_{n,i}\to 0$ as $n\to\infty$, we have $\sqrt{n}\,L_{u,i}T^u_{n,i}e^u_{n,i}=o_p(1)$ and hence $\sqrt{n}\sum_{u=1}^m L_{u,i}T^u_{n,i}e^u_{n,i}=o_p(1)$. Therefore,

$$\sqrt{n}(\hat{\alpha}_{ii}-\alpha_{ii}) = \sqrt{n}\,L_i^\top T_{n,i} + o_p(1) + O(n^{-\beta_{ii}+\frac12}). \quad (3.6.57)$$

Lemma 3.6.1. For any fixed $k \in \mathbb{Z}^+$,

$$\sum_{u=1}^m |L_{u,i}|\, E[|\log^k(1+T^u_{n,i})|; |T^u_{n,i}|>\xi] \le c\,n^{1/2}e^{-c\sqrt{n}}, \quad (3.6.58)$$

where $c$ is a constant.

Proof of Lemma 3.6.1. By Hölder's inequality, we have

$$E[\log^k(1+T^u_{n,i}); |T^u_{n,i}|>\xi] \le E^{1/2}[\log^{2k}(1+T^u_{n,i})]\, P^{1/2}(|T^u_{n,i}|>\xi). \quad (3.6.59)$$

First, we find an upper bound for $P(|T^u_{n,i}|>\xi)$. Let $U=(U_1,\dots,U_n)$, where $U_i \overset{iid}{\sim} N(0,1)$. Let $Y_n=(Y^u_{n,i}(1),\dots,Y^u_{n,i}(n))$ and denote $\mathrm{Var}(Y_n) := \Sigma_n = (\sigma^{uu}_{n,ii}(j-k))^n_{j,k=1}$. Let $\Lambda_n=\mathrm{diag}(\lambda_i)$, where $\lambda_i$, $i=1,\dots,n$, are the eigenvalues of $\Sigma_n$. Then we have

$$\bar{Z}^u_{n,i} = \frac1n Y_n^\top Y_n \overset{d}{=} \frac1n U^\top\Lambda_n U. \quad (3.6.60)$$

Denote by $\|\Lambda_n\|_2$ and $\|\Lambda_n\|_F$ the $\ell_2$ norm and Frobenius norm of $\Lambda_n$ respectively. Indeed,

$$\|\Lambda_n\|_2 = \max_{1\le j\le n}\lambda_j, \quad \|\Lambda_n\|_F = \sqrt{\sum_{j=1}^n \lambda_j^2}. \quad (3.6.61)$$

It is easy to see that $nE\bar{Z}^u_{n,i} = EU^\top\Lambda_n U = \mathrm{tr}(\Lambda_n)$ and hence $\mathrm{tr}(\Lambda_n)/n \to C_3 u^{\alpha_{ii}}$, where $C_3$ is a constant. Further, since

$$\|\Lambda_n\|_F^2 = \mathrm{tr}(\Lambda_n^2) = \mathrm{tr}(\Sigma_n^2) = \sum_{j=1,k=1}^n (\sigma^{uu}_{n,ii}(j-k))^2, \quad (3.6.62)$$

and

$$\phi^{uu}_{n,ii} = \mathrm{Var}(\bar{Z}^u_{n,i}) = \frac{2}{n^2}\sum_{j=1,k=1}^n(\sigma^{uu}_{n,ii}(j-k))^2, \quad (3.6.63)$$

we have

$$\|\Lambda_n\|_F^2 = \frac{n^2}{2}\phi^{uu}_{n,ii} \approx \frac{n}{2}\phi^{uu}_{0,ii}. \quad (3.6.64)$$

Applying Hanson and Wright's inequality [HW71], we have

$$P(|T^u_{n,i}|>\xi) = P\big(|U^\top\Lambda_nU - \mathrm{tr}(\Lambda_n)| > \mathrm{tr}(\Lambda_n)\,\xi\big) \le \exp\Big\{-\min\Big(C_1\xi\frac{\mathrm{tr}(\Lambda_n)}{\|\Lambda_n\|_2},\ C_2\xi^2\frac{(\mathrm{tr}(\Lambda_n))^2}{\|\Lambda_n\|_F^2}\Big)\Big\}, \quad (3.6.65)$$

where $C_1, C_2$ are positive constants independent of $n$, $\Lambda_n$ and $\xi$. Since $\|\Lambda_n\|_2 \le \|\Lambda_n\|_F$ and $\|\Lambda_n\|_F^2/n \to \frac12\phi^{uu}_{0,ii}$, we have

$$\frac{\mathrm{tr}(\Lambda_n)}{\|\Lambda_n\|_2} \ge \sqrt{n}\,\frac{\mathrm{tr}(\Lambda_n)/n}{\|\Lambda_n\|_F/\sqrt{n}} \gtrsim \sqrt{2}\,C_3u^{\alpha_{ii}}(\phi^{uu}_{0,ii})^{-1/2}\sqrt{n}, \quad \text{as } n\to\infty, \quad (3.6.66)$$

and

$$\frac{(\mathrm{tr}(\Lambda_n))^2}{\|\Lambda_n\|_F^2} \approx 2C_3^2u^{2\alpha_{ii}}(\phi^{uu}_{0,ii})^{-1}n. \quad (3.6.67)$$

Hence, as $n\to\infty$, (3.6.65) decays exponentially at rate $\sqrt{n}$; specifically,

$$P(|T^u_{n,i}|>\xi) \le e^{-\sqrt{2}\,C_1C_3u^{\alpha_{ii}}(\phi^{uu}_{0,ii})^{-1/2}\xi\sqrt{n}}, \quad \text{as } n\to\infty. \quad (3.6.68)$$

Second, we prove an upper bound for $E\log^{2k}(1+T^u_{n,i})$. It is easy to see that

$$E\log^{2k}(1+T^u_{n,i}) \le 2^{2k-1}\big(E\log^{2k}\bar{Z}^u_{n,i} + \log^{2k}E\bar{Z}^u_{n,i}\big). \quad (3.6.69)$$

For any $k\in\mathbb{Z}^+$, there exists $c_k$ such that $\log^{2k}x \le x^2$ for all $x>c_k$. Since $E\bar{Z}^u_{n,i} \to C_3u^{\alpha_{ii}}$,

$$E[\log^{2k}\bar{Z}^u_{n,i}; \bar{Z}^u_{n,i}>c_k] \le E(\bar{Z}^u_{n,i})^2 = (E\bar{Z}^u_{n,i})^2 + \mathrm{Var}(\bar{Z}^u_{n,i}) \to C_3^2u^{2\alpha_{ii}}. \quad (3.6.70)$$

So $E[\log^{2k}\bar{Z}^u_{n,i}; \bar{Z}^u_{n,i}>c_k]$ is uniformly bounded, and we only need to check $E[\log^{2k}\bar{Z}^u_{n,i}; \bar{Z}^u_{n,i}\le c_k]$.
Let $U^2_{\min} = \min_{1\le i\le n}U_i^2$. For $n$ large,

$$\bar{Z}^u_{n,i} = \frac1n\sum_{i=1}^n\lambda_iU_i^2 \ge \frac{\mathrm{tr}(\Lambda_n)}{n}U^2_{\min} \ge \frac12C_3u^{\alpha_{ii}}U^2_{\min} =: CU^2_{\min}. \quad (3.6.71)$$

Let $f_n(x)$ be the density function of $U^2_{\min}$, that is,

$$f_n(x)\,dx := P(U^2_{\min}\in dx) = \frac{n}{\sqrt{2\pi x}}\,e^{-x/2}\Big(2\int_{\sqrt{x}}^\infty\frac{1}{\sqrt{2\pi}}e^{-y^2/2}\,dy\Big)^{n-1}dx. \quad (3.6.72)$$

It is easy to check that $f_n(x)\le \frac{2n}{\sqrt{2\pi x}}$. Hence, when $n$ is large, we obtain

$$E[\log^{2k}\bar{Z}^u_{n,i}; \bar{Z}^u_{n,i}\le c_k] \le E[\log^{2k}(CU^2_{\min}); U^2_{\min}\le 1/C] \le \int_0^{1/C}\log^{2k}(Cx)\,f_n(x)\,dx \le cn. \quad (3.6.73)$$

By (3.6.69), (3.6.70) and (3.6.73), we obtain $E\log^{2k}(1+T^u_{n,i}) \le cn$, and hence, when $n$ is large,

$$E[\log^k(1+T^u_{n,i}); |T^u_{n,i}|>\xi] \le c\,n^{1/2}e^{-c\sqrt{n}}. \quad (3.6.74)$$

Chapter 4

A Bayesian functional data model for coupling high-dimensional LiDAR and forest variables over large geographic domains

Recent advances in remote sensing, specifically Light Detection and Ranging (LiDAR) sensors, provide the data needed to quantify forest variables at a fine spatial resolution over large domains. We propose a framework to couple high-dimensional and spatially indexed LiDAR signals with forest variables using a fully Bayesian functional spatial data analysis. The proposed modeling framework is illustrated by a simulation study and by analyzing LiDAR and spatially coinciding forest inventory data collected on the Penobscot Experimental Forest, Maine.
4.1 Introduction

Linking long-term forest inventory with air- and space-borne Light Detection and Ranging (LiDAR) datasets via regression models offers an attractive approach to mapping forest above-ground biomass (AGB) at stand, regional, continental, and global scales. LiDAR data have shown great potential for use in estimating spatially explicit forest variables, including AGB, over a range of geographic scales [AHV+09, BMF+13, FBM11, IDUA13, MMT11, Næs11, NNR+13]. Encouraging results from these and many other studies have spurred massive investment in new LiDAR sensors and sensor platforms, as well as extensive campaigns to collect calibration data. For example, ICESat-2, planned for launch in 2017, will be equipped with a LiDAR sensor able to gather data from space at unprecedented spatial resolutions [AZB+10]. As currently proposed, ICESat-2 will be a photon-counting sensor capable of recording measurements on an approximately 70 cm footprint [ICE15]. The Global Ecosystem Dynamics Investigation LiDAR (GEDI) will be an International Space Station mounted system capable of producing 25 m diameter footprint waveforms and is scheduled to be operational in 2018 [GED14]. One of GEDI's core objectives is to quantify the distribution of AGB at a fine spatial resolution. NASA Goddard's LiDAR, Hyper-spectral, and Thermal (G-LiHT) imager is an air-borne platform developed, in part, to examine how future space-originating LiDAR, e.g., ICESat-2, GEDI, or other platforms, may be combined with field-based validation measurements to build predictive models for AGB and other forest variables [AAWN13, CCN+13].
In order to effectively extract information from these high-dimensional massive datasets, we need a modeling framework to capture within and among LiDAR signal/forest variable association within and across locations. However, the computational complexity of such models increases in cubic order with the number of spatial locations, the dimension of the LiDAR signal, and the number of forest variables, a characteristic common to multivariate spatial process models. In this chapter, we propose a modeling framework that explicitly: 1) reduces the dimensionality of signals in an optimal way (i.e., preserves the information that describes the maximum variability in the response variable); 2) propagates uncertainty in data and parameters through to prediction; and 3) acknowledges and leverages spatial dependence among the derived regressors and model residuals to meet statistical assumptions and improve prediction.

The rest of this chapter is organized as follows. Section 4.2 gives a review of the Gaussian predictive process and the modified Gaussian predictive process. The joint model coupling the LiDAR signal and the forest variables is proposed in Section 4.3. In Section 4.4, we complete the hierarchical specification and outline the Gibbs sampler for the model. Spatial predictions for forest variables are derived in Section 4.5. Section 4.6 provides illustrations with a simulation and a forestry data analysis.

4.2 Preliminary: Modified Gaussian predictive process

A Gaussian random field $\{w(s),\ s\in D\subset\mathbb{R}^N\}$ is often used to model the residual of spatial point-referenced data. Let $C_w(s,t;\theta) := \mathrm{Cov}(w(s), w(t))$ be the covariance function of $w(\cdot)$ with parameters $\theta$. Assume that the data are observed at $n$ locations, say $s_1,\dots,s_n$. Estimating the parameters requires inverting an $n\times n$ covariance matrix, which involves $O(n^3)$ computations. When the sample size $n$ is very big, this is computationally very expensive and even infeasible.
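To make the cost concrete, here is a minimal sketch (illustrative Python under an assumed exponential covariance and arbitrary parameter values; not code from the thesis) of the dense computation whose $O(n^3)$ factorization becomes the bottleneck:

```python
import numpy as np

def exp_cov(locs, sigma2=1.0, phi=1.0):
    """Dense covariance C[i,j] = sigma2 * exp(-phi * ||s_i - s_j||) (assumed form)."""
    d = np.linalg.norm(locs[:, None, :] - locs[None, :, :], axis=-1)
    return sigma2 * np.exp(-phi * d)

rng = np.random.default_rng(1)
locs = rng.uniform(size=(500, 2))    # n = 500 point-referenced locations in D ⊂ R^2
C = exp_cov(locs)                    # n x n covariance: O(n^2) storage
# Cholesky factorization (the O(n^3) step behind likelihood evaluation);
# a small jitter keeps the matrix numerically positive definite.
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(locs)))
```

Both the $O(n^2)$ storage and the $O(n^3)$ factorization grow quickly with $n$, which is exactly the bottleneck the low-rank predictive process is designed to avoid.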
To address this issue, [BGFS08] introduced the Gaussian predictive process model, which is a degenerate Gaussian random field obtained by projecting the parent random field onto a lower-dimensional subspace. Specifically, by choosing a set of "knots" $S^* = \{s^*_1,\dots,s^*_r\}\subset D$, they define the Gaussian predictive process

$$\tilde{w}(s) = E\big(w(s)\mid w(s^*_1),\dots,w(s^*_r)\big). \quad (4.2.1)$$

Let $c(s;\theta) = [C_w(s,s^*_j;\theta)]^r_{j=1}$ and $C^*(\theta) = [C_w(s^*_i,s^*_j;\theta)]^r_{i,j=1}$. The covariance function of $\tilde{w}(s)$ can be derived directly from its parent process, that is,

$$\mathrm{Cov}(\tilde{w}(s),\tilde{w}(t)) = c^\top(s;\theta)\,C^{*-1}(\theta)\,c(t;\theta). \quad (4.2.2)$$

Inverting the corresponding covariance matrix only requires $O(nr^2)$ computations with $r \ll n$. However, the predictive process systematically underestimates the marginal variance of the parent process, since

$$C_w(s,s;\theta) - c^\top(s;\theta)\,C^{*-1}(\theta)\,c(s;\theta) \ge 0. \quad (4.2.3)$$

As a consequence, the nugget variance in a spatial regression model is usually overestimated by absorbing the variability dropped by the predictive process. To remedy this problem, [FSBG09] proposed the modified Gaussian predictive process, obtained by adding a Gaussian noise to the predictive process. Specifically, they define

$$\breve{w}(s) = \tilde{w}(s) + \tilde{\epsilon}(s), \quad (4.2.4)$$

where $\tilde{\epsilon}(s)\overset{ind}{\sim}N\big(0,\ C_w(s,s;\theta) - c^\top(s;\theta)C^{*-1}(\theta)c(s;\theta)\big)$ is a spatially independent Gaussian random field with varying marginal variance. Hence, the modified Gaussian predictive process has the same marginal variance as the parent process.

4.3 The model

Let $D\subset\mathbb{R}^2$ be a spatial domain and let $s$ be a generic point in $D$. At location $s$, the outcome variable $y(s)$ denotes the above-ground biomass. Let $x\in[0,M]\subset\mathbb{R}^+$ be the height from the ground, with $M$ the maximum height. At location $(s,x)$, the outcome variable $z(s,x)$ denotes the strength of the LiDAR signal.

Assume we have observed both $y$ and $z$ at a set of locations $S=\{s_1,\dots,s_n\}$. For each location $s_i$, $z$ is measured at heights $x_1,x_2,\dots,x_{n_x}$. Moreover, we observe the LiDAR signal $z$ at many other locations where $y$ has not been measured.
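The projection in (4.2.1)-(4.2.4) can be sketched numerically as follows (illustrative Python under an assumed exponential covariance and arbitrary knots; not the thesis implementation):

```python
import numpy as np

def cov(a, b, sigma2=1.0, phi=3.0):
    """Exponential covariance C_w(s, t) between two location sets (assumed form)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return sigma2 * np.exp(-phi * d)

rng = np.random.default_rng(0)
locs = rng.uniform(size=(200, 2))    # observation locations s_1, ..., s_n
knots = rng.uniform(size=(25, 2))    # knots s*_1, ..., s*_r with r << n

Cstar = cov(knots, knots)            # C*(theta), r x r
c = cov(locs, knots)                 # rows are c(s_i; theta)^T, n x r

# Predictive-process covariance (4.2.2): c^T C*^{-1} c, rank at most r.
C_pp = c @ np.linalg.solve(Cstar, c.T)

# Modified predictive process (4.2.4): restore the lost marginal variance
# via independent noise with variance C_w(s,s) - c^T C*^{-1} c, which is
# nonnegative by (4.2.3).
eps_var = cov(locs, locs).diagonal() - C_pp.diagonal()
C_mpp = C_pp + np.diag(eps_var)
```

The low-rank structure of `C_pp` is what reduces the $O(n^3)$ cost to $O(nr^2)$, while adding `eps_var` on the diagonal makes the marginal variances of `C_mpp` match the parent process exactly.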
4.3.1 Modified Gaussian predictive model for $z$

The signal $z(s,x)$ is modeled as

$$z(s,x) = \mu_z(s,x;\beta) + u(s,x) + \epsilon_z(s,x), \quad (4.3.1)$$

where $\mu_z$ is the mean function, the random effect $u(s,x)$ is a Gaussian random field on $\mathbb{R}^3$, and $\epsilon_z(s,x)$ is the nugget effect. Assume that the nuggets $\epsilon_z(s,x)\overset{ind}{\sim}N(0,\tau^2_z(x))$, which means the variance of the nugget is independent across locations.

Denote by $C_u(s,t,x,y;\theta_u) := \mathrm{Cov}[u(s,x),u(t,y)]$ the covariance function of the random effect $u$. We approximate the parent model by modified Gaussian predictive processes within locations. Assume that $\{x_1,\dots,x_{n_x}\}$ are the height knots at every location. Let

$$u^*(s) = (u(s,x_1),\dots,u(s,x_{n_x}))^\top,$$

whose covariance matrix at location $s$ is given by

$$V_{u(s)}(\theta_u) := \begin{pmatrix} C_u(s,s,x_1,x_1;\theta_u) & \cdots & C_u(s,s,x_1,x_{n_x};\theta_u)\\ C_u(s,s,x_2,x_1;\theta_u) & \cdots & C_u(s,s,x_2,x_{n_x};\theta_u)\\ \vdots & \ddots & \vdots \\ C_u(s,s,x_{n_x},x_1;\theta_u) & \cdots & C_u(s,s,x_{n_x},x_{n_x};\theta_u)\end{pmatrix}, \quad (4.3.2)$$

and let

$$f(s,x;\theta_u) := (C_u(s,s,x,x_1;\theta_u),\dots,C_u(s,s,x,x_{n_x};\theta_u))^\top. \quad (4.3.3)$$

So the modified Gaussian predictive process model for $z$ within locations is

$$z(s,x) = \mu_z(s,x;\beta) + f^\top(s,x;\theta_u)V^{-1}_{u(s)}(\theta_u)\,u^*(s) + \epsilon^*_z(s,x), \quad (4.3.4)$$

where

$$\epsilon^*_z(s,x)\overset{ind}{\sim}N\big(0,\ \tilde{\sigma}^2_u(s,x;\theta_u)+\tau^2_z(x)\big), \quad (4.3.5)$$

with

$$\tilde{\sigma}^2_u(s,x;\theta_u) = C_u(s,s,x,x;\theta_u) - f^\top(s,x;\theta_u)V^{-1}_{u(s)}(\theta_u)f(s,x;\theta_u). \quad (4.3.6)$$

4.3.2 Joint model of $y$ and $Z$

Denote

$$Z(s) = \begin{pmatrix}z(s,x_1)\\ z(s,x_2)\\ \vdots\\ z(s,x_{n_x})\end{pmatrix},\quad \epsilon^*_Z(s)=\begin{pmatrix}\epsilon^*_z(s,x_1)\\ \epsilon^*_z(s,x_2)\\ \vdots\\ \epsilon^*_z(s,x_{n_x})\end{pmatrix},\quad F(s;\theta_u)=\begin{pmatrix}f^\top(s,x_1;\theta_u)\\ f^\top(s,x_2;\theta_u)\\ \vdots\\ f^\top(s,x_{n_x};\theta_u)\end{pmatrix},$$

and

$$\mu_Z(s;\beta) = \big(\mu_z(s,x_1;\beta),\ \mu_z(s,x_2;\beta),\ \dots,\ \mu_z(s,x_{n_x};\beta)\big)^\top.$$

At location $s$, we couple the LiDAR signal $Z$ and the above-ground biomass $y$ through the modified predictive processes $u^*(s)$. The joint model for $y$ and $Z$ is given below:

$$\begin{pmatrix}Z(s)\\ y(s)\end{pmatrix} = \begin{pmatrix}\mu_Z(s;\beta)\\ \mu_y(s;\gamma)\end{pmatrix} + \begin{pmatrix}F(s;\theta_u)V^{-1}_{u(s)}(\theta_u) & 0\\ \lambda_u^\top & 1\end{pmatrix}\begin{pmatrix}u^*(s)\\ v(s)\end{pmatrix} + \begin{pmatrix}\epsilon^*_Z(s)\\ \epsilon_y(s)\end{pmatrix}, \quad (4.3.7)$$

where $\lambda_u\in\mathbb{R}^{n_x}$, $\mu_y$ is the mean function of $y$, $v(s)$ is the random effect and $\epsilon_y$ is the nugget effect. Assume that $v$ and $\epsilon_y$ are independent of $u$ and $\epsilon_z$, that $v(\cdot)$ is a Gaussian random field on $\mathbb{R}^2$, and that $\epsilon_y(s)\overset{ind}{\sim}N(0,\tau^2_y)$.

When the number of locations $n$ is big, model estimation and prediction is computationally difficult. So we use the modified Gaussian predictive process to approximate the joint model for $y$ and $Z$ by choosing spatial knots. Let $\{s^*_1,s^*_2,\dots,s^*_{n_z}\}$ and $\{t^*_1,t^*_2,\dots,t^*_{n_y}\}$ be the spatial knots for the signal $Z$ and for $y$ respectively.

First, we approximate the spatial random effect $u^*(s)$. Let $u^* = (u^{*\top}(s^*_1),\dots,u^{*\top}(s^*_{n_z}))^\top$ and let $G_u(s;\theta_u)^\top$ be the $n_x\times n_xn_z$ block matrix whose $(1,j)$-th block is the cross covariance $C_u(s,s^*_j;\theta_u) := \mathrm{Cov}[u^*(s),u^*(s^*_j)]$, $j=1,2,\dots,n_z$, that is,

$$G_u(s;\theta_u)^\top = \big(C_u(s,s^*_1;\theta_u),\ C_u(s,s^*_2;\theta_u),\ \dots,\ C_u(s,s^*_{n_z};\theta_u)\big). \quad (4.3.8)$$

Denote by $V_u(\theta_u)$ the $n_xn_z\times n_xn_z$ block matrix whose $(i,j)$-th block is $C_u(s^*_i,s^*_j;\theta_u)$. We approximate $u^*(s)$ by the modified Gaussian predictive process below:

$$\tilde{u}(s) = G_u(s;\theta_u)^\top V^{-1}_u(\theta_u)\,u^* + \epsilon_u(s), \quad (4.3.9)$$

where

$$\epsilon_u(s)\sim N(\mathbf{0},\Sigma_u(s;\theta_u)), \quad (4.3.10)$$

with

$$\Sigma_u(s;\theta_u) = C_u(s,s;\theta_u) - G_u(s;\theta_u)^\top V^{-1}_u(\theta_u)G_u(s;\theta_u). \quad (4.3.11)$$

Next, we approximate $v(s)$. Let $g_v(s;\theta_v)^\top$ be the $1\times n_y$ vector whose $j$-th element is $C_v(s,t^*_j;\theta_v) := \mathrm{Cov}[v(s),v(t^*_j)]$, and denote by $V_v(\theta_v)$ the $n_y\times n_y$ matrix whose $(i,j)$-th element is $C_v(t^*_i,t^*_j;\theta_v)$.
Then $v(s)$ is approximated by

    \tilde v(s) = g_v(s;\theta_v)^\top V_v^{-1}(\theta_v) v^* + \eta_v(s),    (4.3.12)

where $v^* = (v(t_1),\ldots,v(t_{n_y}))^\top$ and $\eta_v(s) \sim N(0, \sigma_{\eta_v}^2(s;\theta_v))$ with

    \sigma_{\eta_v}^2(s;\theta_v) = C_v(s,s;\theta_v) - g_v(s;\theta_v)^\top V_v^{-1}(\theta_v) g_v(s;\theta_v).    (4.3.13)

Therefore, $Z(s)$ in (4.3.7) can be approximated by

    Z(s) \approx \mu_Z(s;\beta) + F(s;\theta_u) V_{u^*(s)}^{-1}(\theta_u) \tilde u(s) + \epsilon_Z(s)
         =: \mu_Z(s;\beta) + H(s;\theta_u) u^* + e_Z(s),    (4.3.14)

where

    H(s;\theta_u) := F(s;\theta_u) V_{u^*(s)}^{-1}(\theta_u) G_u(s;\theta_u)^\top V_u^{-1}(\theta_u),
    e_Z(s) := F(s;\theta_u) V_{u^*(s)}^{-1}(\theta_u) \eta_u(s) + \epsilon_Z(s) \sim N(0, D_{e_Z}(s)),

with

    D_{e_Z}(s) = F(s;\theta_u) V_{u^*(s)}^{-1}(\theta_u) \Sigma_{\eta_u}(s;\theta_u) V_{u^*(s)}^{-1}(\theta_u) F(s;\theta_u)^\top
                 + \mathrm{diag}\{ \tilde\sigma_u^2(s,x_j;\theta_u) + \tau_z^2(x_j) \}_{j=1}^{n_x}.

Also, $y(s)$ in (4.3.7) can be approximated by

    y(s) \approx \mu_y(s;\alpha) + \lambda_u^\top \tilde u(s) + \tilde v(s) + \epsilon_y(s)
         =: \mu_y(s;\alpha) + I(s;\theta_u) u^* + J(s;\theta_v) v^* + e_y(s),    (4.3.15)

where

    I(s;\theta_u) = \lambda_u^\top G_u(s;\theta_u)^\top V_u^{-1}(\theta_u),    J(s;\theta_v) = g_v(s;\theta_v)^\top V_v^{-1}(\theta_v),
    e_y(s) := \lambda_u^\top \eta_u(s) + \eta_v(s) + \epsilon_y(s) \sim N(0, \sigma_{e_y}^2(s)),    (4.3.16)

with

    \sigma_{e_y}^2(s) = \lambda_u^\top \Sigma_{\eta_u}(s;\theta_u) \lambda_u + \sigma_{\eta_v}^2(s;\theta_v) + \tau_y^2.    (4.3.17)

So our working model for $Z$ and $y$ can be rewritten as

    ( Z(s) )   ( \mu_Z(s;\beta) )   ( H(s;\theta_u)       0        ) ( u^* )   ( e_Z(s) )
    ( y(s) ) = ( \mu_y(s;\alpha) ) + ( I(s;\theta_u)  J(s;\theta_v) ) ( v^* ) + ( e_y(s) ).    (4.3.18)
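The working model above is linear in the finite-dimensional effects $w = (u^{*\top}, v^{*\top})^\top$, so the marginal covariance matrices that arise from it have a low-rank-plus-diagonal form $D + B \Sigma B^\top$. The sketch below illustrates the Sherman-Morrison-Woodbury identity used later (Section 4.4) to invert such matrices at the cost of a small inverse only; the dimensions and random matrices are arbitrary stand-ins for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 200, 10                                # n observations, r latent effects, r << n

D = np.diag(rng.uniform(0.5, 1.5, n))         # diagonal error covariance (like D_e)
B = rng.standard_normal((n, r))               # low-rank loading matrix (like B(theta))
A = rng.standard_normal((r, r))
Sigma_w = A @ A.T + r * np.eye(r)             # r x r covariance of the latent effects

# Sherman-Morrison-Woodbury:
# (D + B S B^T)^{-1} = D^{-1} - D^{-1} B (S^{-1} + B^T D^{-1} B)^{-1} B^T D^{-1}
D_inv = np.diag(1.0 / np.diag(D))
core = np.linalg.inv(np.linalg.inv(Sigma_w) + B.T @ D_inv @ B)   # only r x r inverses
smw = D_inv - D_inv @ B @ core @ B.T @ D_inv

direct = np.linalg.inv(D + B @ Sigma_w @ B.T)                    # full n x n inverse
print(np.allclose(smw, direct, atol=1e-8))    # prints: True
```

Only $r \times r$ systems are solved on the Woodbury path, which is the source of the computational gains reported for the predictive process models over the full GP.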
Let $X_Z(s) \in \mathbb{R}^{p \times n_x}$ and $X_y(s) \in \mathbb{R}^q$ be the predictors for the signal $Z$ and for $y$, respectively, and assume that $\mu_Z(s;\beta) = X_Z(s)^\top \beta$ and $\mu_y(s;\alpha) = X_y(s)^\top \alpha$. Then,

    ( Z(s) )   ( X_Z(s)^\top \beta  )   ( H(s;\theta_u)       0        ) ( u^* )   ( e_Z(s) )
    ( y(s) ) = ( X_y(s)^\top \alpha ) + ( I(s;\theta_u)  J(s;\theta_v) ) ( v^* ) + ( e_y(s) ).    (4.3.19)

4.3.3 Specification of the random effects of $Z$ and $y$

Covariance function for $u(s,x)$: [Gne02] introduced a class of nonseparable stationary covariance functions for space-time models on $\mathbb{R}^d \times \mathbb{R}$. Specifically, when $d = 2$,

    C(s_1,s_2;x_1,x_2) = \frac{\sigma^2}{2^{\nu-1}\Gamma(\nu)\,(a|x_1-x_2|^{2\alpha}+1)^{\delta+\gamma}}
        \Big( \frac{c\|s_1-s_2\|}{(a|x_1-x_2|^{2\alpha}+1)^{\gamma/2}} \Big)^{\nu}
        K_\nu\Big( \frac{c\|s_1-s_2\|}{(a|x_1-x_2|^{2\alpha}+1)^{\gamma/2}} \Big).    (4.3.20)

Here, we use the version with $\alpha = 1$, $\nu = 1/2$, $\delta = 0$ to model the covariance of $u(s,x)$, i.e.,

    C_u(s_1,s_2,x_1,x_2;\theta_u) := \frac{\sigma_u^2}{(a|x_1-x_2|^{2}+1)^{\gamma}}
        \exp\Big( -\frac{c\|s_1-s_2\|}{(a|x_1-x_2|^{2}+1)^{\gamma/2}} \Big),    (4.3.21)

where $\theta_u = (\sigma_u^2, a, \gamma, c)$, $\sigma_u^2, a, c > 0$ and $\gamma \in [0,1]$. (Here $\gamma$ denotes the space-time interaction parameter of [Gne02], relabeled since $\beta$ already denotes the regression coefficients of $Z$.)

Covariance function for $v(s)$: We employ the exponential covariance function for $v(s)$, i.e.,

    C_v(s_1,s_2;\theta_v) = \sigma_v^2 \exp\{ -\phi_v \|s_1-s_2\| \},    (4.3.22)

where $\theta_v = (\sigma_v^2, \phi_v)$.

4.4 Bayesian implementation and computational issues

4.4.1 Data equation

For the model in (4.3.19), we form the data equation in this section. Denote by $\theta = (\theta_u, \theta_v)$. Let

    O(s) = ( Z(s) )       B(s;\theta) = ( H(s;\theta_u)       0        )       b = ( \beta  )
           ( y(s) ),                     ( I(s;\theta_u)  J(s;\theta_v) ),           ( \alpha ),
    X(s) = ( X_Z(s)^\top      0      )       w = ( u^* )       e(s) = ( e_Z(s) )
           (      0      X_y(s)^\top ),           ( v^* ),             ( e_y(s) ).    (4.4.1)

The data model for (4.3.19) can be written as

    O(s_i) = X(s_i) b + B(s_i;\theta) w + e(s_i),    i = 1,2,\ldots,n.    (4.4.2)

The matrix form of the above model is

    O = Xb + B(\theta) w + e,    (4.4.3)

where

    O = (O(s_1)^\top,\ldots,O(s_n)^\top)^\top,    B(\theta) = (B(s_1;\theta)^\top,\ldots,B(s_n;\theta)^\top)^\top,
    e = (e(s_1)^\top,\ldots,e(s_n)^\top)^\top,    X = (X(s_1)^\top,\ldots,X(s_n)^\top)^\top.    (4.4.4)

Here,

    e \sim N(0, D_e),    (4.4.5)

where

    D_e = \mathrm{blockdiag}( D_0(s_1), D_0(s_2), \ldots, D_0(s_n) ),    (4.4.6)
with

    D_0(s_i) = ( D_{e_Z}(s_i)        0         )
               (      0       \sigma_{e_y}^2(s_i) ),    i = 1,2,\ldots,n.    (4.4.7)

And

    w \sim MVN(0, \Sigma_w),    (4.4.8)

where

    \Sigma_w = ( V_u(\theta_u)       0       )
               (      0        V_v(\theta_v) ).    (4.4.9)

4.4.2 Prior specifications and full conditional sampling

Denote by $\Omega$ all the parameters in the model, that is,

    \Omega = \{ \beta, \alpha, \lambda_u, \theta, \tau_y^2, \tau_z^2(x_i), i = 1,\ldots,n_x \}.    (4.4.10)

First, let us complete the hierarchical specification. The priors for all the parameters are given as follows.

    \beta \sim N(0, \mathrm{diag}(10^4)),    \alpha \sim N(0, \mathrm{diag}(10^4)),    \lambda_{u,i} \sim N(0, 10^4),  i = 1,\ldots,n_x,
    \sigma_u^2 \sim IG(2, b_{\sigma_u}),    \gamma \sim U(0,1),    a \sim U(0, a_{\max}),    c \sim U(0, c_{\max}),
    \sigma_v^2 \sim IG(2, b_{\sigma_v}),    \phi_v \sim U( -\log(0.05)/d_{s,\max}, \; -\log(0.01)/d_{s,\min} ),
    \tau_z^2(x_i) \sim IG(2, b_{\tau_z}),  i = 1,2,\ldots,n_x,    \tau_y^2 \sim IG(2, b_{\tau_y}).    (4.4.11)

We assign large enough numbers for $a_{\max}$ and $c_{\max}$. The specification of the hyperparameters for $\phi_v$ follows [RB13], where $d_{s,\min}$ and $d_{s,\max}$ are the minimum and maximum distances across all the locations. $b_{\sigma_u}$, $b_{\sigma_v}$, $b_{\tau_z}$, and $b_{\tau_y}$ are assigned from the empirical semivariogram (see, e.g., [BGFS08]).

Denote by $\Theta := \Omega \setminus b = (\lambda_u, \theta, \tau_z^2(x_i), \tau_y^2)$. We can integrate out the random effects $w$ and obtain the marginalized likelihood,

    [O \mid \Theta, b] \sim MVN( Xb, \; B(\theta)\Sigma_w B^\top(\theta) + D_e ).    (4.4.12)

So the posterior distribution of the model is

    [\Theta, b \mid O] \propto MVN( Xb, \; B(\theta)\Sigma_w B^\top(\theta) + D_e ) \, [\Theta] \, [b].    (4.4.13)

Model fitting employs a Gibbs sampler with Metropolis steps.

1. Update $b = (\beta^\top, \alpha^\top)^\top$. Let $\mu_b = (0^\top, 0^\top)^\top$ and $\Sigma_b = \mathrm{diag}(10^4)$ be the prior mean and covariance of $b$, and let

    \Sigma_{O|\Theta} = B(\theta)\Sigma_w B^\top(\theta) + D_e.    (4.4.14)

The full conditional density for $b$ is

    [b \mid O, \Theta] \sim N( \mu_b^*, \Sigma_b^* ),    (4.4.15)

where

    \Sigma_b^* = ( \Sigma_b^{-1} + X^\top \Sigma_{O|\Theta}^{-1} X )^{-1}    and    \mu_b^* = \Sigma_b^* ( \Sigma_b^{-1}\mu_b + X^\top \Sigma_{O|\Theta}^{-1} O ).    (4.4.16)

2. Update $\Theta$. We employ the block random-walk Metropolis method to sample $\Theta$.

Support on $\mathbb{R}$.
We apply log-type transformations so that the support of all the parameters becomes $(-\infty, \infty)$. Specifically, for the parameters with inverse gamma priors, we take the log-transformation, i.e.,

    \tilde\sigma_u^2 = \log\sigma_u^2,    \tilde\sigma_v^2 = \log\sigma_v^2,    \tilde\tau_z^2(x_i) = \log\tau_z^2(x_i),    and    \tilde\tau_y^2 = \log\tau_y^2.    (4.4.17)

For the parameters with uniform priors, we take the logit-transformation. Specifically,

    \tilde\phi_v = \log\frac{\phi_v - \phi_{v,\min}}{\phi_{v,\max} - \phi_v},    \tilde a = \log\frac{a}{a_{\max}-a},    \tilde\gamma = \log\frac{\gamma}{1-\gamma},    \tilde c = \log\frac{c}{c_{\max}-c},    (4.4.18)

where $\phi_{v,\min} = -\log(0.05)/d_{s,\max}$ and $\phi_{v,\max} = -\log(0.01)/d_{s,\min}$.

Log full conditional of $\tilde\Theta$. Denote by $\tilde\Theta$ the parameters of $\Theta$ after transformation. Given $O$ and $b$, the log full conditional density of $\tilde\Theta$ satisfies

    \log p(\tilde\Theta \mid O, b) \propto -\frac12 \log|\Sigma_{O|\Theta}| - \frac12 (O - Xb)^\top \Sigma_{O|\Theta}^{-1} (O - Xb) - \sum_{i=1}^{n_x} \frac{\lambda_{u,i}^2}{2\sigma_\lambda^2}
        - a_{\sigma_u}\tilde\sigma_u^2 - a_{\sigma_v}\tilde\sigma_v^2 - a_{\tau_z}\sum_{i=1}^{n_x}\tilde\tau_z^2(x_i) - a_{\tau_y}\tilde\tau_y^2
        - b_{\sigma_u} e^{-\tilde\sigma_u^2} - b_{\sigma_v} e^{-\tilde\sigma_v^2} - b_{\tau_z}\sum_{i=1}^{n_x} e^{-\tilde\tau_z^2(x_i)} - b_{\tau_y} e^{-\tilde\tau_y^2}
        + \tilde a + \tilde\gamma + \tilde c + \tilde\phi_v - 2\log\big[ (1+e^{\tilde a})(1+e^{\tilde c})(1+e^{\tilde\gamma})(1+e^{\tilde\phi_v}) \big],    (4.4.19)

where $\sigma_\lambda^2 = 10^4$ is the prior variance of the $\lambda_{u,i}$ and $a_{\sigma_u} = a_{\sigma_v} = a_{\tau_z} = a_{\tau_y} = 2$ are the inverse gamma shape parameters.

Remarks: The inverse of $\Sigma_{O|\Theta}$ is evaluated by applying the Sherman-Woodbury-Morrison formula (see, e.g., [Har97]), which only requires inverting a matrix of dimension $(n_x n_z + n_y) \times (n_x n_z + n_y)$. Specifically,

    \Sigma_{O|\Theta}^{-1} = ( D_e + B(\theta)\Sigma_w B^\top(\theta) )^{-1}
        = D_e^{-1} - D_e^{-1} B(\theta) \big( \Sigma_w^{-1} + B^\top(\theta) D_e^{-1} B(\theta) \big)^{-1} B^\top(\theta) D_e^{-1}.    (4.4.20)

4.5 Predictions

In this section, we map $y$ onto the locations where the signals $Z$ have been observed, through prediction. Section 4.5.1 states the procedure to predict both $y$ and $Z$ at new locations. Then, we derive the prediction of $y$ given that $Z$ is known in Section 4.5.2.

4.5.1 Predictions for $y$ and $Z$ at new locations

Assume that there are no observations of $y$ and $Z$ at locations $\tilde s_1,\ldots,\tilde s_m$, but there are records of the predictors $X$ at those locations. Let $\tilde O = (O^\top(\tilde s_1), O^\top(\tilde s_2),\ldots,O^\top(\tilde s_m))^\top$. Our goal is to find the conditional distribution of $\tilde O$ given all the observations.

We stack the data in a different way so as to separate $z$ and $y$. Denote by

    \tilde O = ( \tilde Z )       \tilde X = ( \tilde X_Z      0      )
               ( \tilde y ),                  (     0      \tilde X_y ),    (4.5.1)

where
    \tilde Z = ( Z(\tilde s_1)^\top, \ldots, Z(\tilde s_m)^\top )^\top,    \tilde y = ( y(\tilde s_1), \ldots, y(\tilde s_m) )^\top,
    \tilde X_Z = ( X_Z(\tilde s_1)^\top, \ldots, X_Z(\tilde s_m)^\top )^\top,    \tilde X_y = ( X_y(\tilde s_1)^\top, \ldots, X_y(\tilde s_m)^\top )^\top.    (4.5.2)

It is easy to check that

    ( O       )                ( ( X       )      ( \Sigma_{O|\Theta}        C_{O,\tilde O|\Theta}  ) )
    ( \tilde O ) \Big| \Omega \sim N( ( \tilde X ) b,  ( C_{O,\tilde O|\Theta}^\top  \Sigma_{\tilde O|\Theta} ) ),    (4.5.3)

where $\Sigma_{\tilde O|\Theta}$ is the covariance matrix of $\tilde O$ and $C_{O,\tilde O|\Theta}$ is the cross-covariance matrix between $O$ and $\tilde O$. Therefore, we can obtain the conditional distribution of $\tilde O$ given $O$ and $\Omega$,

    [\tilde O \mid O, \Omega] \sim N( \mu_{\tilde O|O}, \Sigma_{\tilde O|O} ),    (4.5.4)

where

    \mu_{\tilde O|O} = \tilde X b + C_{O,\tilde O|\Theta}^\top \Sigma_{O|\Theta}^{-1} (O - Xb) =: ( \mu_{\tilde Z|O}^\top, \mu_{\tilde y|O}^\top )^\top,
    \Sigma_{\tilde O|O} = \Sigma_{\tilde O|\Theta} - C_{O,\tilde O|\Theta}^\top \Sigma_{O|\Theta}^{-1} C_{O,\tilde O|\Theta}
        =: ( \Sigma_{\tilde Z \tilde Z}        \Sigma_{\tilde Z \tilde y} )
           ( \Sigma_{\tilde Z \tilde y}^\top   \Sigma_{\tilde y \tilde y} ).    (4.5.5)

Usually, Bayesian prediction proceeds by sampling from the posterior predictive distribution

    p(\tilde O \mid O) = \int p(\tilde O \mid O, \Omega) \, p(\Omega \mid O) \, d\Omega.    (4.5.6)

In this case, for each posterior sample of $\Omega$ we draw a corresponding $\tilde O$ by (4.5.4).

4.5.2 Predictions for $y$ given that $Z$ is observed

Assume that there is no observation of $y$ at locations $\tilde s_1,\ldots,\tilde s_m$, but there are records of the signal $z$ and the predictors $X$ at those locations. Our goal is to find the conditional distribution of $y$ given all the observations.

By (4.5.4), we can work out the conditional distribution of $\tilde y$ given $\tilde Z$, $O$ and $\Omega$,

    [\tilde y \mid \tilde Z, O, \Omega] \sim N( \mu_{\tilde y}, \Sigma_{\tilde y} ),    (4.5.7)

where

    \mu_{\tilde y} = \mu_{\tilde y|O} + \Sigma_{\tilde Z \tilde y}^\top \Sigma_{\tilde Z \tilde Z}^{-1} ( \tilde Z - \mu_{\tilde Z|O} ),
    \Sigma_{\tilde y} = \Sigma_{\tilde y \tilde y} - \Sigma_{\tilde Z \tilde y}^\top \Sigma_{\tilde Z \tilde Z}^{-1} \Sigma_{\tilde Z \tilde y}.    (4.5.8)

The Bayesian prediction proceeds by sampling from the posterior predictive distribution

    p(\tilde y \mid \tilde Z, O) = \int p(\tilde y \mid \tilde Z, O, \Omega) \, p(\Omega \mid O, \tilde Z) \, d\Omega.    (4.5.9)

Since we sample the parameters given only the information in $O$, to avoid the computational burden of adding the massive $\tilde Z$, we approximate the posterior predictive distribution as follows,

    p(\tilde y \mid \tilde Z, O) \approx \int p(\tilde y \mid \tilde Z, O, \Omega) \, p(\Omega \mid O) \, d\Omega.    (4.5.10)

In this case, for each posterior sample of $\Omega$ we draw a corresponding $\tilde y$ by (4.5.7).
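Both prediction steps above are instances of the standard Gaussian conditioning rule: partition the joint covariance and shift the mean by the regression of the unobserved block on the observed one. A small self-contained sketch of that rule follows; the 3-variable mean and covariance are made up for illustration.

```python
import numpy as np

# Joint Gaussian (observed block, predicted block) with made-up mean and covariance.
mu = np.array([1.0, 2.0, 3.0])
Sigma = np.array([[2.0, 0.8, 0.3],
                  [0.8, 1.5, 0.6],
                  [0.3, 0.6, 1.0]])

obs_idx, new_idx = [0, 1], [2]          # first two observed, last one predicted
O = np.array([1.4, 1.7])                # observed values

S_oo = Sigma[np.ix_(obs_idx, obs_idx)]
S_no = Sigma[np.ix_(new_idx, obs_idx)]
S_nn = Sigma[np.ix_(new_idx, new_idx)]

# Conditional mean and covariance, the same pattern as (4.5.4) and (4.5.7):
#   mu_new|obs = mu_new + S_no S_oo^{-1} (O - mu_obs)
#   S_new|obs  = S_nn  - S_no S_oo^{-1} S_no^T
w = np.linalg.solve(S_oo, O - mu[obs_idx])
mu_cond = mu[new_idx] + S_no @ w
S_cond = S_nn - S_no @ np.linalg.solve(S_oo, S_no.T)

# Conditioning can only reduce variance.
print(float(S_cond[0, 0]) < float(S_nn[0, 0]))   # prints: True
```

In the thesis setting the same two lines are applied twice: once to condition the new-location block on the observed data, and once more to condition $\tilde y$ on the newly observed signals $\tilde Z$.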
4.6 Illustrations

We conduct simulation experiments and analyze a large forestry data set to assess model performance with regard to learning about process parameters and predicting $y$ at new locations. Posterior inference for subsequent analysis was based upon three chains of 30,000 iterations (with a burn-in of 5,000 iterations). The samplers were programmed in C++ and leveraged Intel's Math Kernel Library (MKL) threaded BLAS and LAPACK routines for matrix computations. The computations were conducted on a Linux workstation using two Intel Nehalem quad-core Xeon processors.

4.6.1 Simulation experiments

We simulate the data on a regular lattice over the domain $[0,4] \times [0,4]$ (location $s$) $\times$ $[0,5]$ (height $x$) from the joint model (4.3.7) with a full spatial Gaussian process (GP), where $n = n_z = n_y = 400$ and $n_x = 50$.

We hold out 25% of the data by randomly sampling from the 400 spatial locations. In this study, we choose full knots for $y$, i.e., $n_y = 300$. We fit the following models to the training data: i) the joint model with full knots $n_z = 300$; ii) the joint model with $n_z = 200$ knots; iii) the joint model with $n_z = 100$ knots; iv) the joint model with $n_z = 50$ knots. Then, we predict $y$ at the holdout locations for each model.

Parameter estimates and performance metrics for the four models with varying spatial knots for $z$ are provided in Table 4.1. We list only the estimates of $\alpha$, the $\lambda_i$'s and the covariance parameters $\theta$. Larger values of DIC suggest that the joint models with fewer knots do not fit the data as well as the model with full knots. Yet, the coefficients $\lambda_i$, which are used to extract information from the signal $Z$, are estimated quite well in each case. The last row in Table 4.1 shows computing times in hours for one chain of 30,000 iterations, demonstrating the enormous computational gains of predictive process models over the full GP model.

Table 4.1 also indicates that the joint model with full knots for $Z$ has the smallest root mean square prediction error (RMSPE) and the smallest mean interval width in terms of predicting $y$ given that $Z$ is known. Yet, there is no big difference when we reduce the number of knots for $Z$. Figure 4.1 shows the 95% credible intervals for predicting the 100 holdout $y$ under each model.
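The predictive-validation summaries reported in Tables 4.1 and 4.2 (RMSPE, empirical 95% interval coverage, and mean interval width) can be computed from holdout predictions as in this sketch; the arrays below are placeholder data, not the thesis results.

```python
import numpy as np

rng = np.random.default_rng(7)
y_true = rng.normal(20.0, 3.0, size=100)          # placeholder holdout values
y_pred = y_true + rng.normal(0.0, 1.0, size=100)  # placeholder posterior medians
half = 1.96 * 1.0                                 # placeholder interval half-width
lower, upper = y_pred - half, y_pred + half

# Root mean square prediction error over the holdout set.
rmspe = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
# Empirical coverage and mean width of the 95% prediction intervals.
cover = float(np.mean((lower <= y_true) & (y_true <= upper)))
width = float(np.mean(upper - lower))

print(f"RMSPE={rmspe:.2f}, 95% cover={100 * cover:.0f}%, width={width:.2f}")
```

In the tables these three quantities are what distinguish the knot levels: DIC separates the fits, while RMSPE, coverage, and width change only mildly as the number of knots shrinks.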
Figure 4.1: Predicted $y$ given that $Z$ is known. (a) Top left: $n_z = 300$ (full knots); (b) top right: $n_z = 200$; (c) bottom left: $n_z = 100$; (d) bottom right: $n_z = 50$. Any red point on the blue line represents a case where the predicted $y$ equals the true $y$.

Table 4.1: Posterior medians (50%) with (2.5%, 97.5%) credible intervals, and predictive validation. Entries marked * indicate where the 95% interval misses the true value.

Parameter | True value | n=300 (Full GP) | n=200 | n=100 | n=50
\alpha      | 20  | 20.45 (19.06, 21.85) | 20.86 (19.58, 22.11) | 20.44 (19.28, 21.58) | 20.5 (19.59, 21.41)
\lambda_1   | -2  | -1.83 (-2.04, -1.63) | -1.81 (-2.05, -1.58) | -1.98 (-2.35, -1.57) | -1.82 (-2.27, -1.29)
\lambda_2   | 0   | 0.12 (-0.16, 0.37)   | 0.03 (-0.28, 0.33)   | 0.34 (-0.14, 0.79)   | 0.3 (-0.32, 1.07)
\lambda_3   | 2   | 2.07 (1.77, 2.35)    | 1.91 (1.59, 2.25)    | 1.7 (1.26, 2.16)     | 1.69 (1.13, 2.41)
\lambda_4   | 1   | 1.07 (0.8, 1.33)     | 1.11 (0.83, 1.39)    | 1.13 (0.72, 1.53)    | 1.27 (0.7, 1.87)
\lambda_5   | 5   | 4.85 (4.61, 5.09)    | 4.9 (4.64, 5.17)     | 4.97 (4.6, 5.33)     | 4.98 (4.59, 5.37)
\sigma_u^2  | 0.2 | 0.19 (0.18, 0.2)     | 0.19 (0.18, 0.2)     | 0.2 (0.19, 0.21)     | 0.19 (0.18, 0.2)
a           | 12  | 12.91 (11.22, 15.11) | 13.17 (11.16, 17.44) | 12.21 (10.64, 14.87) | 11.92 (10.54, 13.93)
\gamma      | 0.9 | 0.86 (0.78, 0.94)    | 0.83 (0.67, 0.92)    | 0.9 (0.79, 0.98)     | 0.91 (0.82, 0.96)
c           | 5   | 5.63 (4.92, 6.43)    | 5.73 (5.02, 6.58)*   | 5.44 (4.69, 6.22)    | 6.04 (5.2, 7.04)*
\sigma_v^2  | 0.5 | 0.41 (0.1, 1.09)     | 0.29 (0.09, 0.93)    | 0.29 (0.1, 1.08)     | 0.43 (0.1, 1.91)
\phi_v      | 2   | 3.63 (0.91, 7.25)    | 7.83 (3.59, 9.35)*   | 8.03 (1.2, 9.48)     | 6.08 (2.63, 9.54)*
p_D         |     | 74.96                | 82.02                | 77.14                | 79.95
DIC         |     | 28989.64             | 29226.99             | 29487.41             | 29661.21
RMSPE       |     | 4.65                 | 4.96                 | 5.26                 | 5.42
95% CI cover % |  | 90                   | 88                   | 88                   | 89
95% CI width   |  | 19.70                | 20.73                | 21.80                | 22.54
Time        |     | 153.3 h              | 80.3 h               | 27.0 h               | 13.8 h

4.6.2 Forest LiDAR and biomass data analysis

4.6.2.1 Data description and preparation

This data set was collected on the Penobscot Experimental Forest, Maine. The signals $z(s,x)$ are observed at 26,286 locations. At each location, there are 126 measurements equally distributed above ground within $[0, 37.5]$ meters. Among all the locations, there are 451 locations where the biomass $y$ is observed. Figure 4.2 shows roughly what the data look like.
Since the heights of the trees in the observed area are usually smaller than 29.1 m, there is no signal above 29.1 m at most locations. We therefore cut the signal at 29.1 m. Then, we coarsen the signal within $[0, 29.1]$ m by averaging every two consecutive measurements and use the averages to define $Z$. The dimension of the signal at each location is thus set to $n_x = 45$. Meanwhile, we sum up the signals above 29.1 m at each location and denote the sum by $X_{y,1}(s)$, which is non-zero if the tree height exceeds 29.1 meters at $s$. Tall trees usually indicate a large amount of biomass $y$, so we consider $X_{y,1}(s)$ as a covariate of $y$ across locations. In this study, we do not have other covariates to model $Z$ and $y$. Hence, we only assume that the means of $Z(s)$ and $y(s)$ are $X_Z^\top(s)\beta = \{\beta_i\}_{i=1}^{n_x}$ and $X_y^\top(s)\alpha = \alpha_1 + \alpha_2 X_{y,1}(s)$, respectively. In addition, for numerical stability, we scale the magnitudes of the biomass $y$ and the signal $Z$, i.e., $y \to y/100$ and $z \to 100z$.

Figure 4.2: Left: interpolated $y$; the small points indicate where the $y$'s are recorded. Right: the signal $Z$ measured at the big red disc marked on the left graph.

4.6.2.2 Results

We hold out 25% of the data by randomly sampling from the 451 spatial locations. The number of height knots is set to $n_x = 5$, evenly distributed across the height range. We then choose the same spatial knots for $Z$ and $y$ and fit the following models to the training data: i) the joint model with full knots $n = 339$; ii) the joint model with 200 knots; iii) the joint model with 100 knots; iv) the joint model with 50 knots. Finally, we predict the holdout $y$ and check the prediction performance.

Parameter estimates and performance metrics are provided in Table 4.2. According to our model framework, the coefficients $\lambda_u$ characterize the relationship between signal and biomass. We see that $\lambda_2$, $\lambda_3$ and $\lambda_4$ are significant. $\lambda_2$ and $\lambda_3$ correspond to signals at relatively lower heights than $\lambda_4$ does. Usually, strong signals away from the ground indicate big biomass; that is why $\lambda_4$ is positive while $\lambda_2$ and $\lambda_3$ are negative.

Table 4.2: Posterior medians (50%) with (2.5%, 97.5%) credible intervals, and predictive validation.
Parameter | n=339 (Full GP) | n=200 | n=100 | n=50
\alpha_1    | 1.05 (0.89, 1.23)    | 1.06 (0.72, 1.39)    | 1.05 (0.54, 1.55)    | 1.05 (0.82, 1.29)
\alpha_2    | 0 (-0.04, 0.03)      | 0.01 (-0.02, 0.03)   | 0.03 (0, 0.06)       | 0.05 (0.02, 0.07)
\lambda_1   | -0.19 (-0.61, 0.29)  | -0.06 (-0.23, 0.28)  | 0 (-0.17, 0.21)      | 0.06 (-0.08, 0.19)
\lambda_2   | -0.11 (-0.17, -0.05) | -0.13 (-0.19, -0.07) | -0.16 (-0.22, -0.1)  | -0.12 (-0.2, -0.03)
\lambda_3   | -0.09 (-0.15, -0.02) | -0.11 (-0.16, -0.04) | -0.14 (-0.19, -0.07) | -0.06 (-0.13, 0.01)
\lambda_4   | 0.09 (0.05, 0.12)    | 0.06 (0.03, 0.11)    | 0.05 (0, 0.1)        | 0.11 (0.05, 0.17)
\lambda_5   | 0.05 (-0.09, 0.16)   | 0 (-0.12, 0.12)      | -0.04 (-0.17, 0.1)   | -0.05 (-0.17, 0.07)
\sigma_u^2  | 0.19 (0.18, 0.21)    | 0.18 (0.17, 0.2)     | 0.18 (0.17, 0.19)    | 0.19 (0.17, 0.2)
a           | 1.1 (1.02, 1.18)     | 1.13 (1.05, 1.2)     | 1.19 (1.12, 1.27)    | 1.12 (1.05, 1.19)
\gamma      | 0.99 (0.97, 1)       | 1 (1, 1)             | 0.99 (0.99, 1)       | 0.99 (0.99, 1)
c           | 8.21 (7.56, 8.91)    | 8.18 (7.59, 8.83)    | 7.08 (6.63, 7.63)    | 5.88 (5.51, 6.32)
\sigma_v^2  | 0.07 (0.05, 0.09)    | 0.15 (0.11, 0.18)    | 0.19 (0.14, 0.31)    | 0.09 (0.06, 0.12)
\phi_v      | 4.38 (2.94, 6.26)    | 1.48 (1.21, 1.83)    | 0.89 (0.88, 0.89)    | 1.86 (1.4, 2.75)
p_D         | 86.98                | 73.90                | 82.30                | 80.95
DIC         | 19111.42             | 19034.37             | 18969.00             | 19253.65
RMSPE for y | 0.32                 | 0.33                 | 0.34                 | 0.36
95% CI of y cover % | 83           | 81                   | 80                   | 79
95% CI of y width   | 1.45         | 1.17                 | 1.18                 | 1.24
Time        | 147 h                | 67.94 h              | 31.94 h              | 6.37 h

Table 4.2 also indicates that the joint model with full knots has RMSPE and mean CI width similar to the models with reduced numbers of knots. Figure 4.3 shows the 95% credible intervals for predicting the 112 holdout $y$ under each model, from which we see that reducing the number of knots does not affect the prediction of $y$ very much.

Figure 4.3: Predicted $y$ given that $Z$ is known. (a) Top left: $n = 339$ (full knots); (b) top right: $n = 200$; (c) bottom left: $n = 100$; (d) bottom right: $n = 50$. Any red point on the blue line represents a case where the predicted biomass equals the observed biomass.

BIBLIOGRAPHY

[AAWN13] A. Awadallah, V. Abbott, R. H. Wynne, and R. Nelson, Estimating forest canopy height and biophysical parameters using photon-counting laser altimetry, Proc. 13th International Conference on LiDAR Applications for Assessing Forest Ecosystems (SilviLaser 2013), Oct. 2013, pp. 129-136.

[Adl81] Robert J. Adler, The Geometry of Random Fields, vol. 62, SIAM, 1981.
[Adl00] R. J. Adler, On excursion sets, tube formulas and maxima of random fields, Ann. Appl. Probab. 10 (2000), 1-74. MR1765203 (2001g:60082)

[AGS12] Tatiyana Apanasovich, Marc G. Genton, and Ying Sun, Cross-covariance functions for multivariate random fields based on latent dimensions, J. Amer. Statist. Assoc. 107 (2012), no. 497, 180-193. MR2949350

[AHV+09] G. P. Asner, R. F. Hughes, T. A. Varga, D. E. Knapp, and T. Kennedy-Bowdoin, Environmental and biotic controls over aboveground biomass throughout a tropical rain forest, Ecosystems 12 (2009), no. 2, 261-278.

[Ans06] A. B. Anshin, On the probability of simultaneous extrema of two Gaussian nonstationary processes, Theory Probab. Appl. 50 (2006), no. 3, 353-366. MR2223210 (2007f:60031)

[AS72] Milton Abramowitz and Irene A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, tenth printing, Dover, New York, 1972.

[AT07] R. Adler and J. Taylor, Random Fields and Geometry, Springer, New York, 2007.

[ATW10] Robert J. Adler, Jonathan E. Taylor, and Keith J. Worsley, Applications of Random Fields and Geometry: Foundations and Case Studies.

[AW09] Jean-Marc Azaïs and M. Wschebor, Level Sets and Extrema of Random Processes and Fields, John Wiley & Sons, Inc., Hoboken, NJ, 2009.

[AZB+10] W. Abdalati, H. J. Zwally, R. Bindschadler, B. Csatho, S. L. Farrell, H. A. Fricker, D. Harding, R. Kwok, M. Lefsky, T. Markus, A. Marshak, T. Neumann, S. Palm, B. Schutz, B. Smith, J. Spinhirne, and C. Webb, The ICESat-2 laser altimetry mission, Proceedings of the IEEE 98 (2010), no. 5, 735-751.

[BCG14] Sudipto Banerjee, Bradley P. Carlin, and Alan E. Gelfand, Hierarchical Modeling and Analysis for Spatial Data, 2nd ed., CRC Press, 2014.

[BGFS08] Sudipto Banerjee, Alan E. Gelfand, Andrew O. Finley, and Huiyan Sang, Gaussian predictive process models for large spatial data sets, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 70 (2008), no. 4, 825-848.

[Bil68] Patrick Billingsley, Convergence of Probability Measures, John Wiley & Sons, New York, 1968.

[BMF+13] C. R. Babcock, J. A. Matney, A. O. Finley, A. Weiskittel, and B. D. Cook, Multivariate spatial regression models for predicting individual tree structure variables using LiDAR data, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 6 (2013), no. 1, 6-14.

[CCN+13] B. D. Cook, L. A. Corp, R. F. Nelson, E. M. Middleton, D. C. Morton, J. T. McCorkel, J. G. Masek, K. J. Ranson, V. Ly, and P. M. Montesano, NASA Goddard's LiDAR, hyperspectral and thermal (G-LiHT) airborne imager, Remote Sensing 5 (2013), no. 8, 4045-4066.

[CD09] Jean-Paul Chilès and Pierre Delfiner, Geostatistics: Modeling Spatial Uncertainty, vol. 497, John Wiley & Sons, 2009.

[CH94] A. G. Constantine and Peter Hall, Characterizing surface smoothness via estimation of effective fractal dimension, J. Roy. Statist. Soc. Ser. B 56 (1994), no. 1, 97-113. MR1257799 (95k:62260)

[CL06] Hock Peng Chan and Tze Leung Lai, Maxima of asymptotically Gaussian random fields and moderate deviation approximations to boundary crossing probabilities of sums of random variables with multidimensional indices, The Annals of Probability (2006), 80-121.

[Cre93] N. A. C. Cressie, Statistics for Spatial Data, 2nd ed., John Wiley & Sons, New York, 1993.

[CSY00] Huann-Sheng Chen, Douglas G. Simpson, and Zhiliang Ying, Infill asymptotics for a stochastic process model with measurement error, Statistica Sinica 10 (2000), no. 1, 141-156.

[CW00] Grace Chan and Andrew T. A. Wood, Increment-based estimators of fractal dimension for two-dimensional surface data, Statist. Sinica 10 (2000), no. 2, 343-376. MR1769748 (2001c:62110)

[CW04] Grace Chan and Andrew T. A. Wood, Estimation of fractal dimension for a class of non-Gaussian stationary processes and fields, Ann. Statist. 32 (2004), no. 3, 1222-1260. MR2065204 (2005i:60062)

[CW11] Noel Cressie and Christopher K. Wikle, Statistics for Spatio-Temporal Data, John Wiley & Sons, 2011.

[CX14] D. Cheng and Y. Xiao, Geometry and excursion probability of multivariate Gaussian random fields, manuscript (2014).

[DHJ14] Krzysztof Debicki, Enkelejd Hashorva, and Lanpeng Ji, Extremes of a class of non-homogeneous Gaussian random fields, Ann. Probab. (2014).

[DKMR10] K. Debicki, K. M. Kosinski, M. Mandjes, and T. Rolski, Extremes of multidimensional Gaussian processes, Stochastic Processes and their Applications 120 (2010), no. 12, 2289-2301. MR2728166 (2011m:60107)

[FBM11] A. O. Finley, S. Banerjee, and D. W. MacFarlane, A hierarchical model for quantifying forest variables over large heterogeneous landscapes with uncertain forest areas, Journal of the American Statistical Association 106 (2011), no. 493, 31-48.

[FSBG09] Andrew O. Finley, Huiyan Sang, Sudipto Banerjee, and Alan E. Gelfand, Improving the performance of predictive process modeling for large datasets, Computational Statistics & Data Analysis 53 (2009), no. 8, 2873-2884.

[GDFG10] A. Gelfand, P. Diggle, M. Fuentes, and P. Guttorp, Handbook of Spatial Statistics, CRC Press, 2010.

[GED14] GEDI, Global Ecosystem Dynamics Investigation lidar, http://science.nasa.gov/missions/gedi/, 2014. Accessed: 1-5-2015.

[GKS10] T. Gneiting, W. Kleiber, and M. Schlather, Matérn cross-covariance functions for multivariate random fields, Journal of the American Statistical Association 105 (2010), no. 491, 1167-1177. MR2752612 (2012d:62326)

[Gne02] Tilmann Gneiting, Nonseparable, stationary covariance functions for space-time data, Journal of the American Statistical Association 97 (2002), no. 458, 590-600. MR1941475 (2003k:62283)

[GSP12] Tilmann Gneiting, Hana Ševčíková, and Donald B. Percival, Estimators of fractal dimension: assessing the roughness of time series and spatial data, Statist. Sci. 27 (2012), no. 2, 247-277. MR2963995

[Har97] David A. Harville, Matrix Algebra from a Statistician's Perspective, Springer, 1997.

[HJ14] E. Hashorva and L. Ji, On the joint extremes of two correlated fractional Brownian motions, arXiv:1309.4981 [math.PR] (2014).

[HW71] David Lee Hanson and Farroll Tim Wright, A bound on tail probabilities for quadratic forms in independent random variables, The Annals of Mathematical Statistics (1971), 1079-1083.

[HW93] Peter Hall and Andrew Wood, On the performance of box-counting estimators of fractal dimension, Biometrika 80 (1993), no. 1, 246-252. MR1225230 (94m:62108)

[ICE15] ICESat-2, Ice, Cloud, and Land Elevation Satellite-2, http://icesat.gsfc.nasa.gov/, 2015. Accessed: 1-5-2015.

[IDUA13] I. A. Iqbal, J. Dash, S. Ullah, and G. Ahmad, A novel approach to estimate canopy height using ICESat/GLAS data: a case study in the New Forest National Park, UK, International Journal of Applied Earth Observation and Geoinformation 23 (2013), 109-118.

[KN12] William Kleiber and Douglas Nychka, Nonstationary modeling for multivariate spatial processes, Journal of Multivariate Analysis 112 (2012), 76-91.

[KW95] John T. Kent and Andrew T. A. Wood, Estimating the fractal dimension of a locally self-similar Gaussian process using increments, Statistics Research Report SRR 034-95, Centre for Mathematics and Its Applications, Australian National University, Canberra, 1995.

[KW97] John T. Kent and Andrew T. A. Wood, Estimating the fractal dimension of a locally self-similar Gaussian process by using increments, J. Roy. Statist. Soc. Ser. B 59 (1997), no. 3, 679-699. MR1452033 (99a:62136)

[LP00] A. Ladneva and V. I. Piterbarg, On double extremes of Gaussian stationary processes, EURANDOM Technical Report 2000-027 (2000), 1-18. Available from: http://www.eurandom.tue.nl/reports/2000/027-report.pdf [Accessed Mar. 19, 2014].

[Mey00] C. D. Meyer, Matrix Analysis and Applied Linear Algebra, SIAM, Philadelphia, 2000.

[MMT11] J. D. Muss, D. J. Mladenoff, and P. A. Townsend, A pseudo-waveform technique to assess forest structure using discrete lidar data, Remote Sensing of Environment 115 (2011), no. 3, 824-835.

[MN07] J. R. Magnus and H. Neudecker, Matrix Differential Calculus with Applications in Statistics and Econometrics, revised ed., John Wiley & Sons, 2007.

[Næs11] E. Næsset, Estimating above-ground biomass in young forests with airborne laser scanning, International Journal of Remote Sensing 32 (2011), no. 2, 473-501.

[NNR+13] C. S. R. Neigh, R. F. Nelson, K. J. Ranson, H. A. Margolis, P. M. Montesano, G. Sun, V. Kharuk, E. Næsset, M. A. Wulder, and H. Andersen, Taking stock of circumboreal forest carbon with ground measurements, airborne and spaceborne LiDAR, Remote Sensing of Environment 137 (2013), 274-287.

[NSY08] Y. Nardi, D. O. Siegmund, and B. Yakir, The distribution of maxima of approximately Gaussian random fields, Ann. Statist. 36 (2008), 1375-1403. MR2418661 (2009e:60112)

[Pic69] James Pickands, Upcrossing probabilities for stationary Gaussian processes, Transactions of the American Mathematical Society 145 (1969), 51-73.

[Pit96] V. Piterbarg, Asymptotic Methods in the Theory of Gaussian Processes and Fields, American Mathematical Society, 1996.

[PS05] V. I. Piterbarg and B. Stamatovich, Rough asymptotics of the probability of simultaneous high extrema of two Gaussian processes: the dual action functional, Russ. Math. Surv. 60 (2005), no. 1, 167-168. MR2145669 (2006a:60064)

[RB13] Qian Ren and Sudipto Banerjee, Hierarchical factor models for large spatially misaligned data: a low-rank predictive process approach, Biometrics 69 (2013), no. 1, 19-30. MR3058048

[Ste99] Michael Stein, Interpolation of Spatial Data: Some Theory for Kriging, Springer, 1999.

[Wac03] H. Wackernagel, Multivariate Geostatistics: An Introduction with Applications, 2nd ed., Springer, 2003.

[Yag87] A. M. Yaglom, Correlation Theory of Stationary and Related Random Functions, Vol. 1: Basic Results, Springer-Verlag, 1987.

[Yak13] B. Yakir, Extremes in Random Fields, Higher Education Press, 2013.

[ZS02] Zhengyuan Zhu and Michael L. Stein, Parameter estimation for fractional Brownian surfaces, Statist. Sinica 12 (2002), no. 3, 863-883. MR1929968