THREE ESSAYS ON ECONOMETRICS

By

Seunghwa Rho

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Economics - Doctor of Philosophy

2013

ABSTRACT

THREE ESSAYS ON ECONOMETRICS

By Seunghwa Rho

This dissertation consists of three chapters on econometrics. The first chapter, "Are All Firms Inefficient?", is related to the stochastic frontier model of Aigner et al. (1977). In the usual stochastic frontier model, all firms are inefficient, because inefficiency is non-negative and the probability that inefficiency is exactly zero equals zero. We modify this model by adding a parameter p which equals the probability that a firm is fully efficient. This model has also been considered by Kumbhakar et al. (2013). We extend their paper in several ways. We discuss some identification issues that arise if all firms are inefficient or no firms are inefficient. We show that the likelihood has a stationary point at parameters that indicate no inefficiency, and that this point is a local maximum if the OLS residuals are positively skewed. We consider the case that a logit or probit model determines the probability of full efficiency in terms of some observable variables. Finally, we consider problems involved in testing the hypothesis that p = 0. We provide some simulations and an empirical example. The simulation results suggest that the proposed model appears to be useful when (i) it is reasonable to suppose that some firms are fully efficient and (ii) the inefficiency levels of the inefficient firms are not small relative to statistical noise.
The focus of the second and third chapters lies on asymptotic theory for test statistics in time series that are robust to heteroskedasticity and autocorrelation (HAC), especially under the fixed-b asymptotic framework proposed by Kiefer and Vogelsang (2005). In the second chapter, "Serial Correlation Robust Inference with Missing Data", we investigate the properties of HAC robust test statistics when there is missing data. We characterize the time series with missing observations as amplitude modulated series, following Parzen (1963). For estimation and inference this amounts to plugging in zeros for missing observations. We also investigate an alternative approach where the missing observations are simply ignored. There are three main theoretical findings. First, when the missing process is random and satisfies strong mixing conditions, HAC robust t and Wald statistics computed from the amplitude modulated series follow the usual fixed-b limits as in Kiefer and Vogelsang (2005). Second, when the missing process is non-random, the fixed-b limits depend on the locations of missing observations but are otherwise pivotal. Third, when missing observations are ignored, we obtain the surprising result that the robust t and Wald statistics have the standard fixed-b limits whether the missing process is random or non-random. We discuss methods for obtaining fixed-b critical values with a focus on bootstrap methods. We find that the naive i.i.d. bootstrap is the most effective and practical way to obtain fixed-b critical values when data is missing, especially when the bootstrap conditions on the locations of the missing data.
In the third chapter, "Inference in Time Series Models Using Smoothed Clustered Standard Errors", we propose a long run variance estimator for conducting inference in time series regression models that combines the traditional nonparametric kernel approach with a cluster approach. The basic idea is to divide the time periods into non-overlapping clusters and construct the long run variance estimator by aggregating within clusters and then kernel smoothing across clusters. We derive asymptotic results holding the number of clusters fixed and also treating the number of clusters as increasing with the sample size. We find that the fixed number of clusters asymptotic approximation works well whether the number of clusters (G) is small or large. Also, we find that the naive i.i.d. bootstrap mimics the fixed number of clusters critical values regardless of G. Finite sample simulations suggest that clustering before kernel smoothing can reduce over-rejections caused by strong serial correlation without a great cost in terms of power.

ACKNOWLEDGEMENTS

There were so many days that I doubted if I was going the right way during the last five years. I would not have made it this far without the help of many people.

First, I want to thank all of my committee members. Especially, I want to thank my main advisor, Peter Schmidt, who always surprised me with clear and concrete guidance for whatever question I had and whose detailed attention helped me stay on track. I also would like to thank my other advisor, Timothy Vogelsang, for his continuous support and inspiration. Hours of conversations with him were both helpful and memorable. They advised me from the very preliminary stage to the concluding level, and they were patient to the very last moment. Without their support, this dissertation would have been literally impossible. I could not imagine having better advisors. I also would like to express special thanks to Christine Amsler for all the detailed and helpful comments in the completion of this dissertation.

I want to thank all of my friends at MSU, whom I miss already, including Cheolkeun Cho, Dooyeon Cho, Sukampon Chongwilaikasaem, Sunghoon Kang, Dowon Kwak, Jinyoung Lee, and Sun Yu.
Lastly, I have also been fortunate to have family and friends who somehow always believed in me in whatever crazy situation. I would not have survived the days of the Great Depression without their support. I thank my parents, my loving sister Seungmin, my dear friend Wiroy Shin, and friend-like mentor Charles Becker.

I am very happy to finish my dissertation. I made it!

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
CHAPTER 1  ARE ALL FIRMS INEFFICIENT?
  1.1 INTRODUCTION
  1.2 THE MODEL AND BASIC RESULTS
  1.3 EXTENSIONS OF THE BASIC MODEL
    1.3.1 Identification Issues
    1.3.2 A Stationary Point for the Likelihood
    1.3.3 Models for the Distribution of u_i
    1.3.4 Testing the Hypothesis That p = 0
      1.3.4.1 LR test
      1.3.4.2 Wald test
      1.3.4.3 LM test
      1.3.4.4 Modified LM test
      1.3.4.5 KT test
      1.3.4.6 The Wrong Skew Problem, Revisited
  1.4 SIMULATIONS
    1.4.1 Parameter Estimation
    1.4.2 Testing the Hypothesis p = 0
  1.5 EMPIRICAL EXAMPLE
    1.5.1 Model
    1.5.2 The Estimates
    1.5.3 Model Comparison and Selection
  1.6 CONCLUDING REMARKS
CHAPTER 2  HETEROSKEDASTICITY AUTOCORRELATION ROBUST INFERENCE IN TIME SERIES REGRESSIONS WITH MISSING DATA
  2.1 INTRODUCTION
  2.2 MODEL AND TEST STATISTICS
  2.3 ASSUMPTIONS AND ASYMPTOTIC THEORY
    2.3.1 Random Missing Process
    2.3.2 Non-random Missing Process
  2.4 FINITE SAMPLE PERFORMANCE
    2.4.1 Data Generating Process
    2.4.2 Test Statistics and Critical Values
    2.4.3 Finite Sample Performance
  2.5 WHEN MISSING OBSERVATIONS ARE IGNORED
    2.5.1 Models and Test Statistics
    2.5.2 Asymptotic Theory
      2.5.2.1 Non-random missing process
      2.5.2.2 Random missing process
    2.5.3 Finite Sample Properties
    2.5.4 Comparison of AM and ES Statistics
  2.6 CONCLUSION
CHAPTER 3  INFERENCE IN TIME SERIES MODELS USING SMOOTHED CLUSTERED STANDARD ERRORS
  3.1 INTRODUCTION
  3.2 MODEL AND CLUSTERED SMOOTHED STANDARD ERRORS
  3.3 INFERENCE AND ASYMPTOTIC THEORY
    3.3.1 Large-G, Fixed-n_G
    3.3.2 Fixed-G, Large-n_G Results
  3.4 FINITE SAMPLE PERFORMANCE
    3.4.1 Empirical Rejection Probabilities
    3.4.2 Size Adjusted Power
  3.5 CONCLUSION AND REMAINING WORK
APPENDICES
  Appendix A  PROOFS FOR CHAPTER 1
  Appendix B  PROOFS FOR CHAPTER 2
  Appendix C  PROOFS FOR AMPLITUDE MODULATED STATISTIC, NON-RANDOM MISSING DATA
  Appendix D  PROOFS FOR EQUAL SPACED STATISTIC, NON-RANDOM MISSING DATA
  Appendix E  PROOFS FOR FIXED-G, LARGE-n_G CASE WHEN G EVENLY DIVIDES T
  Appendix F  PROOFS FOR FIXED-G, LARGE-n_G CASE WHEN THE NUMBER OF OBSERVATIONS IS NOT AN EXACT MULTIPLE OF G
BIBLIOGRAPHY

LIST OF TABLES

Table 1.1  Frequency of a positive third moment of the OLS residuals
Table 1.2  Basic SF Model vs. ZISF Model, all replications: n = 200
Table 1.3  Basic SF Model vs. ZISF Model, correct skew replications: n = 200
Table 1.4  Basic SF Model vs. ZISF Model, all replications: n = 500
Table 1.5  Likelihood Ratio Test, n = 200
Table 1.6  Likelihood Ratio Test, n = 500
Table 1.7  Wald Test, n = 200
Table 1.8  Wald Test, n = 500
Table 1.9  Score-Based Tests, n = 200
Table 1.10  Score-Based Tests, n = 500
Table 1.11  Model Comparison
Table 3.1  Critical Values: Fixed G
Table 3.2  Large G, Empirical null rejection probabilities, 5% level, T = 60
Table 3.3  Fixed G, Empirical null rejection probabilities, 5% level, T = 60
Table 3.4  Empirical null rejection probabilities with block bootstrap critical values, 5% level, T = 60
Table 3.5  Empirical null rejection probabilities with i.i.d. bootstrap critical values, 5% level, T = 60
Table 3.6  Average Type II Error, 5% level, T = 60
Table 3.7  Daily Data, Weekends Missing, G Block Bootstrap, 5% level
Table 3.8  Daily Data, Weekends Missing, i.i.d. Bootstrap, 5% level

LIST OF FIGURES

Figure 2.1  Data with missing observations
Figure 2.2  Equal Space Regression Model
Figure 2.3  Missing due to World War I and World War II: Yearly data
Figure 2.4  Initially Scarce Data
Figure 2.5  AM Series - World War (yearly), Bartlett, T = 36
Figure 2.6  AM Series - World War (yearly), Bartlett, T = 48
Figure 2.7  AM Series - World War (yearly), Bartlett, T = 60
Figure 2.8  AM Series - World War (quarterly), Bartlett, T = 144
Figure 2.9  AM Series - World War (quarterly), Bartlett, T = 192
Figure 2.10  AM Series - World War (quarterly), Bartlett, T = 240
Figure 2.11  AM Series - Initially Scarce Data, Bartlett, N_Q = 12, N_M = 12
Figure 2.12  AM Series - Initially Scarce Data, Bartlett, N_Q = 12, N_M = 24
Figure 2.13  AM Series - Initially Scarce Data, Bartlett, N_Q = 12, N_M = 48
Figure 2.14  AM Series - Initially Scarce Data, Bartlett, N_Q = 24, N_M = 12
Figure 2.15  AM Series - Initially Scarce Data, Bartlett, N_Q = 24, N_M = 24
Figure 2.16  AM Series - Initially Scarce Data, Bartlett, N_Q = 24, N_M = 48
Figure 2.17  AM Series - Conditional Bernoulli (p = 0.3), Bartlett, T = 50
Figure 2.18  AM Series - Conditional Bernoulli (p = 0.5), Bartlett, T = 50
Figure 2.19  AM Series - Conditional Bernoulli (p = 0.7), Bartlett, T = 50
Figure 2.20  AM Series - Conditional Bernoulli (p = 0.3), Bartlett, T = 100
Figure 2.21  AM Series - Conditional Bernoulli (p = 0.5), Bartlett, T = 100
Figure 2.22  AM Series - Conditional Bernoulli (p = 0.7), Bartlett, T = 100
Figure 2.23  AM Series - Conditional Bernoulli (p = 0.3), Bartlett, T = 200
Figure 2.24  AM Series - Conditional Bernoulli (p = 0.5), Bartlett, T = 200
Figure 2.25  AM Series - Conditional Bernoulli (p = 0.7), Bartlett, T = 200
Figure 2.26  AM Series - Random Bernoulli (p = 0.3), Bartlett, T = 50
Figure 2.27  AM Series - Random Bernoulli (p = 0.5), Bartlett, T = 50
Figure 2.28  AM Series - Random Bernoulli (p = 0.7), Bartlett, T = 50
Figure 2.29  AM Series - Random Bernoulli (p = 0.3), Bartlett, T = 100
Figure 2.30  AM Series - Random Bernoulli (p = 0.5), Bartlett, T = 100
Figure 2.31  AM Series - Random Bernoulli (p = 0.7), Bartlett, T = 100
Figure 2.32  AM Series - Random Bernoulli (p = 0.3), Bartlett, T = 200
Figure 2.33  AM Series - Random Bernoulli (p = 0.5), Bartlett, T = 200
Figure 2.34  AM Series - Random Bernoulli (p = 0.7), Bartlett, T = 200
Figure 2.35  ES - World War (yearly), Bartlett, T = 36
Figure 2.36  ES - World War (yearly), Bartlett, T = 48
Figure 2.37  ES - World War (yearly), Bartlett, T = 60
Figure 2.38  ES - World War (quarterly), Bartlett, T = 144
Figure 2.39  ES - World War (quarterly), Bartlett, T = 192
Figure 2.40  ES - World War (quarterly), Bartlett, T = 240
Figure 2.41  ES - Initially Scarce Data, Bartlett, N_Q = 12, N_M = 12
Figure 2.42  ES - Initially Scarce Data, Bartlett, N_Q = 12, N_M = 24
Figure 2.43  ES - Initially Scarce Data, Bartlett, N_Q = 12, N_M = 48
Figure 2.44  ES - Initially Scarce Data, Bartlett, N_Q = 24, N_M = 12
Figure 2.45  ES - Initially Scarce Data, Bartlett, N_Q = 24, N_M = 24
Figure 2.46  ES - Initially Scarce Data, Bartlett, N_Q = 24, N_M = 48
Figure 2.47  ES - Conditional Bernoulli (p = 0.3), Bartlett, T = 50
Figure 2.48  ES - Conditional Bernoulli (p = 0.5), Bartlett, T = 50
Figure 2.49  ES - Conditional Bernoulli (p = 0.7), Bartlett, T = 50
Figure 2.50  ES - Conditional Bernoulli (p = 0.3), Bartlett, T = 100
Figure 2.51  ES - Conditional Bernoulli (p = 0.5), Bartlett, T = 100
Figure 2.52  ES - Conditional Bernoulli (p = 0.7), Bartlett, T = 100
Figure 2.53  ES - Conditional Bernoulli (p = 0.3), Bartlett, T = 200
Figure 2.54  ES - Conditional Bernoulli (p = 0.5), Bartlett, T = 200
Figure 2.55  ES - Conditional Bernoulli (p = 0.7), Bartlett, T = 200
Figure 2.56  ES - Random Bernoulli (p = 0.3), Bartlett, T = 50
Figure 2.57  ES - Random Bernoulli (p = 0.5), Bartlett, T = 50
Figure 2.58  ES - Random Bernoulli (p = 0.7), Bartlett, T = 50
Figure 2.59  ES - Random Bernoulli (p = 0.3), Bartlett, T = 100
Figure 2.60  ES - Random Bernoulli (p = 0.5), Bartlett, T = 100
Figure 2.61  ES - Random Bernoulli (p = 0.7), Bartlett, T = 100
Figure 2.62  ES - Random Bernoulli (p = 0.3), Bartlett, T = 200
Figure 2.63  ES - Random Bernoulli (p = 0.5), Bartlett, T = 200
Figure 2.64  ES - Random Bernoulli (p = 0.7), Bartlett, T = 200
Figure 2.65  AM and ES - World War (quarterly), Bartlett, T = 36
Figure 2.66  AM and ES - World War (quarterly), Bartlett, T = 48
Figure 2.67  AM and ES - World War (quarterly), Bartlett, T = 60
Figure 2.68  AM and ES - World War (quarterly), Bartlett, T = 144
Figure 2.69  AM and ES - World War (quarterly), Bartlett, T = 192
Figure 2.70  AM and ES - World War (quarterly), Bartlett, T = 240
Figure 2.71  AM and ES - Initially Scarce Data, Bartlett, N_Q = 12, N_M = 12
Figure 2.72  AM and ES - Initially Scarce Data, Bartlett, N_Q = 12, N_M = 24
Figure 2.73  AM and ES - Initially Scarce Data, Bartlett, N_Q = 12, N_M = 48
Figure 2.74  AM and ES - Initially Scarce Data, Bartlett, N_Q = 24, N_M = 12
Figure 2.75  AM and ES - Initially Scarce Data, Bartlett, N_Q = 24, N_M = 24
Figure 2.76  AM and ES - Initially Scarce Data, Bartlett, N_Q = 24, N_M = 48
Figure 2.77  AM and ES - Conditional Bernoulli (p = 0.3), Bartlett, T = 50
Figure 2.78  AM and ES - Conditional Bernoulli (p = 0.5), Bartlett, T = 50
Figure 2.79  AM and ES - Conditional Bernoulli (p = 0.7), Bartlett, T = 50
Figure 2.80  AM and ES - Conditional Bernoulli (p = 0.3), Bartlett, T = 100
Figure 2.81  AM and ES - Conditional Bernoulli (p = 0.5), Bartlett, T = 100
Figure 2.82  AM and ES - Conditional Bernoulli (p = 0.7), Bartlett, T = 100
Figure 2.83  AM and ES - Conditional Bernoulli (p = 0.3), Bartlett, T = 200
Figure 2.84  AM and ES - Conditional Bernoulli (p = 0.5), Bartlett, T = 200
Figure 2.85  AM and ES - Conditional Bernoulli (p = 0.7), Bartlett, T = 200
Figure 2.86  AM and ES - Random Bernoulli (p = 0.3), Bartlett, T = 50
Figure 2.87  AM and ES - Random Bernoulli (p = 0.5), Bartlett, T = 50
Figure 2.88  AM and ES - Random Bernoulli (p = 0.7), Bartlett, T = 50
Figure 2.89  AM and ES - Random Bernoulli (p = 0.3), Bartlett, T = 100
Figure 2.90  AM and ES - Random Bernoulli (p = 0.5), Bartlett, T = 100
Figure 2.91  AM and ES - Random Bernoulli (p = 0.7), Bartlett, T = 100
Figure 2.92  AM and ES - Random Bernoulli (p = 0.3), Bartlett, T = 200
Figure 2.93  AM and ES - Random Bernoulli (p = 0.5), Bartlett, T = 200
Figure 2.94  AM and ES - Random Bernoulli (p = 0.7), Bartlett, T = 200
Figure 3.1  Size Adjusted Power Comparison based on G = 60, M = 30 case
Figure 3.2  Size Adjusted Power Comparison - Clustering vs. Smoothing, r = 0.5, M = 1
Figure 3.3  Size Adjusted Power Comparison - Clustering vs. Smoothing, r = 0.5, M = 2
Figure 3.4  Size Adjusted Power Comparison - Clustering vs. Smoothing, r = 0.5, M = 3
Figure 3.5  Size Adjusted Power Comparison - Clustering vs. Smoothing, r = 0.5, M = 4
Figure 3.6  Size Adjusted Power Comparison - Clustering vs. Smoothing, r = 0.5, M = 5
Figure 3.7  Size Adjusted Power Comparison - Clustering vs. Smoothing, r = 0.5, M = 6
Figure 3.8  Size Adjusted Power Comparison - Clustering vs. Smoothing, r = 0.5, M = 7
Figure 3.9  Size Adjusted Power Comparison - Clustering vs. Smoothing, r = 0.5, M = 8
Figure 3.10  Size Adjusted Power Comparison - Clustering vs. Smoothing, r = 0.8, M = 1
Figure 3.11  Size Adjusted Power Comparison - Clustering vs. Smoothing, r = 0.8, M = 2
Figure 3.12  Size Adjusted Power Comparison - Clustering vs. Smoothing, r = 0.8, M = 3
Figure 3.13  Size Adjusted Power Comparison - Clustering vs. Smoothing, r = 0.8, M = 4
Figure 3.14  Size Adjusted Power Comparison - Clustering vs. Smoothing, r = 0.8, M = 5
Figure 3.15  Size Adjusted Power Comparison - Clustering vs. Smoothing, r = 0.8, M = 6
Figure 3.16  Size Adjusted Power Comparison - Clustering vs. Smoothing, r = 0.8, M = 7
Figure 3.17  Size Adjusted Power Comparison - Clustering vs. Smoothing, r = 0.8, M = 8
Figure 3.18  Size Adjusted Power Comparison - Clustering vs. Smoothing, r = 0.8, M = 9
Figure 3.19  Size Adjusted Power Comparison - Clustering vs. Smoothing, r = 0.9, M = 1
Figure 3.20  Size Adjusted Power Comparison - Clustering vs. Smoothing, r = 0.9, M = 2
Figure 3.21  Size Adjusted Power Comparison - Clustering vs. Smoothing, r = 0.9, M = 3
Figure 3.22  Size Adjusted Power Comparison - Clustering vs. Smoothing, r = 0.9, M = 4
Figure 3.23  Size Adjusted Power Comparison - Clustering vs. Smoothing, r = 0.9, M = 5
Figure 3.24  Size Adjusted Power Comparison - Clustering vs. Smoothing, r = 0.9, M = 6
Figure 3.25  Size Adjusted Power Comparison - Clustering vs. Smoothing, r = 0.9, M = 7
Figure 3.26  Size Adjusted Power Comparison - Clustering vs. Smoothing, r = 0.9, M = 8
Figure 3.27  Size Adjusted Power Comparison - Clustering vs. Smoothing, r = 0.9, M = 9

CHAPTER 1

ARE ALL FIRMS INEFFICIENT?

1.1 INTRODUCTION

In the basic stochastic frontier model of Aigner et al. (1977) and Meeusen and van den Broeck (1977), all firms are inefficient to some degree. The one-sided error that represents technical inefficiency has a distribution (for example, half normal) for which zero is in the support, so that zero is a possible value, but it is still the case that the probability is zero that a draw from a half normal exactly equals zero. This may be restrictive empirically, since it is plausible, or at least possible, that an industry may contain a set of firms that are fully efficient.

In this chapter we allow the possibility that some firms are fully efficient. We introduce a parameter p which represents the probability that a firm is fully efficient. So the case of p = 0 corresponds to the usual stochastic frontier model and the case of p = 1 corresponds to the case of full efficiency (no one-sided error), while if 0 < p < 1 a fraction p of the firms are fully efficient and a fraction 1 - p are inefficient. This may be important because if some of the firms actually are fully efficient, the usual stochastic frontier model is misspecified and can be expected to yield biased estimates of the technology and of inefficiency levels.

This model is a special form of the latent class model considered by Caudill (2003), Orea and Kumbhakar (2004), Greene (2005) and others. It has the special feature that the frontier itself does not vary across the two classes of firms; only the existence or non-existence of inefficiency differs. Our model has previously been considered by Kumbhakar, Parmeter, and Tsionas (2013), hereafter KPT. See also Grassetti (2011). Our results were derived without knowledge of the KPT paper, but in this chapter we will naturally focus on our results which are not in their paper.
The plan of this chapter is as follows. In Section 1.2 we will present the model and give a brief summary of the basic results that are also in the KPT paper. These include the likelihood to be maximized, the form of the "posterior" probabilities of full efficiency for each firm, and the expression for the estimated inefficiency for each firm. In Section 1.3 we provide some new results. We discuss identification issues. We give the generalization of the results of Waldman (1982), which establish that there is a stationary point of the likelihood at a point of full efficiency, and that this point is a local maximum of the likelihood if the OLS residuals are positively skewed. We propose using logit or probit models to allow additional explanatory variables to affect the probability of a firm being fully efficient. We also discuss the problem of testing the hypothesis that p = 0. In Section 1.4 we present some simulations, and in Section 1.5 we give an empirical example. Finally, Section 1.6 gives our conclusions.

1.2 THE MODEL AND BASIC RESULTS

We begin with the standard stochastic frontier model of the form:
\[
y_i = x_i'\beta + \varepsilon_i, \qquad \varepsilon_i = v_i - u_i, \qquad u_i \ge 0.
\]
Here i = 1, ..., n indexes firms. We have in mind a production frontier, so that y is typically log output and x is a vector of functions of inputs. The v_i are iid N(0, σ_v²), the u_i are iid N⁺(0, σ_u²) (i.e., half-normal), and x, v, and u are mutually independent (so x can be treated as fixed). We will refer to this model as the basic stochastic frontier (or basic SF) model.

We now define some standard notation. Let φ be the standard normal density, and Φ be the standard normal cdf. Let f_v and f_u represent the densities of v and u:
\[
f_v(v) = \frac{1}{\sqrt{2\pi}\,\sigma_v}\exp\!\left(-\frac{v^2}{2\sigma_v^2}\right) = \frac{1}{\sigma_v}\,\phi\!\left(\frac{v}{\sigma_v}\right), \qquad
f_u(u) = \frac{2}{\sqrt{2\pi}\,\sigma_u}\exp\!\left(-\frac{u^2}{2\sigma_u^2}\right) = \frac{2}{\sigma_u}\,\phi\!\left(\frac{u}{\sigma_u}\right), \quad u \ge 0. \tag{1.1}
\]
Also define λ = σ_u/σ_v and σ² = σ_u² + σ_v². This implies that σ_v² = σ²/(1+λ²) and σ_u² = σ²λ²/(1+λ²). Finally, we let f_ε represent the density of ε = v - u:
\[
f_\varepsilon(\varepsilon) = \frac{2}{\sigma}\,\phi\!\left(\frac{\varepsilon}{\sigma}\right)\left[1 - \Phi\!\left(\frac{\varepsilon\lambda}{\sigma}\right)\right]. \tag{1.2}
\]

Now we define the model of this chapter. Suppose there is an unobservable variable z_i such that
\[
z_i = \mathbf{1}(u_i = 0) = \begin{cases} 1 & \text{if } u_i = 0 \\ 0 & \text{if } u_i > 0. \end{cases}
\]
We define p = P(z_i = 1) = P(u_i = 0). We assume that u_i | z_i = 0 is distributed as N⁺(0, σ_u²), that is, half normal. Thus
\[
u_i = \begin{cases} 0 & \text{with probability } p \\ N^+(0, \sigma_u^2) & \text{with probability } 1 - p. \end{cases}
\]
This model contains the parameters β, σ_u², σ_v², and p, or β, λ, σ², and p.

We will follow the terminology of KPT and call this model the "zero-inefficiency stochastic frontier" (ZISF) model. The name refers to the fact that, in this model, the event u_i = 0 can occur with non-zero frequency. Note that
\[
f(\varepsilon \mid z = 1) = f_v(\varepsilon), \qquad f(\varepsilon \mid z = 0) = f_\varepsilon(\varepsilon),
\]
(where f_v and f_ε are defined in (1.1) and (1.2) above), and so the marginal (unconditional) density of ε is
\[
f_p(\varepsilon) = p f_v(\varepsilon) + (1-p) f_\varepsilon(\varepsilon).
\]
Using this density, we can form the (log) likelihood for the model:
\[
\ln L(\beta, \sigma_u^2, \sigma_v^2, p) = \sum_{i=1}^{n} \ln f_p(y_i - x_i'\beta).
\]
We will estimate the model by MLE; that is, by maximizing ln L with respect to β, σ_u², σ_v², and p. Or, alternatively, the model may be parameterized in terms of β, λ, σ², and p, with maximization over that set of parameters.

When we have estimated the model, we can obtain ε̂_i = y_i - x_i'β̂, an estimate of ε_i = y_i - x_i'β. Using Bayes rule, we can now update the probability that a particular firm is fully efficient, because ε_i is informative about that possibility. That is, we can calculate
\[
P(z_i = 1 \mid \varepsilon_i) = \frac{P(z_i = 1)\, f(\varepsilon_i \mid z_i = 1)}{f_p(\varepsilon_i)} = \frac{p f_v(\varepsilon_i)}{f_p(\varepsilon_i)} = \frac{p f_v(\varepsilon_i)}{p f_v(\varepsilon_i) + (1-p) f_\varepsilon(\varepsilon_i)}. \tag{1.3}
\]
We will call this the "posterior" probability that firm i is fully efficient. It is evaluated at p̂, ε̂_i, and also σ̂_u² and σ̂_v², which enter into the densities f_v and f_ε. We put quotes around "posterior" because it is not truly the posterior probability of z_i = 1 in a Bayesian sense. (A true Bayesian posterior would give P(z_i = 1 | y_i, x_i) and would have started with a prior distribution for the parameters β, σ_u², σ_v², and p.)
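As a concrete illustration, the mixture density f_p, the log likelihood, and the "posterior" probability (1.3) can be coded directly from the formulas above. This is only a minimal sketch using the Python standard library, not the estimation code used in this chapter, and the function names (`zisf_loglik`, `posterior_efficient`) are ours:

```python
import math

def npdf(x):
    # standard normal density phi
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def ncdf(x):
    # standard normal cdf Phi, via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def f_v(eps, s_v):
    # density of the pure-noise error v, eq. (1.1)
    return npdf(eps / s_v) / s_v

def f_eps(eps, s_u, s_v):
    # density of the composed error eps = v - u, eq. (1.2)
    sig = math.sqrt(s_u ** 2 + s_v ** 2)
    lam = s_u / s_v
    return (2.0 / sig) * npdf(eps / sig) * (1.0 - ncdf(eps * lam / sig))

def f_p(eps, s_u, s_v, p):
    # ZISF mixture density of eps
    return p * f_v(eps, s_v) + (1.0 - p) * f_eps(eps, s_u, s_v)

def zisf_loglik(y, X, beta, s_u, s_v, p):
    # ln L = sum_i ln f_p(y_i - x_i' beta)
    ll = 0.0
    for yi, xi in zip(y, X):
        eps = yi - sum(b * x for b, x in zip(beta, xi))
        ll += math.log(f_p(eps, s_u, s_v, p))
    return ll

def posterior_efficient(eps, s_u, s_v, p):
    # "posterior" probability P(z_i = 1 | eps_i), eq. (1.3)
    return p * f_v(eps, s_v) / f_p(eps, s_u, s_v, p)
```

Maximizing `zisf_loglik` over (β, σ_u, σ_v, p), with p restricted to [0, 1], would give the MLE; any numerical optimizer that handles bound constraints can be used.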
We now wish to estimate (predict) u_i for each firm. Following the logic of Jondrow et al. (1982), we define û_i = E(u_i | ε_i). Now
\[
E(u_i \mid \varepsilon_i) = E_{z \mid \varepsilon}\, E(u_i \mid \varepsilon_i, z_i)
= P(z_i = 1 \mid \varepsilon_i)\, E(u_i \mid \varepsilon_i, z_i = 1) + P(z_i = 0 \mid \varepsilon_i)\, E(u_i \mid \varepsilon_i, z_i = 0)
= P(z_i = 0 \mid \varepsilon_i)\, E(u_i \mid \varepsilon_i, z_i = 0),
\]
since u_i = 0 when z_i = 1. But E(u_i | ε_i, z_i = 0) is the usual expression from Jondrow et al. (1982), and P(z_i = 0 | ε_i) = 1 - P(z_i = 1 | ε_i), which can be evaluated using equation (1.3) above. Therefore,
\[
\hat u_i = E(u_i \mid \varepsilon_i) = \frac{(1-p) f_\varepsilon(\varepsilon_i)}{p f_v(\varepsilon_i) + (1-p) f_\varepsilon(\varepsilon_i)}\; \sigma_* \left[\frac{\phi(a_i)}{1 - \Phi(a_i)} - a_i\right], \tag{1.4}
\]
where a_i = ε_i λ/σ and σ_* = σ_u σ_v/σ = λσ/(1+λ²).

A slight extension of this result, which is not in KPT, is to follow Battese and Coelli (1988) and define technical efficiency as TE = exp(-u). Correspondingly, technical inefficiency would be 1 - TE = 1 - exp(-u), which is only approximately equal to u (for small u). They provide the expression for E(TE | ε). Using our "posterior" probability P(z_i = 1 | ε_i) and their expression for E(TE_i | ε_i, z_i = 0), we obtain
\[
\widehat{TE}_i = E(e^{-u_i} \mid \varepsilon_i)
= \frac{(1-p) f_\varepsilon(\varepsilon_i)}{p f_v(\varepsilon_i) + (1-p) f_\varepsilon(\varepsilon_i)}\;
\frac{\Phi(\mu_i/\sigma_* - \sigma_*)}{\Phi(\mu_i/\sigma_*)}\, \exp\!\left(\frac{\sigma_*^2}{2} - \mu_i\right)
+ \frac{p f_v(\varepsilon_i)}{p f_v(\varepsilon_i) + (1-p) f_\varepsilon(\varepsilon_i)}, \tag{1.5}
\]
where μ_i = -ε_i σ_u²/σ², σ_* = σ_u σ_v/σ (as above), and correspondingly μ_i/σ_* = -a_i where a_i = ε_i λ/σ (as above).

As in Jondrow et al. (1982), the expression in either (1.4) or (1.5) would need to be evaluated at the estimated values of the parameters (p, σ_u², and σ_v²) and at ε̂_i = y_i - x_i'β̂.

1.3 EXTENSIONS OF THE BASIC MODEL

We now investigate some extensions of the basic results of the previous section. Most of the results in this section are not in KPT.
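Before turning to those extensions, the predictors (1.4) and (1.5) from the previous section can be sketched in code. This is a hedged illustration only: the function names `u_hat` and `te_hat` are ours, and in practice the inputs would be the estimated ε̂_i, σ̂_u, σ̂_v, and p̂:

```python
import math

def npdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def ncdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def u_hat(eps, s_u, s_v, p):
    # E(u_i | eps_i), eq. (1.4): posterior weight on z_i = 0 times the
    # Jondrow et al. (1982) conditional mean for the half-normal model
    sig = math.sqrt(s_u ** 2 + s_v ** 2)
    lam = s_u / s_v
    sig_star = s_u * s_v / sig
    a = eps * lam / sig
    fv = npdf(eps / s_v) / s_v
    fe = (2.0 / sig) * npdf(eps / sig) * (1.0 - ncdf(a))
    w0 = (1.0 - p) * fe / (p * fv + (1.0 - p) * fe)   # P(z_i = 0 | eps_i)
    return w0 * sig_star * (npdf(a) / (1.0 - ncdf(a)) - a)

def te_hat(eps, s_u, s_v, p):
    # E(exp(-u_i) | eps_i), eq. (1.5), following Battese and Coelli (1988)
    sig = math.sqrt(s_u ** 2 + s_v ** 2)
    lam = s_u / s_v
    sig_star = s_u * s_v / sig
    mu = -eps * s_u ** 2 / sig ** 2
    fv = npdf(eps / s_v) / s_v
    fe = (2.0 / sig) * npdf(eps / sig) * (1.0 - ncdf(eps * lam / sig))
    w0 = (1.0 - p) * fe / (p * fv + (1.0 - p) * fe)
    bc = (ncdf(mu / sig_star - sig_star) / ncdf(mu / sig_star)) \
         * math.exp(sig_star ** 2 / 2.0 - mu)
    return w0 * bc + (1.0 - w0)   # fully efficient firms have TE = 1
```

Note that as p goes to 1, `u_hat` goes to 0 and `te_hat` goes to 1, as the formulas require.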
1.3.1 Identification Issues

Some of the parameters are not identified under certain circumstances. When p = 1, so that all firms are fully efficient, σ_u² is not identified. Conversely, when σ_u² = 0, p is not identified. In fact, the likelihood value is exactly the same when (i) σ_u² = 0, p = anything, as when (ii) p = 1, σ_u² = anything. More generally, we might suppose that σ_u² and p will be estimated imprecisely when a data set contains little inefficiency, since it will be hard to determine whether there is little inefficiency because σ_u² is small or because p is close to one.

This issue of identification is relevant to the problem of testing the null hypothesis that p = 1 against the alternative that p < 1. This is a test of the null hypothesis that all firms are efficient against the alternative that some fraction (possibly all) of them are inefficient, and that is an economically interesting hypothesis. KPT suggest a likelihood ratio test of this hypothesis. As they note, the null distribution of their statistic is affected by the fact that the null hypothesis is on the boundary of the parameter space. They refer to Chen and Liang (2010), p. 608, to justify an asymptotic distribution of (1/2)χ²₀ + (1/2)χ²₁ for the likelihood ratio statistic. However, it is not clear that this result applies, given that one of the parameters (σ_u²) is not identified under the null that p = 1. Specifically, the argument of Chen and Liang (2010) depends on the existence and asymptotic normality of the estimator η̂(γ₀) [see p. 606, line 4], where γ₀ corresponds to p₀ (= 1), and where η corresponds to the other parameters of our model, including σ_u².

A more relevant reference, which KPT note but do not pursue, is Andrews (2001). That paper explicitly allows the case in which the parameter vector under the null may lie on the boundary of the maintained hypothesis and there may be a nuisance parameter that appears under the alternative hypothesis, but not under the null. See his Theorem 4, p. 707, for the relevant asymptotic distribution result, which unfortunately is considerably more complicated than the simple result (50-50 mixture of chi-squareds) of Chen and Liang (2010).
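The identification failure just described is easy to verify numerically: with p = 1 the mixture density collapses to f_v, so the likelihood is flat in σ_u, and as σ_u approaches 0 it becomes flat in p. A small sketch (the helper names and the made-up residuals are ours):

```python
import math

def npdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def ncdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ln_f_p(eps, s_u, s_v, p):
    # log of the ZISF mixture density p*f_v + (1-p)*f_eps
    sig = math.sqrt(s_u ** 2 + s_v ** 2)
    lam = s_u / s_v
    fv = npdf(eps / s_v) / s_v
    fe = (2.0 / sig) * npdf(eps / sig) * (1.0 - ncdf(eps * lam / sig))
    return math.log(p * fv + (1.0 - p) * fe)

# stand-in values for eps_i = y_i - x_i' beta (illustrative only)
residuals = [0.8, -0.3, 0.1, -1.2, 0.5]

def loglik(s_u, p):
    return sum(ln_f_p(e, s_u, 1.0, p) for e in residuals)

# (ii) p = 1: the value of sigma_u is irrelevant
print(loglik(0.7, 1.0), loglik(2.5, 1.0))
# (i) sigma_u near 0: the value of p is irrelevant
print(loglik(1e-8, 0.2), loglik(1e-8, 0.9))
```

The two likelihood values in each pair coincide (exactly in the first case, up to numerical noise in the second), which is the flat ridge that makes (σ_u², p) jointly unidentified in these regions.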
1.3.2 A Stationary Point for the Likelihood

For the basic stochastic frontier model, let the parameter vector be θ = (β', λ, σ²)'. Then Waldman (1982) established the following results. First, the log likelihood always has a stationary point at θ* = (β̂', 0, σ̂²)', where β̂ = OLS and σ̂² = (OLS sum of squared residuals)/n. Note that these parameter values correspond to σ̂_u² = 0, that is, to full efficiency of each firm. Second, the Hessian matrix is singular at this point. It is negative semidefinite with one zero eigenvalue. Third, these parameter values are a local maximizer of the log likelihood if the OLS residuals are positively skewed. This is the so-called "wrong skew problem".

The log likelihood for the ZISF model has a stationary point very similar to that for the basic stochastic frontier model. This stationary point is also a local maximum of the log likelihood if the least squares residuals are positively skewed.

Theorem 1.1. Let θ = (β', λ, σ², p)' and let θ* = (β̂', 0, σ̂², p̂)', where β̂ = OLS, σ̂² = (OLS sum of squared residuals)/n, and where p̂ is any value in [0, 1]. Then

1. θ* is a stationary point of the log likelihood.
2. The Hessian matrix is singular at this point. It is negative semidefinite with two zero eigenvalues.
3. θ* with p̂ ∈ [0, 1) is a local maximizer of the log likelihood function if and only if Σ_{i=1}^n ε̂_i³ > 0, where ε̂_i = y_i - x_i'β̂ is the OLS residual.
4. θ* with p̂ = 1 is a local maximizer of the log likelihood function if Σ_{i=1}^n ε̂_i³ > 0.

Proof. See Appendix A.

As is typically done for the basic stochastic frontier model, we will presume that θ* is the global maximizer of the log likelihood when the residuals have positive ("wrong") skew. Note that at θ*, we have λ̂ = 0, or equivalently σ̂_u² = 0, and p is not identified when σ_u² = 0. We get the same likelihood value for any value of p. In our simulations (in Section 1.4) we will set p̂ = 1 in the case of wrong skew, since p̂ = 1 is another way of reflecting full efficiency. However, for a given data set, the value of p̂ does not matter when θ̂ = θ*.

Since for any p ∈ [0, 1],
\[
\operatorname*{plim}_{n \to \infty} \frac{1}{n}\sum_{i=1}^{n} \hat\varepsilon_i^3
= E(\varepsilon_i - E\varepsilon_i)^3
= \sigma_u^3 \sqrt{2/\pi}\,(1-p)\,\frac{-4p^2 + (8 - 3\pi)p + (\pi - 4)}{\pi} \le 0,
\]
the probability of a positive third moment of the OLS residuals goes to zero as the number of observations increases. In a finite sample, the probability of a positive third moment increases when λ is small and/or p is near 0 or 1. See Table 1.1. The entries in Table 1.1 are based on simulations with 100,000 replications, with σ_u = 1, λ = σ_u/σ_v, λ ∈ {0.5, 1, 2}, and p ∈ {0, 0.1, ..., 0.9}, for sample sizes 50, 100, 200, and 400.

1.3.3 Models for the Distribution of u_i

The ZISF model can be extended by allowing the distribution of u_i to depend on some observable variables w_i. For example, in our empirical analysis of Section 1.5, the w_i will include variables like the age and education of the farmer and the size of his household. These variables can be assumed to affect either P(z_i = 1) or f(u_i | z_i = 0) or both.

First consider the case in which we assume that w_i affects the distribution of u_i for the inefficient firms. A general assumption would be that the distribution of u_i conditional on w_i and on z_i = 0 is N⁺(μ_i, σ_i²), where μ_i and/or σ_i² depend on w_i. For example, in Section 1.5 we will assume the RSCFG model of Reifschneider and Stevenson (1991), Caudill and Ford (1993) and Caudill et al. (1995), under the assumptions that μ_i = 0 and σ_i² = exp(w_i'γ). Another possible model is the KGMHLBC model of Kumbhakar et al. (1991), Huang and Liu (1994) and Battese and Coelli (1995), with σ_i² = σ_u² constant and with μ_i = w_i'ψ or μ_i = c·exp(w_i'ψ). Wang (2002) proposes parameterizing both μ_i and σ_i². See also Alvarez et al. (2006).

A second and more novel case is the one in which we assume that w_i affects P(z_i = 1). For example, we could assume a logit model:
\[
P(z_i = 1 \mid w_i) = \frac{\exp(w_i'\delta)}{1 + \exp(w_i'\delta)}. \tag{1.6}
\]
A probit model would be another obvious possibility.

Finally, we can consider a more general model in which both P(z_i = 1 | w_i) and f(u_i | z_i = 0, w_i) depend on w_i, as above. We will estimate such a model in our empirical section.
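Returning to the wrong-skew result of Section 1.3.2, the closed-form third moment and the frequencies reported in Table 1.1 can be approximated by simulation. The sketch below is ours (an intercept-only design, with far fewer replications than the 100,000 used for Table 1.1), so the numbers are illustrative rather than a reproduction of the table:

```python
import math
import random

def third_moment(p, s_u):
    # plim (1/n) sum eps_hat_i^3 = E(eps_i - E eps_i)^3 for the ZISF error
    pi = math.pi
    return (s_u ** 3 * math.sqrt(2.0 / pi) * (1.0 - p)
            * (-4.0 * p ** 2 + (8.0 - 3.0 * pi) * p + (pi - 4.0)) / pi)

def wrong_skew_freq(n, p, lam, reps, seed=0):
    # fraction of simulated samples whose demeaned residuals have a
    # positive third moment (sigma_u = 1, sigma_v = 1/lambda)
    rng = random.Random(seed)
    s_u, s_v = 1.0, 1.0 / lam
    count = 0
    for _ in range(reps):
        eps = [rng.gauss(0.0, s_v)
               - (abs(rng.gauss(0.0, s_u)) if rng.random() > p else 0.0)
               for _ in range(n)]
        m = sum(eps) / n
        if sum((e - m) ** 3 for e in eps) > 0.0:
            count += 1
    return count / reps

# one illustrative cell; Table 1.1 varies n, p, and lambda over a grid
freq = wrong_skew_freq(n=100, p=0.5, lam=1.0, reps=300)
print(round(freq, 3))
```

The population third moment is never positive on [0, 1], so the estimated frequency shrinks as n grows, in line with the asymptotic result above.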
1.3.4 Testing the Hypothesis That $p = 0$

In this section, we discuss the problem of testing the null hypothesis $H_0: p = 0$ against the alternative $H_A: p > 0$. The null hypothesis is that all firms are inefficient, so the basic stochastic frontier model applies. The alternative is that some firms are fully efficient, and so the ZISF model is needed.

It is a standard result that, under certain regularity conditions, notably that the parameter value specified by the null hypothesis is an interior point of the parameter space, the likelihood ratio (LR), Lagrange multiplier (LM), and Wald tests all have the same asymptotic $\chi^2$ distribution. However, in our case $p$ cannot be negative, and therefore the null hypothesis that $p = 0$ lies on the boundary of the parameter space. This is therefore a non-standard problem. Unlike the case of testing the hypothesis that $p = 1$, however, there is no problem with the identification of the other parameters (nuisance parameters) $\beta$, $\sigma_u^2$, and $\sigma_v^2$, or $\beta$, $\lambda$, and $\sigma^2$. We need to restrict $\sigma_u^2 > 0$ and $\sigma_v^2 > 0$ so that the nuisance parameters are in the interior of the parameter space, and also because $p$ would not be identified if $\sigma_u^2 = 0$. However, with these modest restrictions, this is only a mildly non-standard problem, which has been discussed by Rogers (1986), Self and Liang (1987), and Gouriéroux and Monfort (1995), chapter 21, for example.

We consider five test statistics: the likelihood ratio (LR), Wald, Lagrange multiplier (LM), modified Lagrange multiplier (modified LM), and Kuhn-Tucker (KT) tests. All of these except the LM test will have asymptotic distributions that are different from the usual ($\chi^2_1$) distribution.

We will assume that the likelihood function $L_n(\theta)$ satisfies the usual conditions,
$$\frac{1}{\sqrt n}\,\frac{\partial \ln L_n(\theta_0)}{\partial\theta} \xrightarrow{d} N(0, I_0), \qquad \frac{1}{n}\,\frac{\partial^2 \ln L_n(\theta_0)}{\partial\theta\,\partial\theta'} \xrightarrow{p} H_0 = -I_0,$$
where $\theta = (\beta', \sigma_u, \sigma_v, p)'$, and the parameters other than $p$ are away from the boundary of their parameter spaces. Define the restricted estimator ($\tilde\theta$) and the unrestricted estimator ($\hat\theta$):
$$\tilde\theta = \operatorname*{argmax}_{\sigma_u \ge 0,\ \sigma_v \ge 0,\ p = 0} \ln L_n(\theta), \qquad \hat\theta = \operatorname*{argmax}_{\sigma_u \ge 0,\ \sigma_v \ge 0,\ 0 \le p \le 1} \ln L_n(\theta).$$
We also define $l_i = \ln f(\varepsilon_i)$, $\hat s_i = \partial l_i(\hat\theta)/\partial\theta$, $\tilde s_i = \partial l_i(\tilde\theta)/\partial\theta$, $\hat h_i = \partial^2 l_i(\hat\theta)/\partial\theta\,\partial\theta'$, and $\tilde h_i = \partial^2 l_i(\tilde\theta)/\partial\theta\,\partial\theta'$. Finally, we consider the "unconstrained" estimator ($\bar\theta$) that ignores the logical restriction $0 \le p \le 1$:
$$\bar\theta = \operatorname*{argmax}_{\sigma_u \ge 0,\ \sigma_v \ge 0} \ln L_n(\theta).$$

1.3.4.1 LR test

The LR statistic when testing $H_0: p = 0$ is $\xi_{LR} = 2(\ln L_n(\hat\theta) - \ln L_n(\tilde\theta))$. Under standard regularity conditions, the asymptotic distribution of $\xi_{LR}$ is a mixture of $\chi^2_0$ and $\chi^2_1$, with mixing weights $1/2$, where $\chi^2_0$ is defined as the point mass distribution at zero. That is, $\xi_{LR} \xrightarrow{d} \frac12\chi^2_0 + \frac12\chi^2_1$. This follows, for example, from Chen and Liang (2010), as cited by KPT.

1.3.4.2 Wald test

The Wald statistic for $H_0: p = 0$ is
$$\xi_W = \frac{\hat p^2}{se(\hat p)^2}.$$
Note that $se(\hat p)^2$ can be computed using the outer product of the score form of the variance matrix of $\hat\theta$, $[(\sum_{i=1}^n \hat s_i \hat s_i')^{-1}]$, the Hessian form, $[(-\sum_{i=1}^n \hat h_i)^{-1}]$, or the robust form, $[(\sum_{i=1}^n \hat h_i)^{-1}(\sum_{i=1}^n \hat s_i \hat s_i')(\sum_{i=1}^n \hat h_i)^{-1}]$. As with the LR statistic, $\xi_W \xrightarrow{d} \frac12\chi^2_0 + \frac12\chi^2_1$. Note that the non-standard nature of this result means that the "significance" of an estimated $\hat p$ from the ZISF model cannot be assessed using standard results.

1.3.4.3 LM test

The LM statistic for $H_0: p = 0$ is
$$\xi_{LM} = \left(\sum_{i=1}^n \tilde s_i\right)' \tilde M^{-1} \left(\sum_{i=1}^n \tilde s_i\right).$$
$\tilde M$ can be either $[\sum_{i=1}^n \tilde s_i \tilde s_i']$ or $[-\sum_{i=1}^n \tilde h_i]$, in either case evaluated at $\tilde\theta$. Unlike the other statistics considered here, the LM statistic has the usual $\chi^2_1$ distribution. It ignores the one-sided nature of the alternative, because it rejects for a large (in absolute value) positive or negative value of the score. As pointed out by Rogers (1986), this may result in a loss in power relative to tests that take the one-sided nature of the alternative into account.
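The three variance forms for the Wald statistic can be sketched as follows. This is a schematic implementation with our own array conventions (per-observation scores and Hessians stacked into arrays); it is not code from the thesis.

```python
import numpy as np

def wald_p(p_hat, scores, hessians, p_index, form="opg"):
    """Wald statistic for H0: p = 0, with the three variance forms in the text.

    scores:   (n, k) array of per-observation scores s_i at theta_hat
    hessians: (n, k, k) array of per-observation Hessians h_i at theta_hat
    p_index:  position of p in the parameter vector
    """
    S = scores.T @ scores          # sum of s_i s_i'  (outer product of scores)
    H = hessians.sum(axis=0)       # sum of h_i
    if form == "opg":
        V = np.linalg.inv(S)
    elif form == "hessian":
        V = np.linalg.inv(-H)
    else:                          # robust sandwich form
        Hinv = np.linalg.inv(H)
        V = Hinv @ S @ Hinv
    return p_hat**2 / V[p_index, p_index]
```

Under the information equality the three forms agree asymptotically, but they can differ substantially in finite samples, which is relevant to the size distortions reported in Section 1.4.2.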
1.3.4.4 Modified LM test

The LM statistic has the usual $\chi^2_1$ distribution because it does not take account of the one-sided nature of the alternative. By taking account of the one-sided nature of the alternative, a modified LM test might have better power. The modified LM statistic proposed by Rogers (1986) is motivated by this point. The modified LM statistic is:
$$\xi_{MLM} = \begin{cases} \xi_{LM}, & \text{if } \sum_{i=1}^n \tilde s_i > 0 \\ 0, & \text{otherwise.} \end{cases}$$
In the modified LM statistic, a positive score is taken as evidence against the null and in favor of the alternative $p > 0$, whereas a negative score is not. So a negative score is simply set to zero. The asymptotic distribution of $\xi_{MLM}$ is $\frac12\chi^2_0 + \frac12\chi^2_1$.

1.3.4.5 KT test

Another form of score test statistic that takes account of the one-sided nature of the alternative is the KT statistic proposed by Gouriéroux et al. (1982). The KT statistic for $H_0: p = 0$ is
$$\xi_{KT} = \left(\sum_{i=1}^n \tilde s_i - \sum_{i=1}^n \hat s_i\right)' \tilde M^{-1} \left(\sum_{i=1}^n \tilde s_i - \sum_{i=1}^n \hat s_i\right),$$
where $\tilde M$ can be either $\sum_{i=1}^n \tilde s_i \tilde s_i'$ or $-\sum_{i=1}^n \tilde h_i$. When $\hat p = 0$, $\sum_{i=1}^n \tilde s_i = \sum_{i=1}^n \hat s_i$. Since $\hat p = 0$ when $\bar p \le 0$, the test statistic will have a degenerate distribution at zero when $\bar p \le 0$. Otherwise, $\sum_{i=1}^n \hat s_i = 0$ and the test statistic has the usual $\chi^2_1$ distribution. Therefore, $\xi_{KT} \xrightarrow{d} \frac12\chi^2_0 + \frac12\chi^2_1$.
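Since the LR, Wald, modified LM, and KT statistics all share the $\frac12\chi^2_0 + \frac12\chi^2_1$ null limit, critical values and p-values for this mixture are easy to compute. A small sketch (our own helper functions, not from the thesis):

```python
from scipy.stats import chi2

def mixture_pvalue(stat):
    """P-value under the (1/2)*chi2_0 + (1/2)*chi2_1 mixture null distribution."""
    return 0.5 * chi2.sf(stat, df=1) if stat > 0 else 1.0

def mixture_critical_value(alpha):
    """Critical value c with P(mixture > c) = alpha, for alpha < 0.5."""
    return chi2.ppf(1.0 - 2.0 * alpha, df=1)
```

For example, at the 5% level the mixture critical value is the 90th percentile of $\chi^2_1$, approximately 2.71, which is the value used in the empirical tests of Section 1.5.3.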
1.3.4.6 The Wrong Skew Problem, Revisited

When the OLS residuals are positively skewed ($\sum_{i=1}^n \hat\varepsilon_i^3 > 0$), we have $\hat\sigma_u^2 = 0$ (or equivalently, $\hat\lambda = 0$) and $\hat p$ is not well defined. Also the information matrix, whether evaluated at $\hat\theta$ or $\tilde\theta$, is singular. Specifically, with the parameters ordered as $(\beta', \sigma_u, \sigma_v, p)'$,
$$\sum_{i=1}^n \hat s_i \hat s_i' = \begin{pmatrix}
\frac{1}{\hat\sigma_v^4}\sum \hat\varepsilon_i^2 x_i x_i' & -(1-\hat p)\sqrt{\frac{2}{\pi}}\frac{1}{\hat\sigma_v^4}\sum \hat\varepsilon_i^2 x_i & \frac{1}{\hat\sigma_v^5}\sum \hat\varepsilon_i^3 x_i & 0 \\
-(1-\hat p)\sqrt{\frac{2}{\pi}}\frac{1}{\hat\sigma_v^4}\sum \hat\varepsilon_i^2 x_i' & \frac{2}{\pi}(1-\hat p)^2 \frac{n}{\hat\sigma_v^2} & -(1-\hat p)\sqrt{\frac{2}{\pi}}\frac{1}{\hat\sigma_v^5}\sum \hat\varepsilon_i^3 & 0 \\
\frac{1}{\hat\sigma_v^5}\sum \hat\varepsilon_i^3 x_i' & -(1-\hat p)\sqrt{\frac{2}{\pi}}\frac{1}{\hat\sigma_v^5}\sum \hat\varepsilon_i^3 & \frac{1}{\hat\sigma_v^6}\sum \hat\varepsilon_i^4 - \frac{n}{\hat\sigma_v^2} & 0 \\
0 & 0 & 0 & 0
\end{pmatrix}$$
$$\sum_{i=1}^n \tilde s_i \tilde s_i' = \begin{pmatrix}
\frac{1}{\tilde\sigma_v^4}\sum \tilde\varepsilon_i^2 x_i x_i' & -\sqrt{\frac{2}{\pi}}\frac{1}{\tilde\sigma_v^4}\sum \tilde\varepsilon_i^2 x_i & \frac{1}{\tilde\sigma_v^5}\sum \tilde\varepsilon_i^3 x_i & 0 \\
-\sqrt{\frac{2}{\pi}}\frac{1}{\tilde\sigma_v^4}\sum \tilde\varepsilon_i^2 x_i' & \frac{2}{\pi}\frac{n}{\tilde\sigma_v^2} & -\sqrt{\frac{2}{\pi}}\frac{1}{\tilde\sigma_v^5}\sum \tilde\varepsilon_i^3 & 0 \\
\frac{1}{\tilde\sigma_v^5}\sum \tilde\varepsilon_i^3 x_i' & -\sqrt{\frac{2}{\pi}}\frac{1}{\tilde\sigma_v^5}\sum \tilde\varepsilon_i^3 & \frac{1}{\tilde\sigma_v^6}\sum \tilde\varepsilon_i^4 - \frac{n}{\tilde\sigma_v^2} & 0 \\
0 & 0 & 0 & 0
\end{pmatrix}$$
$$\sum_{i=1}^n \hat h_i = \begin{pmatrix}
-\frac{1}{\hat\sigma_v^2}\sum x_i x_i' & (1-\hat p)\sqrt{\frac{2}{\pi}}\frac{1}{\hat\sigma_v^2}\sum x_i & 0 & 0 \\
(1-\hat p)\sqrt{\frac{2}{\pi}}\frac{1}{\hat\sigma_v^2}\sum x_i' & -\frac{2}{\pi}(1-\hat p)^2\frac{n}{\hat\sigma_v^2} & 0 & 0 \\
0 & 0 & -\frac{2n}{\hat\sigma_v^2} & 0 \\
0 & 0 & 0 & 0
\end{pmatrix}$$
$$\sum_{i=1}^n \tilde h_i = \begin{pmatrix}
-\frac{1}{\tilde\sigma_v^2}\sum x_i x_i' & \sqrt{\frac{2}{\pi}}\frac{1}{\tilde\sigma_v^2}\sum x_i & 0 & 0 \\
\sqrt{\frac{2}{\pi}}\frac{1}{\tilde\sigma_v^2}\sum x_i' & -\frac{2}{\pi}\frac{n}{\tilde\sigma_v^2} & 0 & 0 \\
0 & 0 & -\frac{2n}{\tilde\sigma_v^2} & 0 \\
0 & 0 & 0 & 0
\end{pmatrix}.$$
All the matrices above are singular for any $\hat p \in [0,1]$. Therefore, when the third moment of the OLS residuals is positive, only the LR statistic can be computed, and it equals zero.
It remains to decide if we should reject the null hypothesis or not when the OLS residuals have wrong skew. Clearly, the LR test will not reject the null hypothesis, since the statistic equals zero under wrong skew. But for the other tests, the statistic is undefined and it is not clear what to conclude. If we consider the wrong skew cases as indicating that all firms are efficient, then it would be reasonable to reject the null hypothesis. However, as a practical matter, whether we reject the null hypothesis or not does not affect anything, because the estimated model, whether $p = 0$ or not, collapses to the same model. It might be reasonable to simply say that $p$ is not identified with incorrectly skewed OLS residuals. For a given data set, both the null and the alternative hypothesis would lead to the same results.

Assuming that $\sigma_u^2 > 0$, the wrong skew problem occurs with a probability that goes to zero asymptotically. However, as shown in Table 1.1, it can occur with non-trivial probability in finite samples. Also, the discussion above may be relevant even when the data do not have the wrong skew problem. The log likelihood has a stationary point at $\theta^*$ regardless of the skew of the residuals. In the wrong skew case, the likelihood is perfectly flat in the $p$ direction with $\hat\beta$ = OLS, $\hat\lambda = 0$, and $\hat\sigma^2 = \frac{1}{n}SSE$. In the correct skew case, this is not true, but when $\lambda$ is small, we expect that the partial derivative of the log likelihood with respect to $p$ (evaluated at the MLE of the other parameters) would often be small in the vicinity of $p = 0$, so that the LM test and its variants might have low power. We will investigate this issue in the simulations of the next section.

1.4 SIMULATIONS

We conducted simulations in order to investigate the finite sample performance of the ZISF model, and to compare it to the performance of the basic stochastic frontier model. We are interested both in parameter estimation and in the performance of tests of the hypothesis $p = 0$.
We consider a very simple data generating process: $y_i = \beta + \varepsilon_i$, where, as in Section 1.2 above, $\varepsilon_i = v_i - u_i$ and $u_i$ is half-normal with probability $1-p$ and $u_i = 0$ with probability $p$. We pick $n = 200$ and 500, $\beta = 1$, and $\sigma_u = 1$. We consider $p = 0$, 0.25, 0.5, and 0.75, and $\lambda = 1$, 2, and 5 (i.e., $\sigma_v = 1$, 0.5, and 0.2). Our simulations are based on 1000 replications. Because the MLE's were sensitive to the starting values used, we used several sets of starting values and chose the results with the highest maximized likelihood value.

Our experimental design was similar to that in KPT. They included a non-constant regressor, but in our experiments that made little difference. A more substantial difference is that we used $n = 200$ and 500 whereas they used $n = 500$ and 1000.

There were some technical problems related to the facts that $\sigma_u^2$ is not identified when $p = 1$, and $p$ is not identified when $\sigma_u^2 = 0$. We define $\hat p = 1$ when $\hat\sigma_u = 0$ and $\hat\sigma_u = 0$ when $\hat p = 1$. This would imply that when the OLS residuals have incorrect skew, the MLE would be $\theta^*$ with $\hat p = 1$. It was very seldom the case that $\hat\sigma_u^2 = 0$ or $\hat p = 1$ other than in the wrong skew cases.

1.4.1 Parameter Estimation

Table 1.2 contains the mean, bias, and MSE of the various parameter estimates, for the basic stochastic frontier model and for the ZISF model, for the case that $n = 200$. We also present the mean, bias, and MSE of the technical inefficiency estimates, and the mean of the "posterior" probabilities of full efficiency.

Unsurprisingly, the basic stochastic frontier model performs poorly except when $p = 0$ (in which case it is correctly specified). This is true for all three values of $\lambda$. We overestimate technical inefficiency, because we act as if all firms are inefficient whereas in fact they are not. This bias is naturally bigger when $p$ is bigger.

For the ZISF model, the results depend strongly on the value of $\lambda$. When $\lambda = 1$, the results are not very good. Note in particular the mean values of $\hat p$, which are 0.53, 0.49, 0.51, and 0.57 for $p = 0$, 0.25, 0.50, and 0.75, respectively. It is disturbing that the mean estimate of $p$ does not appear to depend on the true value of $p$.
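The data generating process just described can be sketched as follows; this is our own illustrative implementation of the DGP, not the code used for the reported simulations.

```python
import numpy as np

def simulate_zisf(n, p, sigma_u, sigma_v, beta=1.0, rng=None):
    """One sample from the simulation DGP: y = beta + v - u, where
    u = 0 with probability p and half-normal(sigma_u) otherwise."""
    rng = np.random.default_rng(rng)
    v = rng.normal(0.0, sigma_v, n)
    u = np.abs(rng.normal(0.0, sigma_u, n))   # half-normal draws
    z = rng.random(n) < p                      # z = 1: fully efficient firm
    u[z] = 0.0
    return beta + v - u
```

Here $\lambda = \sigma_u/\sigma_v$, so for example $\lambda = 2$ corresponds to `sigma_v = 0.5` with `sigma_u = 1`.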
These problems are less severe for larger values of $\lambda$. The mean value of $\hat p$ when $p = 0$ is 0.33 for $\lambda = 2$ and 0.16 for $\lambda = 5$. The estimates are considerably better for the other values of $p$. So basically the model performs reasonably well when $\lambda$ is large enough and $p$ is not too close to zero.

Table 1.3 is similar to Table 1.2 except that it reports the results only for the cases of correct skew (i.e., wrong skew cases are not included). This makes almost no difference for $\lambda = 2$ or 5, because there are very few wrong skew cases when $\lambda = 2$ or 5. For $\lambda = 1$, it matters more. However, the conclusions given above really do not change.

Table 1.4 contains the same information as Table 1.2, except that now we have $n = 500$ rather than $n = 200$. The results are better for $n = 500$ than for $n = 200$, but a larger sample size does not really solve the problems that the ZISF model has in estimating $p$ when $p = 0$ and/or $\lambda = 1$. For example, when $p = 0$, the mean $\hat p$ for $\lambda = 1, 2, 5$ is 0.53, 0.33, 0.16 when $n = 200$ and 0.50, 0.30, 0.12 when $n = 500$. Reading the table in the other direction, when $\lambda = 1$, the mean $\hat p$ for $p = 0, 0.25, 0.5, 0.75$ is 0.53, 0.49, 0.51, 0.57 when $n = 200$ and 0.50, 0.46, 0.48, 0.58 when $n = 500$. So again there are problems in estimating $p$ when $p = 0$ or when $\lambda$ is small.

It is perhaps not surprising that we encounter problems when we estimate the ZISF model when the true value of $p$ is zero. Essentially, we are estimating a latent-class model with more classes than there really are. It is true that the class with zero probability contains no new parameters. If it did, they would not be identified and the results would presumably be much worse.

These results do not always agree with the summary of the results in KPT. KPT concentrate on the technical inefficiency estimates, and the only results they show explicitly for the parameter estimates (their Figure 3) are for $n = 1000$, and $\lambda = 5$ and $p = 0.25$.
We did successfully replicate their results, but $n = 1000$ and $\lambda = 5$ is a very favorable parameter configuration. In their Section 1.3.1, they say the following about the case when the true $p$ equals zero: "The ML estimator from the ZISF model is found to perform quite well.... Estimates of $p$ were close to zero." It is not clear what parameter configuration this refers to, but in our simulations this is not true except when $\lambda = 5$. For smaller values of $\lambda$, the ZISF estimates of $p$ when the true $p = 0$ are not very close to zero.

1.4.2 Testing the Hypothesis $p = 0$

We now turn to the results of our simulations that are designed to investigate the size and power properties of the tests of the hypothesis $p = 0$, as discussed in Section 1.3.4 above. This hypothesis is economically interesting, and it is also practically important to know whether $p = 0$, since our model does not appear to perform well in that case. We would like to be able to recognize cases when $p = 0$ and just use the basic SF model in these cases.

The data generating process and parameter values for these simulations are as discussed above (in the beginning of Section 1.4). Specifically, the simulations are for $n = 200$ and $n = 500$.

We begin with the likelihood ratio (LR) test, which is the test that we believed ex ante would be most reliable. The results for $n = 200$ are given in Table 1.5. For each value of $\lambda$ and $p$, we give the mean of the statistic (over the full set of 1000 replications), the number of rejections, and the frequency of rejection. The rejection rates in the rows corresponding to $p = 0$ are the size of the test, whereas the rejection rates in the rows corresponding to the positive values of $p$ represent power.

Look first at the set of results for all replications. The size of the test is reasonable. It is undersized for $\lambda = 1$ and approximately correctly sized for $\lambda = 2$ and 5. However, the power is disappointing, except when $\lambda$ is large. There is essentially no power, even against the alternative $p = 0.75$, when $\lambda = 1$. When $\lambda = 2$, power is 0.60 against $p = 0.75$, but only 0.24 against $p = 0.50$ and 0.06 against $p = 0.25$. Power is more reasonable when $\lambda = 5$.

Table 1.6 gives the same results for $n = 500$. Increasing $n$ has little effect on the size of the test, but it improves the power. Power is still low when $\lambda = 1$ or when $\lambda = 2$ and $p$ is not large.
In either case ($n = 200$ or 500), looking separately at the correct-skew cases does not change our conclusions.

In Tables 1.7 and 1.8, we give results for the Wald test, for $n = 200$ and 500, respectively. Since the Wald test is undefined in wrong-skew cases, we show the results only for the correct-skew cases. We consider separately the OPG, Hessian, and robust forms of the test, as defined in Section 1.3.4 above. Regardless of which form of the test is used, the test is considerably over-sized. This is true for both sample sizes. The problem is worst for the robust form and least serious for the OPG form, but there are serious size distortions in all three cases. Based on these results, the Wald test is not recommended.

In Tables 1.9 and 1.10, we give the results for the score-based tests (LM, modified LM, and KT). Once again the tests are undefined for wrong-skew cases, so we report results only for the correct-skew cases. The (two-sided) LM test is the best of the three. It shows moderate size distortions and no power when $\lambda = 1$, but only modest size distortions when $\lambda = 2$ or 5. The modified LM test has bigger size distortions and less power when $\lambda = 2$ or 5. The KT test has the largest size distortions and is therefore not recommended.

Our results are easy to summarize. The likelihood ratio test is the best of the tests we have considered, at least for these parameter values. It is the only one of the tests that does not over-reject the true null that $p = 0$. However, it does not have much power. That is, we will have trouble rejecting the hypothesis that the basic SF model is correctly specified even if the ZISF model is needed and $p$ is not close to zero. The exception to this pessimistic conclusion is the case when both $p$ and $\lambda$ are large, in which case the power of the test is satisfactory.
1.5 EMPIRICAL EXAMPLE

We apply the models defined in Sections 1.2 and 1.3 to the Philippine rice data used in the empirical examples of Coelli et al. (2005), chapters 8-9. The Philippine data are composed of 43 farmers over eight years, and Coelli et al. (2005) estimate the basic stochastic frontier model with a trans-log production function, ignoring the panel nature of the observations. Their output variable is tonnes of freshly threshed rice, and the input variables are planted area (in hectares), labor, and fertilizer used (in kilograms). These variables are scaled to have unit means so the first-order coefficients of the trans-log function can be interpreted as elasticities of output with respect to inputs evaluated at the variable means.

We follow the basic setup of Coelli et al. (2005) but estimate the extended models where some farms are allowed to be efficient and the probability of farm $i$ being efficient and/or the distribution of $u_i$ depend on farm characteristics. Data on age of household head, education of household head, household size, number of adults in the household, and the percentage of area classified as bantog (upland) fields are used as farm characteristics that may influence the probability of a farm being fully efficient and/or the distribution of the inefficiency. See Coelli et al. (2005), Appendix 2, for a detailed description of the data.
1.5.1 Model

We consider models based on the following specification:
$$\begin{aligned}
\ln y_i = {} & \beta_0 + \theta t + \beta_1 \ln area_i + \beta_2 \ln labor_i + \beta_3 \ln npk_i + \tfrac12\beta_{11}(\ln area_i)^2 \\
& + \beta_{12}\ln area_i \ln labor_i + \beta_{13}\ln area_i \ln npk_i + \tfrac12\beta_{22}(\ln labor_i)^2 \\
& + \beta_{23}\ln labor_i \ln npk_i + \tfrac12\beta_{33}(\ln npk_i)^2 + v_i - u_i, \qquad (1.7)
\end{aligned}$$
$$u_i \sim N^+(0, \sigma_i^2), \quad \sigma_i^2 = \exp(\gamma_0 + age_i\gamma_1 + edyrs_i\gamma_2 + hhsize_i\gamma_3 + nadult_i\gamma_4 + banrat_i\gamma_5), \qquad (1.8)$$
$$P(z_i = 1 \mid w_i) = \frac{\exp(\delta_0 + age_i\delta_1 + edyrs_i\delta_2 + hhsize_i\delta_3 + nadult_i\delta_4 + banrat_i\delta_5)}{1 + \exp(\delta_0 + age_i\delta_1 + edyrs_i\delta_2 + hhsize_i\delta_3 + nadult_i\delta_4 + banrat_i\delta_5)}, \qquad (1.9)$$
where $area_i$ is the size of planted area in hectares, $labor_i$ is a measure of labor, $npk_i$ is fertilizer in kilograms, $age_i$ is the age of the household head, $edyrs_i$ is the years of education of the household head, $hhsize_i$ is the household size, $nadult_i$ is the number of adults in the household, and $banrat_i$ is the percentage of area classified as bantog (upland) fields.

We assume a trans-log production function with time trend as in (1.7). We estimate the following five models:

[a] the basic stochastic frontier model, in which $\sigma_i^2$ is constant ($\sigma_u^2$) and $P(z_i = 1 \mid w_i) = 0$;
[b] the ZISF model in which $\sigma_u^2$ is constant and $P(z_i = 1 \mid w_i)$ is constant ($p$) but not necessarily equal to zero;
[c] the "heteroskedasticity" model in which $p = 0$ but $\sigma_i^2$ is as given in (1.8);
[d] the "logit" model in which $\sigma_u^2$ is constant but $P(z_i = 1 \mid w_i)$ is as given in (1.9);
[e] the "logit + heteroskedasticity" model in which $\sigma_i^2$ is as given in (1.8) and $P(z_i = 1 \mid w_i)$ is as given in (1.9).

1.5.2 The Estimates

The MLEs and their OPG standard errors are reported in Table 1.11.

Consider the results for the basic stochastic frontier model (the first column of results in the table). The inputs are productive and there are roughly constant returns to scale. Average technical efficiency is about 70%. The estimated value of $\lambda$ is 2.75, and both that value and the sample size ($n = 344$) are big enough to feel confident about proceeding to the ZISF model and its extensions.
The next block of columns of results is for the ZISF model. Here we have $\hat p = 0.58$, so a substantial fraction of the observations (farm-time period combinations) are characterized by full efficiency. The technology (effect of inputs on output) is not changed much from the basic SF model, but the intercept is lower and the level of technical efficiency is higher (between 85% and 90%). Based on our simulations, this is a predictable consequence of finding that a substantial number of observations are fully efficient.

The next block of columns of results is for the heteroskedasticity model, in which all farms are inefficient but the level of inefficiency depends on farm characteristics. A number of farm characteristics (age of the farmer, education of the farmer, and percentage of bantog fields) have significant effects on the level of inefficiency. In this parameterization, a positive coefficient indicates that an increase in the corresponding variable makes a farm more inefficient. The model implies that farms where the farmer is older and more educated, and where the percentage of bantog fields is lower, tend to be more inefficient (less efficient). Or, saying the same thing the other way around, farms are more efficient on average if the farmer is younger and less educated and the percentage of bantog fields is higher. The effect of education is perhaps surprising. Because this model does not allow any farms to be fully efficient, we once again have a low level of average technical efficiency, about 72%, which is similar to that for the basic SF model.

Next we consider the logit model, in which the distribution of inefficiency is the same for all firms that are not fully efficient, but the probability of being fully efficient depends on farm characteristics according to a logit model. Now age of the farmer and percentage of bantog fields have significant effects on the probability of full efficiency, and the coefficient of household size is almost significant at the 5% level (t-statistic = -1.93). The results indicate that farms with younger farmers, smaller household size, and a larger proportion of bantog fields are more likely to be fully efficient. The results for age of the farmer and percentage of bantog fields are similar in nature to those for the heteroskedasticity model. The average level of efficiency is once again higher, about 86%, which is very similar to the result for the ZISF model with constant $p$.
Finally, the last set of results is for the logit + heteroskedasticity model, in which farm characteristics influence both the probability of being fully efficient and the distribution of inefficiency for those farms that are not fully efficient. Now none of the farm characteristics considered have significant effects on the distribution of inefficiency for the inefficient farms, but three of them (age of the farmer, household size, and proportion of bantog fields) do have significant effects on the probability of being fully efficient. The coefficients for these three variables have the same signs as in the logit model without heteroskedasticity. It is interesting that we can estimate a model this complicated and still get significant results. Also, we note that, because this model allows a positive probability of full efficiency, we are back to a high average level of technical efficiency, between 85% and 90%.

1.5.3 Model Comparison and Selection

We will now test the restrictions that distinguish the various models we have estimated. Based on the results of our simulations, we will use the likelihood ratio (LR) test. We immediately encounter some difficulties because, to use the LR test (or the other tests we considered in Section 1.3.4), the hypotheses should be nested, whereas not all of our models are nested. There are two possible nested hierarchies of models: (a) basic SF ⊂ ZISF ⊂ logit ⊂ logit-heteroskedasticity, and (b) basic SF ⊂ heteroskedasticity.

We begin with hierarchy (a). When we test the hypothesis that $p = 0$ in the ZISF model, we obtain LR = 5.07, which exceeds the 5% critical value of 2.71 for the distribution ($\frac12\chi^2_0 + \frac12\chi^2_1$). So we reject the basic SF model in favor of the ZISF model. Next we test the ZISF model against the logit model. This is a standard test of the hypothesis that $\delta_1 = \delta_2 = \delta_3 = \delta_4 = \delta_5 = 0$ in the logit model. The LR statistic of 23.96 exceeds the 5% critical value for the $\chi^2_5$ distribution (11.07), so we reject the ZISF model in favor of the logit model. Finally, we test the logit model against the logit-heteroskedasticity model.
This is a standard test of the hypothesis that $\gamma_1 = \gamma_2 = \gamma_3 = \gamma_4 = \gamma_5 = 0$ in the logit-heteroskedasticity model. The LR test statistic of 11.12 very marginally exceeds the 5% critical value, so we reject the logit model in favor of the logit-heteroskedasticity model, but not overwhelmingly. Note that the logit model is rejected even though, in the logit-heteroskedasticity model, none of the individual $\gamma_j$ in the heteroskedasticity portion of the model is individually significant.

Now consider hierarchy (b). We test the basic SF model against the heteroskedasticity model. This is a standard test of the hypothesis that $\gamma_1 = \gamma_2 = \gamma_3 = \gamma_4 = \gamma_5 = 0$ in the heteroskedasticity model. The LR statistic of 17.04 exceeds the 5% critical value, so we reject the basic SF model in favor of the heteroskedasticity model.

We cannot test the heteroskedasticity model against the logit-heteroskedasticity model, at least not by standard methods, since the restriction that would convert the logit-heteroskedasticity model into the heteroskedasticity model is $\delta_0 = -\infty$, and under this null the other $\delta_j$ are unidentified. Still, the difference in log-likelihoods, which is 11.56, would appear to argue in favor of the logit-heteroskedasticity model.

In order to compare the models in a slightly different way, and to amplify on the comment at the end of the preceding paragraph, we will also consider some standard model selection criteria. We consider AIC $= -2LF + 2d$ (Akaike (1974)), BIC $= -2LF + d\ln n$ (Schwarz (1978)), and HQIC $= -2LF + 2d\ln(\ln n)$ (Hannan and Quinn (1979)), where $d$ is the number of estimated parameters, $n$ is the number of observations, and $LF$ is the log-likelihood value. Smaller values of these criteria indicate a "better" model. We note that all three criteria favor the logit model over the heteroskedasticity model, two of the three favor the logit-heteroskedasticity model over the heteroskedasticity model, and two of the three favor the logit model over the logit-heteroskedasticity model.
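The three criteria are simple functions of the maximized log likelihood, and can be computed as follows (our own helper; the numbers in the test below are illustrative, not values from Table 1.11):

```python
import math

def model_selection_criteria(loglik, d, n):
    """AIC, BIC, and HQIC as defined in the text; smaller is better.

    loglik: maximized log-likelihood value LF
    d:      number of estimated parameters
    n:      number of observations
    """
    return {
        "AIC":  -2.0 * loglik + 2.0 * d,
        "BIC":  -2.0 * loglik + d * math.log(n),
        "HQIC": -2.0 * loglik + 2.0 * d * math.log(math.log(n)),
    }
```

For any $n \ge 16$ or so, the penalties order as AIC < HQIC < BIC, which is why BIC tends to favor more parsimonious models than AIC.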
Based on the results of our hypothesis tests and the comparison of the model selection procedures, we conclude that a case could be made for either the logit model or the logit-heteroskedasticity model as the preferred model. As we saw above, the substantive conclusions from these two models were basically the same.

1.6 CONCLUDING REMARKS

In this chapter we considered a generalization of the usual stochastic frontier model. In this new "ZISF" model, there is a probability $p$ that a firm is fully efficient. This model was proposed by Kumbhakar, Parmeter, and Tsionas (2013), who showed how to estimate the model by MLE, how to update the probability of a firm being fully efficient on the basis of the data, and how to estimate the inefficiency level of a firm.

We extend their analysis in a number of ways. We show that a result similar to that of Waldman (1982) holds in the ZISF model, namely, that there is always a stationary point of the likelihood at parameter values that indicate no inefficiency, and that this point is a local maximum if the OLS residuals are positively skewed. We propose a model in which the probability of a firm being fully efficient is not constant, but rather is determined by a logit or probit model based on observable characteristics. We show how to test the hypothesis that $p = 0$. We also provide a more comprehensive set of simulations than Kumbhakar, Parmeter, and Tsionas (2013) did, and we include an empirical example.

Let $\lambda = \sigma_u/\sigma_v$, a standard measure in the stochastic frontier literature of the relative size of technical inefficiency and statistical noise. The main practical implication of our simulations is that the ZISF model works well when neither $\lambda$ nor $p$ is small. However, we have trouble estimating $p$ reliably, or testing whether it equals zero, when $\lambda$ is small. And if the true $p$ equals zero, we have trouble estimating it reliably unless $\lambda$ is larger than is empirically plausible (e.g., $\lambda = 5$). Larger sample size obviously helps, but the above conclusions do not depend strongly on sample size in our simulations. Situations where the ZISF model may be useful therefore have the characteristics that (i) it is reasonable to suppose that some firms are fully efficient, and (ii) the inefficiency levels of the inefficient firms are not small relative to statistical noise. Such situations do not seem implausible, and it is an empirical question as to how common they are.
Table 1.1: Frequency of a positive third moment of the OLS residuals

            n = 50                  n = 100                 n = 200                 n = 400
         λ=0.5  λ=1    λ=2      λ=0.5  λ=1    λ=2      λ=0.5  λ=1    λ=2      λ=0.5  λ=1    λ=2
p = 0.0  0.476  0.363  0.114    0.463  0.300  0.038    0.447  0.224  0.006    0.421  0.139  0.000
p = 0.1  0.475  0.352  0.102    0.460  0.286  0.031    0.443  0.210  0.003    0.416  0.123  0.000
p = 0.2  0.472  0.338  0.080    0.456  0.266  0.019    0.438  0.185  0.001    0.407  0.101  0.000
p = 0.3  0.469  0.322  0.058    0.451  0.245  0.011    0.431  0.161  0.000    0.398  0.079  0.000
p = 0.4  0.466  0.308  0.042    0.447  0.226  0.006    0.424  0.141  0.000    0.391  0.062  0.000
p = 0.5  0.465  0.300  0.034    0.444  0.215  0.004    0.421  0.129  0.000    0.386  0.053  0.000
p = 0.6  0.466  0.299  0.033    0.445  0.215  0.004    0.420  0.128  0.000    0.387  0.052  0.000
p = 0.7  0.468  0.311  0.043    0.449  0.229  0.006    0.427  0.143  0.000    0.394  0.063  0.000
p = 0.8  0.474  0.342  0.076    0.458  0.268  0.017    0.439  0.185  0.001    0.414  0.098  0.000
p = 0.9  0.485  0.399  0.178    0.474  0.348  0.079    0.463  0.280  0.019    0.446  0.200  0.001

Table 1.2: Basic SF Model vs. ZISF Model, all replications: n = 200
(each cell: mean, bias, MSE; "—" where the parameter is not estimated; log L and the mean posterior probability P̂(z=1|ε̂) are single values)

λ = 1 (σu = 1, σv = 1), p = 0 and p = 0.25
           Basic SF (p=0)        ZISF (p=0)            Basic SF (p=0.25)     ZISF (p=0.25)
β0         0.87  −0.13  0.20     0.60  −0.40  0.29     1.16   0.16  0.19     0.85  −0.15  0.15
σu         0.84  −0.16  0.30     0.91  −0.09  0.40     0.95  −0.05  0.26     1.00   0.00  0.33
σv         0.99  −0.01  0.02     1.02   0.02  0.02     0.97  −0.03  0.02     1.01   0.01  0.02
p            —                   0.53   0.53  0.43       —                   0.49   0.24  0.19
λ          0.93  −0.07  0.44     0.95  −0.05  0.45     1.07   0.07  0.45     1.06   0.06  0.43
σ          1.39  −0.02  0.04     1.47   0.06  0.12     1.44   0.03  0.04     1.51   0.09  0.11
log L    −313.00               −312.85               −314.71               −314.55
E[u|ε̂]     0.67  −0.13  0.50     0.40  −0.40  0.60     0.76   0.16  0.51     0.45  −0.15  0.47
P̂(z=1|ε̂)    —                   0.53                    —                   0.49

λ = 1 (σu = 1, σv = 1), p = 0.5 and p = 0.75
           Basic SF (p=0.5)      ZISF (p=0.5)          Basic SF (p=0.75)     ZISF (p=0.75)
β0         1.40   0.40  0.30     1.04   0.04  0.12     1.51   0.51  0.40     1.16   0.16  0.13
σu         1.00   0.00  0.22     1.07   0.07  0.31     0.89  −0.11  0.23     1.01   0.01  0.34
σv         0.93  −0.07  0.02     0.98  −0.02  0.02     0.90  −0.10  0.03     0.95  −0.05  0.02
p            —                   0.51   0.01  0.12       —                   0.57  −0.18  0.16
λ          1.16   0.16  0.45     1.15   0.15  0.43     1.08   0.08  0.43     1.12   0.12  0.44
σ          1.44   0.03  0.04     1.52   0.11  0.12     1.35  −0.07  0.04     1.47   0.06  0.13
log L    −311.00               −310.77               −300.69               −300.40
E[u|ε̂]     0.80   0.40  0.59     0.44   0.05  0.40     0.71   0.51  0.62     0.36   0.16  0.33
P̂(z=1|ε̂)    —                   0.51                    —                   0.57

Table 1.2 (cont'd)

λ = 2 (σu = 1, σv = 0.5), p = 0 and p = 0.25
           Basic SF (p=0)        ZISF (p=0)            Basic SF (p=0.25)     ZISF (p=0.25)
β0         0.98  −0.02  0.02     0.71  −0.29  0.14     1.26   0.26  0.08     0.98  −0.02  0.05
σu         0.98  −0.02  0.03     0.97  −0.03  0.05     1.08   0.08  0.03     1.04   0.04  0.03
σv         0.50   0.00  0.01     0.54   0.04  0.01     0.45  −0.05  0.01     0.50   0.00  0.01
p            —                   0.33   0.33  0.19       —                   0.30   0.05  0.06
λ          2.08   0.08  0.48     1.89  −0.11  0.41     2.53   0.53  0.88     2.21   0.21  0.55
σ          1.11  −0.01  0.01     1.12   0.00  0.03     1.18   0.06  0.01     1.16   0.05  0.02
log L    −229.93               −229.65               −232.55               −232.22
E[u|ε̂]     0.78  −0.02  0.17     0.51  −0.29  0.30     0.86   0.26  0.23     0.58  −0.02  0.20
P̂(z=1|ε̂)    —                   0.33                    —                   0.30

λ = 2 (σu = 1, σv = 0.5), p = 0.5 and p = 0.75
           Basic SF (p=0.5)      ZISF (p=0.5)          Basic SF (p=0.75)     ZISF (p=0.75)
β0         1.45   0.45  0.21     1.07   0.07  0.05     1.52   0.52  0.28     1.05   0.05  0.03
σu         1.07   0.07  0.02     1.06   0.06  0.02     0.91  −0.09  0.02     1.03   0.03  0.04
σv         0.39  −0.11  0.02     0.47  −0.03  0.01     0.37  −0.13  0.02     0.48  −0.02  0.00
p            —                   0.44  −0.06  0.06       —                   0.67  −0.08  0.05
λ          2.83   0.83  1.29     2.34   0.34  0.57     2.54   0.54  0.72     2.18   0.18  0.28
σ          1.15   0.03  0.01     1.16   0.04  0.02     0.99  −0.13  0.03     1.14   0.02  0.03
log L    −221.64               −220.71               −196.18               −193.77
E[u|ε̂]     0.85   0.45  0.35     0.47   0.08  0.18     0.72   0.52  0.39     0.25   0.05  0.11
P̂(z=1|ε̂)    —                   0.44                    —                   0.67

Table 1.2 (cont'd)

λ = 5 (σu = 1, σv = 0.2), p = 0 and p = 0.25
           Basic SF (p=0)        ZISF (p=0)            Basic SF (p=0.25)     ZISF (p=0.25)
β0         1.00   0.00  0.00     0.85  −0.15  0.04     1.23   0.23  0.05     1.03   0.03  0.01
σu         1.00   0.00  0.01     0.97  −0.03  0.01     1.05   0.05  0.01     1.02   0.02  0.00
σv         0.19  −0.01  0.00     0.23   0.03  0.00     0.14  −0.06  0.01     0.19  −0.01  0.00
p            —                   0.16   0.16  0.05       —                   0.23  −0.02  0.02
λ          5.61   0.61  7.04     4.79  −0.21  6.21     8.32   3.32  21.81    6.06   1.06  10.77
σ          1.02   0.00  0.00     1.00  −0.02  0.01     1.05   0.03  0.01     1.04   0.02  0.00
log L    −176.43               −176.06               −174.14               −172.76
E[u|ε̂]     0.80   0.00  0.04     0.65  −0.15  0.08     0.82   0.23  0.09     0.63   0.03  0.04
P̂(z=1|ε̂)    —                   0.16                    —                   0.23

λ = 5 (σu = 1, σv = 0.2), p = 0.5 and p = 0.75
           Basic SF (p=0.5)      ZISF (p=0.5)          Basic SF (p=0.75)     ZISF (p=0.75)
β0         1.30   0.30  0.09     1.00   0.00  0.00     1.32   0.32  0.10     1.00   0.00  0.00
σu         0.93  −0.07  0.01     1.01   0.01  0.01     0.71  −0.29  0.09     1.00   0.00  0.02
σv         0.11  −0.09  0.01     0.20   0.00  0.00     0.11  −0.09  0.01     0.20   0.00  0.00
p            —                   0.50   0.00  0.01       —                   0.75   0.00  0.00
λ          8.96   3.96  36.70    5.17   0.17  0.61     6.96   1.96  19.52    5.05   0.06  0.43
σ          0.94  −0.08  0.01     1.03   0.01  0.01     0.72  −0.30  0.09     1.02   0.00  0.02
log L    −148.06               −138.15                −99.12                −74.13
E[u|ε̂]     0.70   0.30  0.13     0.40   0.00  0.03     0.52   0.32  0.13     0.20   0.00  0.02
P̂(z=1|ε̂)    —                   0.50                    —                   0.75

Table 1.3: Basic SF Model vs. ZISF Model, correct skew replications: n = 200

λ = 1 (σu = 1, σv = 1), p = 0 and p = 0.25
           Basic SF (p=0)        ZISF (p=0)            Basic SF (p=0.25)     ZISF (p=0.25)
β0         1.06   0.06  0.07     0.71  −0.29  0.20     1.29   0.29  0.16     0.93  −0.07  0.12
σu         1.08   0.08  0.10     1.17   0.17  0.22     1.13   0.13  0.12     1.19   0.19  0.20
σv         0.94  −0.06  0.02     0.98  −0.02  0.02     0.94  −0.06  0.02     0.98  −0.02  0.02
p            —                   0.40   0.40  0.26       —                   0.39   0.14  0.12
λ          1.20   0.20  0.28     1.23   0.23  0.30     1.27   0.27  0.35     1.26   0.26  0.32
σ          1.46   0.05  0.03     1.56   0.15  0.13     1.50   0.08  0.03     1.57   0.16  0.11
log L    −313.16               −312.97               −315.09               −314.90
E[u|ε̂]     0.86   0.06  0.36     0.51  −0.29  0.49     0.90   0.30  0.47     0.54  −0.06  0.43
P̂(z=1|ε̂)    —                   0.40                    —                   0.39

λ = 1 (σu = 1, σv = 1), p = 0.5 and p = 0.75
           Basic SF (p=0.5)      ZISF (p=0.5)          Basic SF (p=0.75)     ZISF (p=0.75)
β0         1.51   0.51  0.32     1.10   0.10  0.12     1.64   0.64  0.46     1.22   0.22  0.15
σu         1.14   0.14  0.11     1.22   0.22  0.21     1.05   0.05  0.09     1.19   0.19  0.23
σv         0.91  −0.09  0.02     0.96  −0.04  0.02     0.87  −0.13  0.03     0.93  −0.07  0.02
p            —                   0.44  −0.06  0.11       —                   0.49  −0.26  0.18
λ          1.33   0.33  0.38     1.31   0.31  0.35     1.27   0.27  0.33     1.31   0.31  0.35
σ          1.48   0.07  0.03     1.58   0.16  0.12     1.40  −0.02  0.03     1.54   0.13  0.13
log L    −311.17               −310.91               −301.06               −300.72
E[u|ε̂]     0.91   0.51  0.61     0.51   0.11  0.40     0.84   0.64  0.69     0.42   0.22  0.34
P̂(z=1|ε̂)    —                   0.44                    —                   0.49

Table 1.3 (cont'd)

λ = 2 (σu = 1, σv = 0.5), p = 0 and p = 0.25
           Basic SF (p=0)        ZISF (p=0)            Basic SF (p=0.25)     ZISF (p=0.25)
β0         0.99  −0.01  0.02     0.72  −0.28  0.14     1.26   0.26  0.08     0.98  −0.02  0.05
σu         0.98  −0.02  0.03     0.97  −0.03  0.04     1.08   0.08  0.03     1.04   0.04  0.03
σv         0.49  −0.01  0.01     0.54   0.04  0.01     0.45  −0.05  0.01     0.50   0.00  0.01
p            —                   0.32   0.32  0.18       —                   0.30   0.05  0.06
λ          2.10   0.10  0.46     1.90  −0.10  0.39     2.53   0.53  0.88     2.21   0.21  0.55
σ          1.11  −0.01  0.01     1.12   0.00  0.03     1.18   0.06  0.01     1.16   0.05  0.02
log L    −229.94               −229.66               −232.57               −232.24
E[u|ε̂]     0.78  −0.01  0.16     0.52  −0.28  0.29     0.86   0.26  0.23     0.58  −0.02  0.20
P̂(z=1|ε̂)    —                   0.32                    —                   0.30

λ = 2 (σu = 1, σv = 0.5), p = 0.5 and p = 0.75
           Basic SF (p=0.5)      ZISF (p=0.5)          Basic SF (p=0.75)     ZISF (p=0.75)
β0         1.45   0.45  0.21     1.07   0.07  0.05     1.52   0.52  0.28     1.05   0.05  0.03
σu         1.07   0.07  0.02     1.06   0.06  0.02     0.91  −0.09  0.02     1.03   0.03  0.04
σv         0.39  −0.11  0.02     0.47  −0.03  0.01     0.37  −0.13  0.02     0.48  −0.02  0.00
p            —                   0.44  −0.06  0.06       —                   0.67  −0.08  0.05
λ          2.83   0.83  1.29     2.34   0.34  0.57     2.54   0.54  0.72     2.19   0.19  0.28
σ          1.15   0.03  0.01     1.16   0.04  0.02     0.99  −0.13  0.03     1.14   0.02  0.03
log L    −221.64               −220.71               −196.20               −193.79
E[u|ε̂]     0.85   0.45  0.35     0.47   0.08  0.18     0.72   0.52  0.39     0.25   0.05  0.11
P̂(z=1|ε̂)    —                   0.44                    —                   0.67

Table 1.3 (cont'd)

λ = 5 (σu = 1, σv = 0.2), p = 0 and p = 0.25
           Basic SF (p=0)        ZISF (p=0)            Basic SF (p=0.25)     ZISF (p=0.25)
β0         1.00   0.00  0.00     0.85  −0.15  0.04     1.23   0.23  0.05     1.03   0.03  0.01
σu         1.00   0.00  0.01     0.97  −0.03  0.01     1.04   0.04  0.01     1.02   0.02  0.00
σv         0.19  −0.01  0.00     0.23   0.03  0.00     0.14  −0.06  0.01     0.19  −0.01  0.00
p            —                   0.16   0.16  0.05       —                   0.23  −0.02  0.02
λ          5.61   0.61  7.04     4.79  −0.21  6.21     8.32   3.32  21.81    6.06   1.06  10.77
σ          1.02   0.00  0.00     1.00  −0.02  0.01     1.05   0.03  0.01     1.04   0.02  0.00
log L    −176.43               −176.06               −174.14               −172.76
E[u|ε̂]     0.80   0.00  0.04     0.65  −0.15  0.08     0.82   0.23  0.09     0.63   0.03  0.04
P̂(z=1|ε̂)    —                   0.16                    —                   0.23

λ = 5 (σu = 1, σv = 0.2), p = 0.5 and p = 0.75
           Basic SF (p=0.5)      ZISF (p=0.5)          Basic SF (p=0.75)     ZISF (p=0.75)
β0         1.30   0.30  0.09     1.00   0.00  0.00     1.32   0.32  0.10     1.00   0.00  0.00
σu         0.93  −0.07  0.01     1.01   0.01  0.01     0.71  −0.29  0.09     1.00   0.00  0.02
σv         0.11  −0.09  0.01     0.20   0.00  0.00     0.11  −0.09  0.01     0.20   0.00  0.00
p            —                   0.50   0.00  0.01       —                   0.75   0.00  0.00
λ          8.96   3.96  36.70    5.17   0.17  0.61     6.96   1.96  19.52    5.05   0.06  0.43
σ          0.94  −0.08  0.01     1.03   0.01  0.01     0.72  −0.30  0.09     1.02   0.00  0.02
log L    −148.06               −138.15                −99.12                −74.13
E[u|ε̂]     0.70   0.30  0.13     0.40   0.00  0.03     0.52   0.32  0.13     0.20   0.00  0.02
P̂(z=1|ε̂)    —                   0.50                    —                   0.75

Table 1.4: Basic SF Model vs. ZISF Model, all replications: n = 500

λ = 1 (σu = 1, σv = 1), p = 0 and p = 0.25
           Basic SF (p=0)        ZISF (p=0)            Basic SF (p=0.25)     ZISF (p=0.25)
β0         0.90  −0.10  0.11     0.59  −0.41  0.26     1.19   0.19  0.13     0.85  −0.15  0.12
σu         0.88  −0.12  0.17     0.94  −0.06  0.24     0.99  −0.01  0.13     1.03   0.03  0.19
σv         1.01   0.01  0.01     1.04   0.04  0.01     0.99  −0.01  0.01     1.03   0.03  0.01
p            —                   0.50   0.50  0.38       —                   0.46   0.21  0.16
λ          0.91  −0.09  0.21     0.93  −0.07  0.23     1.04   0.04  0.19     1.03   0.03  0.19
σ          1.39  −0.02  0.02     1.46   0.05  0.08     1.44   0.03  0.02     1.50   0.09  0.07
log L    −785.23               −785.03               −790.14               −789.94
E[u|ε̂]     0.70  −0.10  0.40     0.39  −0.41  0.55     0.79   0.19  0.42     0.45  −0.15  0.42
P̂(z=1|ε̂)    —                   0.50                    —                   0.46

λ = 1 (σu = 1, σv = 1), p = 0.5 and p = 0.75
           Basic SF (p=0.5)      ZISF (p=0.5)          Basic SF (p=0.75)     ZISF (p=0.75)
β0         1.45   0.45  0.26     1.05   0.05  0.09     1.56   0.56  0.38     1.14   0.14  0.09
σu         1.06   0.06  0.10     1.10   0.10  0.14     0.95  −0.05  0.11     1.05   0.05  0.19
σv         0.94  −0.06  0.01     0.99  −0.01  0.01     0.91  −0.09  0.02     0.97  −0.03  0.01
p            —                   0.48  −0.02  0.10       —                   0.58  −0.17  0.14
λ          1.17   0.17  0.19     1.13   0.13  0.16     1.08   0.08  0.19     1.10   0.10  0.21
σ          1.45   0.04  0.02     1.51   0.10  0.06     1.35  −0.06  0.02     1.47   0.06  0.08
log L    −779.58               −779.28               −754.00               −753.53
E[u|ε̂]     0.85   0.45  0.53     0.45   0.05  0.36     0.76   0.56  0.58     0.34   0.14  0.27
P̂(z=1|ε̂)    —                   0.48                    —                   0.58

Table 1.4 (cont'd)

λ = 2 (σu = 1, σv = 0.5), p = 0 and p = 0.25
           Basic SF (p=0)        ZISF (p=0)            Basic SF (p=0.25)     ZISF (p=0.25)
β0         0.99  −0.01  0.01     0.74  −0.26  0.11     1.26   0.26  0.07     1.00   0.00  0.04
σu         0.99  −0.01  0.01     0.95  −0.05  0.01     1.08   0.08  0.01     1.02   0.02  0.01
σv         0.50   0.00  0.00     0.54   0.04  0.01     0.46  −0.04  0.00     0.50   0.00  0.00
p            —                   0.30   0.30  0.15       —                   0.27   0.02  0.04
λ          2.02   0.02  0.15     1.79  −0.21  0.19     2.39   0.39  0.32     2.08   0.08  0.18
σ          1.11   0.00  0.00     1.10  −0.02  0.01     1.18   0.06  0.01     1.15   0.03  0.00
log L    −577.71               −577.38               −584.97               −584.58
E[u|ε̂]     0.79  −0.01  0.15     0.54  −0.26  0.26     0.86   0.26  0.22     0.60   0.00  0.18
P̂(z=1|ε̂)    —                   0.30                    —                   0.27

λ = 2 (σu = 1, σv = 0.5), p = 0.5 and p = 0.75
           Basic SF (p=0.5)      ZISF (p=0.5)          Basic SF (p=0.75)     ZISF (p=0.75)
β0         1.45   0.45  0.21     1.05   0.05  0.03     1.52   0.52  0.27     1.02   0.02  0.01
σu         1.07   0.07  0.01     1.02   0.02  0.01     0.91  −0.09  0.01     1.00   0.00  0.02
σv         0.40  −0.10  0.01     0.49  −0.01  0.00     0.38  −0.12  0.02     0.49  −0.01  0.00
p            —                   0.46  −0.04  0.04       —                   0.72  −0.03  0.02
λ          2.72   0.72  0.68     2.14   0.14  0.15     2.43   0.43  0.32     2.04   0.04  0.07
σ          1.15   0.03  0.00     1.14   0.02  0.01     0.99  −0.13  0.02     1.12   0.00  0.01
log L    −556.16               −554.57               −492.83               −487.45
E[u|ε̂]     0.85   0.45  0.34     0.45   0.05  0.15     0.72   0.52  0.39     0.21   0.02  0.09
P̂(z=1|ε̂)    —                   0.46                    —                   0.72

Table 1.4 (cont'd)

λ = 5 (σu = 1, σv = 0.2), p = 0 and p = 0.25
           Basic SF (p=0)        ZISF (p=0)            Basic SF (p=0.25)     ZISF (p=0.25)
β0         1.00   0.00  0.00     0.88  −0.12  0.03     1.22   0.22  0.05     1.01   0.01  0.01
σu         1.00   0.00  0.00     0.97  −0.03  0.00     1.04   0.04  0.00     1.01   0.01  0.00
σv         0.20   0.00  0.00     0.22   0.02  0.00     0.14  −0.06  0.00     0.19  −0.01  0.00
p            —                   0.12   0.12  0.03       —                   0.24  −0.01  0.01
λ          5.18   0.18  0.99     4.55  −0.45  1.35     7.55   2.55  8.46     5.36   0.36  1.87
σ          1.02   0.00  0.00     0.99  −0.03  0.00     1.05   0.03  0.00     1.03   0.01  0.00
log L    −442.32               −441.94               −436.65               −433.87
E[u|ε̂]     0.80   0.00  0.03     0.68  −0.12  0.06     0.82   0.22  0.08     0.61   0.01  0.04
P̂(z=1|ε̂)    —                   0.12                    —                   0.24

λ = 5 (σu = 1, σv = 0.2), p = 0.5 and p = 0.75
           Basic SF (p=0.5)      ZISF (p=0.5)          Basic SF (p=0.75)     ZISF (p=0.75)
β0         1.30   0.30  0.09     1.00   0.00  0.00     1.32   0.32  0.10     1.00   0.00  0.00
σu         0.93  −0.07  0.01     1.00   0.00  0.00     0.71  −0.29  0.09     1.00   0.00  0.01
σv         0.12  −0.08  0.01     0.20   0.00  0.00     0.11  −0.09  0.01     0.20   0.00  0.00
p            —                   0.50   0.00  0.00       —                   0.75   0.00  0.00
λ          8.01   3.01  10.75    5.03   0.03  0.17     6.36   1.36  2.77     5.00   0.00  0.16
σ          0.93  −0.09  0.01     1.02   0.00  0.00     0.72  −0.30  0.09     1.02   0.00  0.01
log L    −371.51               −347.55               −250.31               −188.10
E[u|ε̂]     0.70   0.30  0.12     0.40   0.00  0.02     0.52   0.32  0.13     0.20   0.00  0.01
P̂(z=1|ε̂)    —                   0.50                    —                   0.75

Table 1.5: Likelihood Ratio Test, n = 200

           All        Correct Skew        Incorrect Skew
MeanRejectionTotal MeanRejectionTotal MeanRejectionTotal l = 1 p = 0 0.2921 ( 0.02 ) 1000 0.3821 ( 0.03 ) 776 0.000 ( 0.00 ) 224 p = 0.25 0.3220 ( 0.02 ) 1000 0.3720 ( 0.02 ) 842 0.000 ( 0.00 ) 158 p = 0.5 0.4640 ( 0.04 ) 1000 0.5340 ( 0.05 ) 878 0.000 ( 0.00 ) 122 p = 0.75 0.5753 ( 0.05 ) 1000 0.6753 ( 0.06 ) 850 0.000 ( 0.00 ) 150 l = 2 p = 0 0.5642 ( 0.04 ) 1000 0.5642 ( 0.04 ) 994 0.000 ( 0.00 ) 6 p = 0.25 0.6663 ( 0.06 ) 1000 0.6663 ( 0.06 ) 999 0.000 ( 0.00 ) 1 p = 0.5 1.87244 ( 0.24 ) 1000 1.87244 ( 0.24 ) 1000 Œ0 p = 0.75 4.81596 ( 0.60 ) 1000 4.81596 ( 0.60 ) 999 0.000 ( 0.00 ) 1 l = 5 p = 0 0.7360 ( 0.06 ) 999 + 0.7361 ( 0.06 ) 999 + Œ0 p = 0.25 2.76393 ( 0.39 ) 996 + 2.76395 ( 0.40 ) 996 + Œ0 p = 0.5 19.82988 ( 0.99 ) 997 + 19.82988 ( 0.99 ) 997 + Œ0 p = 0.75 49.98997 ( 1.00 ) 997 + 49.98997 ( 1.00 ) 997 + Œ0 1. + Someiterationsdroppeddueto ‹ s v beingtoosmallsuchthat ‹ l isnotwell 35 Table1.6:LikelihoodRatioTest, n = 500 All CorrectSkew IncorrectSkew MeanRejectionTotal MeanRejectionTotal MeanRejectionTotal l = 1 p = 0 0.3930 ( 0.03 ) 1000 0.4530 ( 0.03 ) 879 0.000 ( 0.00 ) 121 p = 0.25 0.4128 ( 0.03 ) 1000 0.4528 ( 0.03 ) 921 0.000 ( 0.00 ) 79 p = 0.5 0.6048 ( 0.05 ) 1000 0.6248 ( 0.05 ) 964 0.000 ( 0.00 ) 36 p = 0.75 0.95102 ( 0.10 ) 1000 1.01102 ( 0.11 ) 939 0.000 ( 0.00 ) 61 l = 2 p = 0 0.6656 ( 0.06 ) 1000 0.6656 ( 0.06 ) 1000 Œ0 p = 0.25 0.7763 ( 0.06 ) 1000 0.7763 ( 0.06 ) 1000 Œ0 p = 0.5 3.19461 ( 0.46 ) 1000 3.19461 ( 0.46 ) 1000 Œ0 p = 0.75 10.76911 ( 0.91 ) 1000 10.76911 ( 0.91 ) 1000 Œ0 l = 5 p = 0 0.7570 ( 0.07 ) 1000 0.7570 ( 0.07 ) 1000 Œ0 p = 0.25 5.55689 ( 0.69 ) 1000 5.55689 ( 0.69 ) 1000 Œ0 p = 0.5 47.941000 ( 1.00 ) 1000 47.941000 ( 1.00 ) 1000 Œ0 p = 0.75 124.421000 ( 1.00 ) 1000 124.421000 ( 1.00 ) 1000 Œ0 36 Table1.7:WaldTest, n = 200 OPG Hessian Robust MeanRejectionTotal MeanRejectionTotal MeanRejectionTotal l = 1 p = 0 5.92128 ( 0.17 ) 773 57.97189 ( 0.24 ) 776 143.75546 ( 0.70 ) 776 p = 0.25 4.42147 ( 0.18 ) 838 36.90215 ( 0.26 
) 842 104.61592 ( 0.70 ) 842 p = 0.5 6.98179 ( 0.21 ) 873 40.61270 ( 0.31 ) 878 135.57629 ( 0.72 ) 878 p = 0.75 9.74247 ( 0.29 ) 849 63.53334 ( 0.39 ) 850 146.11607 ( 0.71 ) 850 l = 2 p = 0 6.18215 ( 0.22 ) 994 21.23290 ( 0.29 ) 994 45.46620 ( 0.62 ) 994 p = 0.25 4.26264 ( 0.26 ) 999 10.21320 ( 0.32 ) 997 19.28618 ( 0.62 ) 999 p = 0.5 10.91580 ( 0.58 ) 1000 14.37639 ( 0.64 ) 1000 19.46735 ( 0.73 ) 1000 p = 0.75 45.72856 ( 0.86 ) 999 61.79883 ( 0.88 ) 998 80.71906 ( 0.91 ) 999 l = 5 p = 0 3.25266 ( 0.27 ) 999 + 3.58315 ( 0.32 ) 996 + 5.03490 ( 0.49 ) 999 + p = 0.25 8.63696 ( 0.70 ) 998 + 9.16725 ( 0.73 ) 997 + 9.90727 ( 0.73 ) 998 + p = 0.5 59.73997 ( 1.00 ) 998 + 60.75997 ( 1.00 ) 998 + 61.12996 ( 1.00 ) 998 + p = 0.75 247.621000 ( 1.00 ) 1000 254.991000 ( 1.00 ) 1000 257.771000 ( 1.00 ) 1000 1. SomeiterationsaredroppedduetoasingularOPGvariancematrix. 2. SomeoftheiterationswhereMLEisattheboundary( ‹ p = 0)aredroppedduetonotnegativeHessian. 3. + Someiterationsdroppeddueto ‹ s v beingtoosmallsuchthat ‹ l isnotwell 37 Table1.8:WaldTest, n = 500 OPG Hessian Robust MeanRejectionTotal MeanRejectionTotal MeanRejectionTotal l = 1 p = 0 12.05201 ( 0.23 ) 878 112.28250 ( 0.29 ) 877 286.90620 ( 0.71 ) 878 p = 0.25 10.34203 ( 0.22 ) 921 94.59264 ( 0.29 ) 921 215.21639 ( 0.69 ) 921 p = 0.5 12.69275 ( 0.29 ) 963 47.99347 ( 0.36 ) 964 121.96661 ( 0.69 ) 964 p = 0.75 24.74368 ( 0.39 ) 938 120.86447 ( 0.48 ) 937 258.85678 ( 0.72 ) 939 l = 2 p = 0 5.32262 ( 0.26 ) 1000 26.94310 ( 0.31 ) 1000 47.13618 ( 0.62 ) 1000 p = 0.25 3.94306 ( 0.31 ) 1000 0.47373 ( 0.37 ) 999 8.46642 ( 0.64 ) 1000 p = 0.5 17.10800 ( 0.80 ) 1000 19.10831 ( 0.83 ) 1000 22.49837 ( 0.84 ) 1000 p = 0.75 93.38988 ( 0.99 ) 1000 105.45987 ( 0.99 ) 1000 121.46983 ( 0.98 ) 1000 l = 5 p = 0 3.02282 ( 0.28 ) 1000 3.24311 ( 0.31 ) 1000 4.79496 ( 0.50 ) 1000 p = 0.25 17.87890 ( 0.89 ) 1000 18.79894 ( 0.89 ) 1000 20.02893 ( 0.89 ) 1000 p = 0.5 142.551000 ( 1.00 ) 1000 143.991000 ( 1.00 ) 1000 144.731000 ( 1.00 ) 1000 p = 
0.75 609.811000 ( 1.00 ) 1000 618.151000 ( 1.00 ) 1000 621.491000 ( 1.00 ) 1000 1. SomeiterationsaredroppedduetoasingularOPGvariancematrix. 2. SomeoftheiterationswhereMLEisattheboundary( ‹ p = 0or ‹ p = 1)aredroppedduetonotnegative Hessian. 3. Oneiterationdroppeddueto ‹ p = 1 38 Table1.9:Score-BasedTests, n = 200 LM LM KT MeanRejectionTotal MeanRejectionTotal MeanRejectionTotal l = 1 p = 0 1.8399 ( 0.13 ) 776 1.47122 ( 0.16 ) 776 1.82146 ( 0.19 ) 776 p = 0.25 1.83101 ( 0.12 ) 840 1.48135 ( 0.16 ) 840 1.83160 ( 0.19 ) 840 p = 0.5 1.74112 ( 0.13 ) 878 1.24116 ( 0.13 ) 878 1.74161 ( 0.18 ) 878 p = 0.75 1.74100 ( 0.12 ) 849 1.0788 ( 0.10 ) 849 1.74152 ( 0.18 ) 849 l = 2 p = 0 1.3073 ( 0.07 ) 994 0.94105 ( 0.11 ) 994 1.29135 ( 0.14 ) 994 p = 0.25 1.2075 ( 0.08 ) 999 0.7796 ( 0.10 ) 999 1.19130 ( 0.13 ) 999 p = 0.5 1.69140 ( 0.14 ) 1000 0.1816 ( 0.02 ) 1000 1.68222 ( 0.22 ) 1000 p = 0.75 3.82422 ( 0.42 ) 999 0.043 ( 0.00 ) 999 3.82531 ( 0.53 ) 999 l = 5 p = 0 1.1658 ( 0.06 ) 999 + 0.7179 ( 0.08 ) 999 + 1.11113 ( 0.11 ) 999 + p = 0.25 1.78139 ( 0.14 ) 996 + 0.127 ( 0.01 ) 996 + 1.73216 ( 0.22 ) 996 + p = 0.5 13.10961 ( 0.96 ) 997 + 0.000 ( 0.00 ) 997 + 13.10978 ( 0.98 ) 997 + p = 0.75 29.46996 ( 1.00 ) 997 + 0.000 ( 0.00 ) 997 + 29.46996 ( 1.00 ) 997 + 1. SomeiterationsaredroppedduetoasingularOPGvariancematrix. 2. 
+ Someiterationsdroppeddueto ‹ s v beingtoosmallsuchthat ‹ l isnotwell 39 Table1.10:Score-BasedTests, n = 500 LM LM KT MeanRejectionTotal MeanRejectionTotal MeanRejectionTotal l = 1 p = 0 1.3984 ( 0.10 ) 878 0.99104 ( 0.12 ) 878 1.39130 ( 0.15 ) 878 p = 0.25 1.3681 ( 0.09 ) 921 0.98100 ( 0.11 ) 921 1.36127 ( 0.14 ) 921 p = 0.5 1.2673 ( 0.08 ) 964 0.7079 ( 0.08 ) 964 1.26125 ( 0.13 ) 964 p = 0.75 1.3786 ( 0.09 ) 939 0.4550 ( 0.05 ) 939 1.37151 ( 0.16 ) 939 l = 2 p = 0 1.2176 ( 0.08 ) 1000 0.7989 ( 0.09 ) 1000 1.20127 ( 0.13 ) 1000 p = 0.25 1.0748 ( 0.05 ) 1000 0.6268 ( 0.07 ) 1000 1.06107 ( 0.11 ) 1000 p = 0.5 2.59249 ( 0.25 ) 1000 0.032 ( 0.00 ) 1000 2.57370 ( 0.37 ) 1000 p = 0.75 8.14795 ( 0.80 ) 1000 0.000 ( 0.00 ) 1000 8.14887 ( 0.89 ) 1000 l = 5 p = 0 1.0656 ( 0.06 ) 1000 0.6269 ( 0.07 ) 1000 0.97109 ( 0.11 ) 1000 p = 0.25 2.97280 ( 0.28 ) 1000 0.031 ( 0.00 ) 1000 2.92415 ( 0.41 ) 1000 p = 0.5 30.941000 ( 1.00 ) 1000 0.000 ( 0.00 ) 1000 30.941000 ( 1.00 ) 1000 p = 0.75 69.771000 ( 1.00 ) 1000 0.000 ( 0.00 ) 1000 69.771000 ( 1.00 ) 1000 1. SomeiterationsaredroppedduetoasingularOPGvariancematrix. 
Table 1.11: Model Comparison
[Coefficient estimates, standard errors, and t-statistics for the Basic SF, ZISF, Heteroskedasticity, Logit, and Logit+Heteroskedasticity specifications: the production function parameters (constant β_0, time period θ, area β_1, labor β_2, fertilizer β_3, and the second-order terms β_11 through β_33), the inefficiency and heteroskedasticity parameters (σ_u, σ_ui, the γ coefficients on the constant, age, edyrs, hhsize, nadult, and banrat, σ_v, λ, p, and the logit δ coefficients), the mean efficiency measures, the log-likelihood, the number of parameters, and the AIC, BIC, and HQIC model selection criteria.]

CHAPTER 2

HETEROSKEDASTICITY AUTOCORRELATION ROBUST INFERENCE IN TIME SERIES REGRESSIONS WITH MISSING DATA

2.1 INTRODUCTION

It is not unusual to encounter a time series data set with missing observations. Most of the time series literature dealing with missing data focuses on the estimation of dynamic models where the goal is to forecast the missing observations. However, in the relatively simple context of time series regression, there appears to be a sparsity of work related to missing data issues. In particular, little appears to be known about the impact of missing data on heteroskedasticity autocorrelation (HAC) robust tests in regression settings. This chapter attempts to fill this void by analyzing the impact of missing data on robust tests based on nonparametric kernel estimators of long run variances. Following Kiefer and Vogelsang (2005) we focus on obtaining fixed-b results for the robust tests. In addition to capturing the impact of the long run variance estimator's kernel and bandwidth on the robust test statistics, the fixed-b limits also capture the impact of the locations of the missing data on the robust test statistics when either the missing process is non-random or one conditions on the missing locations. In situations where the more traditional approach that seeks to obtain consistency results for variance estimators would be problematic, fixed-b theory delivers useful approximations.

Following Parzen (1963) we characterize missing observations as being driven by a missing process that is a 0-1 binary variable. In terms of a regression model, the Parzen (1963) approach amounts to plugging in zeros for missing observations. Time series with zeros in place of missing data have been labeled amplitude modulated series by Parzen (1963), a convention we adopt throughout this chapter. Because of the zeros, amplitude modulated series are intuitively sensible: the time distances between the observations remain preserved. This would seem particularly relevant for HAC robust testing based on nonparametric kernel estimators (Newey and West (1987) and Andrews (1991)) given that those estimators employ quadratic forms with weights that depend on the time distances of pairs of observations.

Soon after Parzen (1963) introduced the notion of modeling missing data with the amplitude modulated series approach, many authors investigated the impact of missing data on the consistent estimation of spectral density functions. For example, Scheinok (1965) and Bloomfield (1970) consider estimating the spectral density function of the observed process (with missing data) with independent Bernoulli and dependent Bernoulli missing processes, respectively. Neave (1970) estimates a spectral density function with initially scarce data. Later work by Dunsmuir and Robinson (1981) investigated the consistent estimation of the spectral density of the underlying latent process. While HAC robust inference makes use of spectral estimation methods, with the exception of a recent working paper by Datta and Du (2012), there appears to be no attempt in the literature to link this earlier literature on spectral density estimation with regression inference in the case of missing data.
Datta and Du (2012) used the amplitude modulated series approach to investigate robust inference in time series regression settings. Their approach is based on traditional asymptotic theory for HAC robust tests, which appeals to the consistency of the HAC estimators. In the case of non-random missing locations, the traditional approach becomes complicated because of the need to consistently estimate the long run variance of the latent process. While this is possible using results in Dunsmuir and Robinson (1981), it is not clear how to obtain a positive semi-definite variance estimator. In any case, given that it is now well established that fixed-b theory provides better approximations than the traditional approach (see Jansson (2002), Sun, Phillips, and Jin (2008), and Gonçalves and Vogelsang (2011)), obtaining fixed-b results for the missing data case is prudent.

There are three main theoretical findings in this chapter. First, when the missing process is random and satisfies strong mixing conditions, HAC robust t and Wald statistics computed from the amplitude modulated series follow the usual fixed-b limits as in Kiefer and Vogelsang (2005). Second, when the missing process is non-random, the fixed-b limits depend on the locations of missing observations but are otherwise pivotal. Therefore, the fixed-b critical values that one would use in the amplitude modulated series approach depend on whether the missing process is best viewed as random or non-random. Third, a seemingly naive alternative to the amplitude modulated series approach is to simply ignore the missing data. One might reasonably conjecture that ignoring the missing data would be problematic for robust inference. Surprisingly, we find that the fixed-b limits of the robust t and Wald statistics are the standard fixed-b random variables whether the missing process is random or non-random. Here, ignoring the problem (missing data) has no negative consequences and generates the advantage of robustness to whether the missing process is random or non-random.

The rest of this chapter is organized as follows. Section 2.2 defines the model and the amplitude modulated series test statistics in the presence of missing data. Section 2.3 develops fixed-b asymptotic results for the amplitude modulated series test statistics for both random and non-random missing processes. Because the random and non-random missing processes require different regularity conditions, they are treated separately. Simulation of the asymptotic critical values is discussed with a focus on bootstrap methods. Following Gonçalves and Vogelsang (2011), we find that the naive i.i.d. bootstrap is a particularly good option for obtaining valid fixed-b critical values. Finite sample performance of the amplitude modulated series tests for both random and non-random missing processes is examined in Section 2.4 by Monte Carlo simulations. Attention is focused on the relative performance of simulated asymptotic critical values with bootstrap critical values. Section 2.5 analyzes the approach of ignoring missing observations and makes some comparisons with the amplitude modulated series approach. Section 2.6 concludes, and formal proofs are given in Appendices B-D.

2.2 MODEL AND TEST STATISTICS

Consider a regression model without missing observations,

y_t = x_t'β + u_t, (t = 1, 2, ..., T), (2.1)

where β is a (k × 1) vector of regression parameters, x_t is a (k × 1) vector of regressors, and u_t is a mean zero random process. When there are missing observations, (2.1) is the underlying latent process.

In the presence of missing observations, we characterize the missing process as a binary variable. Let {a_t} be a missing process where a_t = 1 if data is observed at time t and a_t = 0 if data is missing at time t. Whether we treat the missing process as non-random or random depends on the structure of the data and the reason why the observations are missing. We consider both stochastic and non-stochastic missing processes.
With the missing process, {a_t}, we define the regression model with missing observations as¹

y*_t = x*_t'β + u*_t, where y*_t = a_t y_t, x*_t = a_t x_t, u*_t = a_t u_t, (t = 1, 2, ..., T). (2.2)

Characterizing the missing process as a 0-1 binary variable and constructing the regression model as (2.2) is one of the standard approaches of treating missing observations in panel data regression models. In time series, Parzen (1963) characterized time series with missing data using a dummy variable and modeled the observed process as (2.2). However, this approach has not become standard in time series, which is surprising because (2.2) can be thought of as a natural way of formulating a regression model with missing observations when there is no particular interest in forecasting the missing data. Model (2.2) is intuitively sensible. Because the zeros are plugged in for missing observations, the true time distances between observations are preserved. At a conceptual level this would appear important when we are using nonparametric kernel covariance matrix estimators. Parzen (1963) labelled the time series in model (2.2) as amplitude modulated (AM) series because the original time series are amplitude modulated by the missing process {a_t}. We adopt the same language here.

¹ For simplicity, we assume that the dependent variable and the independent variables are missing at the same time points.

Throughout our analysis we assume that the latent regression model satisfies exogeneity, E(x_t u_t) = 0, and we assume that the mechanism generating the missing data does not generate an endogeneity problem, i.e. we assume that E(x*_t u*_t) = 0 or equivalently E(a_t x_t u_t) = 0. This allows us to focus on the impact of missing data on robust inference assuming that β is identified by the observed data.

The focus of this chapter is on inference regarding β based on the ordinary least squares (OLS) estimator of β. Inference is carried out to be robust to the form of the heteroskedasticity and serial (auto)correlation. The OLS estimator of β is given by

β̂ = (Σ_{t=1}^T x*_t x*_t')⁻¹ Σ_{t=1}^T x*_t y*_t.

Plugging in for y*_t gives the well known expression

β̂ − β = (Σ_{t=1}^T x*_t x*_t')⁻¹ Σ_{t=1}^T x*_t u*_t = (Σ_{t=1}^T x*_t x*_t')⁻¹ Σ_{t=1}^T v*_t,

where v*_t = x*_t u*_t. The impact of serial correlation on β̂ comes through v*_t, and robust standard errors can be obtained using a nonparametric kernel estimator of the asymptotic variance of T^{−1/2} Σ_{t=1}^T v*_t of the form

Ω̂ = Γ̂_0 + Σ_{j=1}^{T−1} k(j/M)(Γ̂_j + Γ̂_j'),

where Γ̂_j = T⁻¹ Σ_{t=j+1}^T v̂*_t v̂*_{t−j}' are the sample autocovariances of v̂*_t = x*_t û*_t with û*_t = y*_t − x*_t'β̂ the OLS residuals of the AM series, and Γ̂_j = Γ̂_{−j}' for j < 0. Here, k(x) is a kernel function such that k(x) = k(−x), k(0) = 1, |k(x)| ≤ 1, k(x) is continuous at x = 0, and ∫_{−∞}^{∞} k²(x) dx < ∞, and M is the bandwidth parameter. Notice that Ω̂ is the usual long run variance estimator that is obtained after simply setting v̂*_t = 0 for any dates for which data is missing. This can be seen mechanically by noting that v̂*_t = x*_t û*_t = a_t x_t û*_t. Using well known algebra, we can rewrite Ω̂ as

Ω̂ = T⁻¹ Σ_{t=1}^T Σ_{s=1}^T k((t − s)/M) v̂*_t v̂*_s'. (2.3)

Because Ω̂ is computed using the AM series, the time distances, |t − s|, between observed data points are preserved, which is conceptually sensible. In addition, Ω̂ will be positive semi-definite with appropriate choices of the kernel function, e.g. the Bartlett, Parzen or quadratic spectral (QS) kernels.

Suppose we are interested in testing the null hypothesis H_0: r(β_0) = 0 against H_A: r(β_0) ≠ 0, where r(β) is a q × 1 vector (q ≤ k) of continuously differentiable functions with first derivative matrix R(β) = ∂r(β)/∂β'. We analyze the following Wald statistic,

W_T = T r(β̂)' [R(β̂) Q̂*⁻¹ Ω̂ Q̂*⁻¹ R(β̂)']⁻¹ r(β̂),

where Q̂* = T⁻¹ Σ_{t=1}^T x*_t x*_t'. In the case where one restriction is being tested, q = 1, we can also use a t-statistic of the form

t_T = √T r(β̂) / √(R(β̂) Q̂*⁻¹ Ω̂ Q̂*⁻¹ R(β̂)').
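For concreteness, the estimator (2.3) with the Bartlett kernel can be computed directly from the zero-filled AM series. The following sketch is ours, not code from the dissertation; the function name and the simulated design below it are illustrative assumptions:

```python
import numpy as np

def bartlett_hac_am(x, y, miss, M):
    """OLS and Bartlett-kernel HAC long run variance from an amplitude
    modulated (AM) series: zeros are plugged in for missing dates, so the
    time distances |t - s| between observations are preserved."""
    T = len(y)
    a = (~miss).astype(float)            # a_t = 1 if observed, 0 if missing
    xs = x * a[:, None]                  # x*_t = a_t x_t
    ys = y * a                           # y*_t = a_t y_t
    Q = xs.T @ xs / T                    # Q-hat* = T^-1 sum x*_t x*_t'
    beta = np.linalg.solve(xs.T @ xs, xs.T @ ys)
    # v-hat*_t = x*_t u-hat*_t; automatically zero at missing dates
    v = xs * (ys - xs @ beta)[:, None]
    # Omega-hat = Gamma_0 + sum_{j=1}^{M-1} (1 - j/M)(Gamma_j + Gamma_j')
    omega = v.T @ v / T
    for j in range(1, M):
        w = 1.0 - j / M                  # Bartlett kernel weight k(j/M)
        gamma = v[j:].T @ v[:-j] / T     # sample autocovariance Gamma-hat_j
        omega += w * (gamma + gamma.T)
    return beta, Q, omega
```

A t-statistic for a single coefficient then follows the formula in the text, e.g. `np.sqrt(T) * beta[1] / np.sqrt((np.linalg.inv(Q) @ omega @ np.linalg.inv(Q))[1, 1])` for the second regressor.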
2.3 ASSUMPTIONS AND ASYMPTOTIC THEORY

In this section we derive the asymptotic behavior of the OLS estimator, β̂, the HAC estimator, Ω̂, and the HAC robust Wald test, W_T, defined in Section 2.2 for the case of weakly dependent covariance stationary time series. We present results for both random and non-random missing processes. Results for random and non-random missing processes are treated separately as they require different regularity conditions. We state results for the random missing process followed by results for the non-random missing process. Although we discuss traditional asymptotic theory for HAC robust tests based on consistency of the HAC estimators, we are mainly interested in obtaining fixed-b asymptotic approximations as proposed by Kiefer and Vogelsang (2005). In the fixed-b asymptotic framework the bandwidth of the covariance matrix estimator is modeled as a fixed proportion, b, of the sample size. This is in contrast to the traditional approach where the bandwidth is modeled as increasing slower than the sample size. The advantage of the fixed-b approach is that the resulting asymptotic approximations for the test statistics depend on the choice of kernel and bandwidth. In the traditional approach the kernel and bandwidth choice do not appear in the asymptotic approximation. The fixed-b approach is therefore more accurate than the traditional approach because the fixed-b approach captures much of the impact of the sampling distribution of the HAC estimator on the test statistic. For theoretical evidence on the superior performance of the fixed-b approach see Jansson (2002), Sun, Phillips, and Jin (2008), and Gonçalves and Vogelsang (2011).

Because the fixed-b asymptotic distributions depend on the kernels used to compute the HAC estimators, some random matrices that appear in the asymptotic results need to be defined. Here we follow the notation of Kiefer and Vogelsang (2005).

Definition 1. Let h > 0 be an integer and B_h(r) denote a generic h × 1 vector of stochastic processes. Let the random matrix, P(b, B_h), be defined as follows for b ∈ (0,1].
Case (i): if k(x) is twice continuously differentiable everywhere,

P(b, B_h) = −∫_0^1 ∫_0^1 (1/b²) k''((r − s)/b) B_h(r) B_h(s)' dr ds.

Case (ii): if k(x) is continuous, k(x) = 0 for |x| ≥ 1, and k(x) is twice continuously differentiable everywhere except for |x| = 1,

P(b, B_h) = −∬_{|r−s|<b} (1/b²) k''((r − s)/b) B_h(r) B_h(s)' dr ds + (k'_−(1)/b) ∫_0^{1−b} [B_h(r + b) B_h(r)' + B_h(r) B_h(r + b)'] dr,

where k'_−(1) = lim_{h→0} [(k(1) − k(1 − h))/h].

Case (iii): if k(x) is the Bartlett kernel,

P(b, B_h) = (2/b) ∫_0^1 B_h(r) B_h(r)' dr − (1/b) ∫_0^{1−b} [B_h(r + b) B_h(r)' + B_h(r) B_h(r + b)'] dr.

Throughout, the symbol "⇒" denotes weak convergence of a sequence of stochastic processes to a limiting stochastic process and "→p" denotes convergence in probability. We also use the following notation. Let Q = E(x_t x_t') and Q* = E(x*_t x*_t'). Let v_t = x_t u_t and Ω = ΛΛ' = Γ_0 + Σ_{j=1}^∞ (Γ_j + Γ_j'), where Γ_j = E(v_t v_{t−j}') and Λ is the lower triangular matrix based on the Cholesky decomposition of Ω. Similarly, let v*_t = a_t v_t and Ω* = Λ*Λ*' = Γ*_0 + Σ_{j=1}^∞ (Γ*_j + Γ*_j'), where Γ*_j = E(v*_t v*_{t−j}') and Λ* is the lower triangular matrix based on the Cholesky decomposition of Ω*. The matrix Ω is the long run variance matrix of the latent vector v_t whereas Ω* is the long run variance matrix of the AM series vector v*_t.

We derive results under the assumptions that the latent processes are near epoch dependent (NED) on some underlying mixing process and that the missing process is strong mixing. We follow the definitions in Davidson (2002). Let the L_p norm of x be defined as ‖x‖_p = (E|x|^p)^{1/p}. Also, let |·| denote the Euclidean norm of the corresponding vector or matrix. For a stochastic sequence {ε_t}_{−∞}^{∞} on a probability space (Ω, F, P), let F_{t−m}^{t+m} = σ(ε_{t−m}, ..., ε_{t+m}), such that {F_{t−m}^{t+m}}_{m=0}^∞ is an increasing sequence of σ-fields. We say that a sequence of integrable random variables {w_t}_{−∞}^∞ is L_p-NED on {ε_t}_{−∞}^∞ if, for p > 0, ‖w_t − E(w_t | F_{t−m}^{t+m})‖_p ≤ d_t ν_m, where ν_m → 0 and {d_t}_{−∞}^∞ is a sequence of positive constants. For a sequence {a_t}_{−∞}^∞, let F_{−∞}^t = σ(..., a_{t−1}, a_t), and similarly F_{t+m}^∞ = σ(a_{t+m}, a_{t+m+1}, ...). The sequence is said to be α-mixing if lim_{m→∞} α_m = 0, where α_m = sup_t sup_{G ∈ F_{−∞}^t, H ∈ F_{t+m}^∞} |P(G ∩ H) − P(G)P(H)|. A sequence is α-mixing of size −ψ_0 if α_m = O(m^{−ψ}) for some ψ > ψ_0. Similarly, a sequence is L_p-NED of size −φ_0 if ν_m = O(m^{−φ}) for some φ > φ_0.

2.3.1 Random Missing Process

When the missing process is random, the asymptotic theory is driven by the observed AM series. If the AM series satisfies the conditions required for fixed-b asymptotic theory, then the HAC estimator and the robust Wald test of the null hypothesis follow the usual fixed-b asymptotic limits as obtained by Kiefer and Vogelsang (2005). The following high-level assumptions are sufficient for this purpose.

Assumption R.
1. T⁻¹ Σ_{t=1}^{[rT]} x*_t x*_t' ⇒ rQ*, ∀ r ∈ [0,1].
2. T^{−1/2} Σ_{t=1}^{[rT]} v*_t ⇒ Λ* W_k(r), ∀ r ∈ [0,1].

Assumption R1 states that a uniform (in r) law of large numbers (LLN) holds for {x*_t x*_t'}. As long as {x*_t} is covariance stationary and weakly dependent, this assumption is a fairly general condition. Assumption R2 states that a functional central limit theorem (FCLT) holds for the normalized partial sum of the AM series {v*_t}. Below, Assumption R', which is in terms of the latent process and the missing process rather than the AM series itself, is sufficient for Assumption R.

Assumption R'.
1. For some r > 2, ‖x_t‖_{2r} ≤ D < ∞ for all t = 1, 2, ....
2. {x_t} is a weakly stationary sequence L_2-NED on {ε_t} with NED coefficients of size −2(r − 1)/(r − 2).
3. ‖v_t‖_r ≤ D < ∞ and E(v_t) = 0 for all t = 1, 2, ....
4. {v_t} is a mean zero weakly stationary sequence L_2-NED on {ε_t} with NED coefficients of size −1/2.
5. {(a_t, ε_t)} is an α-mixing sequence with α_m of size −2r/(r − 2).
6. {a_t} is a weakly stationary process that is independent of {x_t, u_t}.
7. Ω* = lim_{T→∞} Var(T^{−1/2} Σ_{t=1}^T a_t v_t) is positive definite.

Under Assumption R', the latent process satisfies conditions sufficient for the fixed-b asymptotic theory to go through. In particular, under Assumption R', for all r ∈ (0,1], T⁻¹ Σ_{t=1}^{[rT]} x*_t x*_t' ⇒ rQ* and T^{−1/2} Σ_{t=1}^{[rT]} v*_t ⇒ Λ* W_k(r). In terms of the latent process, Assumptions R' are relatively weak. For example, Phillips and Durlauf (1986) states that if {v_t} is an L_{2+δ} bounded stationary process (δ > 0) and strong mixing, then the partial sums of {v_t} satisfy the FCLT. The L_2-NED condition in Assumption R'4 is actually weaker than this condition. Hence, the presence of the missing observations generally does not require additional assumptions on the latent process. Assumption R'6 is relatively strong and states that the missing locations are not related to the latent process. This assumption is sufficient for β̂ to be consistent for β because it implies that if E(x_t u_t) = 0, then E(x*_t u*_t) = E(a_t x_t u_t) = E(a_t)E(x_t u_t) = 0. In addition, Assumptions R'5 and R'6 ensure that the LLN and FCLT that hold for the latent processes extend to the observed AM series, i.e. Assumptions R'5 and R'6 ensure that Assumption R holds.

With these assumptions we can state our main result for the estimator and statistics based on the AM series when the missing process is random.

Theorem 2.1. Under Assumption R' the following hold as T → ∞.
(a) (Asymptotic behavior of OLS) √T(β̂ − β) ⇒ Q*⁻¹ Λ* W_k(1) = N(0, Q*⁻¹ Ω* Q*⁻¹).
(b) (Fixed-b approximation of HAC estimator) Let B̃_k(r) denote a k × 1 vector of stochastic processes defined as B̃_k(r) = W_k(r) − r W_k(1), for all r ∈ (0,1]. Assume M = bT where b ∈ (0,1] is fixed. Then Ω̂ ⇒ Λ* P(b, B̃_k) Λ*', where the form of P(b, B̃_k) depends on the type of kernel via Definition 1.
(c) (Fixed-b asymptotic distribution of tests) Under H_0, W_T ⇒ W_q(1)' P(b, B̃_q)⁻¹ W_q(1), and when q = 1, t_T ⇒ W_1(1)/√(P(b, B̃_1)).
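The limiting distributions in Theorem 2.1(c) are non-standard but straightforward to simulate. As an illustration (ours, not from the dissertation), the two-sided 5% fixed-b critical value for t_T with the Bartlett kernel can be approximated by discretizing the Brownian motion and bridge in Definition 1, Case (iii); the grid size and replication count below are illustrative choices:

```python
import numpy as np

def fixed_b_tcrit(b, n=1000, reps=5000, level=0.95, seed=1):
    """Simulate fixed-b critical values for the t statistic with the
    Bartlett kernel: t => W(1)/sqrt(P(b, Btilde)), where
    Btilde(r) = W(r) - r W(1) and, in the scalar Bartlett case,
    P(b, B) = (2/b) int_0^1 B(r)^2 dr
              - (2/b) int_0^{1-b} B(r+b) B(r) dr.
    level=0.95 applied to |t| gives the 5% two-sided critical value."""
    rng = np.random.default_rng(seed)
    h = max(1, int(round(b * n)))            # discretized lag b*n
    tstats = np.empty(reps)
    r = np.arange(1, n + 1) / n
    for i in range(reps):
        W = np.cumsum(rng.standard_normal(n)) / np.sqrt(n)  # Brownian motion
        B = W - r * W[-1]                                   # Brownian bridge
        P = (2.0 / b) * np.mean(B * B)
        P -= (2.0 / b) * np.sum(B[h:] * B[:n - h]) / n
        tstats[i] = W[-1] / np.sqrt(P)
    return np.quantile(np.abs(tstats), level)
```

Consistent with the discussion below, the resulting critical values grow with b and exceed the standard normal value of 1.96 for moderate b.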
Because Assumption R' implies Assumption R, Theorem 2.1 directly follows from Kiefer and Vogelsang (2005), although directly establishing Theorem 2.1(a) is easy. If we plug y*_t = x*_t'β + u*_t into the OLS estimator β̂, then β̂ = β + (Σ_{t=1}^T x*_t x*_t')⁻¹ Σ_{t=1}^T v*_t. Therefore we can write

√T(β̂ − β) = (T⁻¹ Σ_{t=1}^T x*_t x*_t')⁻¹ T^{−1/2} Σ_{t=1}^T v*_t,

and the limit is obtained by using Assumption R with r = 1. While the fixed-b approximation is more useful than the traditional result that relies on a consistency result for Ω̂, one could easily obtain traditional results for the Wald and t-statistics under similar regularity conditions.²

Theorem 2.1 shows that when the missing process is random, one can simply plug in zeros for the missing observations and conduct standard fixed-b inference treating the zeros as though they were observed data. Given a particular sample with T time periods of data (including the zeros), rejections would be computed relative to fixed-b critical values obtained by Kiefer and Vogelsang (2005). The critical values are functions of the kernel and the value of b = M/T where M is the bandwidth used to compute Ω̂.

The fixed-b asymptotic distributions in Kiefer and Vogelsang (2005) are non-standard. While it is relatively easy to simulate from the asymptotic distributions, more user-friendly methods are available for the computation of critical values and p-values. For the case of the Bartlett kernel, Vogelsang (2012) has developed a numerical method for the easy computation of standard fixed-b critical values and p-values for any significance level. For other kernels comprehensive numerical approaches have not been developed. Kiefer and Vogelsang (2005) do provide critical value functions for popular significance levels but their functions do not allow the computation of p-values. A good alternative for the computation of fixed-b critical values and p-values is the bootstrap. Gonçalves and Vogelsang (2011) showed that the naive moving block bootstrap has the same limiting distribution as the fixed-b asymptotic distribution under regularity conditions similar to those used here. The bootstrap works with both a fixed block length (l) or a block length that increases with the sample size but at a slower rate (l²/T → 0). In particular, for the case of l = 1 the block bootstrap becomes an i.i.d. bootstrap. Therefore, the results of Gonçalves and Vogelsang (2011) indicate that valid fixed-b critical values can be obtained via a simple i.i.d. bootstrap method.

² Consistency of Ω̂ requires a slightly stronger assumption than Assumption R'. For example, Andrews (1991) requires {v*_t} to be a fourth order stationary process, and Hansen (1992) requires {v*_t} to be mixing of size −(2 + δ)(r + δ)/(2(r − 2)). Assumption R'' in this section is sufficient for Hansen (1992). See Appendix B for the proof.
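As a concrete illustration of the l = 1 case, the following sketch (ours, written for a scalar location model for brevity) resamples the AM series with replacement, zeros included, and recomputes the same fixed-b t statistic centered at the original estimate:

```python
import numpy as np

def bartlett_lrv(v, M):
    # Bartlett-kernel long run variance estimate of a scalar series
    T = len(v)
    om = v @ v / T
    for j in range(1, M):
        om += 2.0 * (1.0 - j / M) * (v[j:] @ v[:-j]) / T
    return om

def iid_bootstrap_crit(y, M, reps=999, level=0.95, seed=0):
    """Naive i.i.d. bootstrap (block length l = 1) critical value for the
    fixed-b t statistic on the mean of an AM series y (zeros stand in for
    missing dates): resample y with replacement, recompute the identical
    t statistic, and center it at the original sample mean."""
    rng = np.random.default_rng(seed)
    T = len(y)
    bhat = y.mean()
    ts = np.empty(reps)
    for i in range(reps):
        ystar = rng.choice(y, size=T, replace=True)   # zeros are resampled too
        bstar = ystar.mean()
        se = np.sqrt(bartlett_lrv(ystar - bstar, M) / T)
        ts[i] = (bstar - bhat) / se
    return np.quantile(np.abs(ts), level)
```

Because the fixed-b limit is pivotal, the bootstrap distribution of the recomputed statistic mimics the fixed-b limit of the original statistic even though the resampled series is serially independent.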
As shown in the next subsection, the fixed-$b$ limit of the robust statistics becomes more complicated under the assumption that the missing locations are non-random. In this case the bootstrap becomes the ideal tool for obtaining fixed-$b$ critical values on a case by case basis in practice. Therefore, it is useful to provide some details on the implementation of the bootstrap. Define the vector $w_t = (y_t, x_t')'$ that collects the dependent and explanatory variables. Let $l \in \mathbb{N}$ ($1 \le l \le T$) be a block length and let $B_{t,l} = \{w_t, w_{t+1}, \ldots, w_{t+l-1}\}$ be the block of $l$ consecutive observations starting at $w_t$. Draw $k_0 = T/l$ blocks randomly with replacement from the set of overlapping blocks $\{B_{1,l}, \ldots, B_{T-l+1,l}\}$ to obtain a bootstrap resample denoted as $w_t^* = (y_t^*, x_t^{*\prime})'$, $t = 1, \ldots, T$. Notice that we are resampling from the AM series (the zeros are included). The bootstrap test statistics, $W_T^*$ and $t_T^*$, are defined as
$$W_T^* = T\left(r(\hat\beta^*) - r(\hat\beta)\right)'\left[R(\hat\beta^*)\hat{Q}^{*-1}\hat\Omega^*\hat{Q}^{*-1}R(\hat\beta^*)'\right]^{-1}\left(r(\hat\beta^*) - r(\hat\beta)\right)$$
and
$$t_T^* = \frac{\sqrt{T}\left(r(\hat\beta^*) - r(\hat\beta)\right)}{\sqrt{R(\hat\beta^*)\hat{Q}^{*-1}\hat\Omega^*\hat{Q}^{*-1}R(\hat\beta^*)'}},$$
where $\hat{Q}^* = T^{-1}\sum_{t=1}^T x_t^* x_t^{*\prime}$, $\hat\Omega^* = T^{-1}\sum_{t=1}^T\sum_{s=1}^T k\left(\frac{t-s}{M}\right)\hat{v}_t^*\hat{v}_s^{*\prime}$, $\hat\beta^*$ is the OLS estimate from the regression of $y_t^*$ on $x_t^*$, and $\hat{v}_t^* = x_t^*(y_t^* - x_t^{*\prime}\hat\beta^*)$.

Notice that the bootstrap statistics use the same formulas as $W_T$ and $t_T$, and this is what makes this bootstrap approach "naive". Let $p^*$ denote the probability measure induced by the bootstrap resampling conditional on a realization of the original time series. If $T^{-1}\sum_{t=1}^{[rT]} x_t^* x_t^{*\prime} \Rightarrow_{p^*} rQ^*$ for some $Q^*$ and $T^{-1/2}\sum_{t=1}^{[rT]} v_t^* \Rightarrow_{p^*} \Lambda^* W_k(r)$ for some $\Lambda^*$, then, because the fixed-$b$ asymptotic distribution of the Wald test statistic is pivotal, the limiting distribution of $W_T^*$ coincides with the limiting distribution of $W_T$, independently of $\Lambda^*$ and $Q^*$. We show that strengthening Assumption R$'$3-5 to R$''$3-5 is sufficient for this purpose.

Assumption R$''$.
3. $\|v_t\|_{r+\delta} < \infty$, $r > 2$.
4. $\{v_t\}$ is a weakly stationary $L_{2+\delta}$-NED process on $\{\varepsilon_t\}$ with NED coefficients $\nu_m$ of size $-1$.
5. $\{(a_t, \varepsilon_t)\}$ is an $\alpha$-mixing sequence with $\alpha_m$ of size $-\frac{(2+\delta)(r+\delta)}{r-2}$.
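For concreteness, the block resampling step described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the code used in the simulations; the helper name `mbb_resample` is ours and numpy is assumed.

```python
import numpy as np

def mbb_resample(w, l, rng):
    """One naive moving block bootstrap resample of the AM series w.

    w   : (T, k) array stacking w_t = (y_t, x_t')'; zeros at missing
          dates are treated like any other observation.
    l   : block length; l = 1 reduces to the i.i.d. bootstrap.
    rng : numpy random Generator.
    """
    T = w.shape[0]
    k0 = int(np.ceil(T / l))                    # draw k0 = T / l blocks
    starts = rng.integers(0, T - l + 1, size=k0)
    # overlapping blocks B_{t,l} = {w_t, ..., w_{t+l-1}}, with replacement
    resample = np.concatenate([w[s:s + l] for s in starts], axis=0)
    return resample[:T]                         # keep exactly T periods

rng = np.random.default_rng(0)
w = np.arange(20.0).reshape(10, 2)
w_star = mbb_resample(w, 2, rng)
```

The bootstrap statistics $W_T^*$ and $t_T^*$ are then computed from the resample with exactly the same formulas as $W_T$ and $t_T$.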
This strengthening is necessary for bootstrap resamples to satisfy the conditions required for the FCLT. Also note that, except for the assumptions related to the missing process, the other assumptions are identical to those in Gonçalves and Vogelsang (2011), which implies that the existence of missing observations does not change the assumptions required of the latent process for the bootstrap to provide valid critical values. (See Gonçalves and Vogelsang (2011, p. 764-766) for details.) Hence, in general, as long as the missing process satisfies the strong mixing condition, the naive moving block bootstrap provides valid critical values. We formally state the result below. Proofs are provided in Appendix B.

Theorem 2.2. Let $W_T^*$ and $t_T^*$ be naive bootstrap test statistics obtained from the moving block bootstrap resamples. Suppose that the block size $l$ is either fixed as $T \to \infty$, or $l \to \infty$ as $T \to \infty$ such that $l^2/T \to 0$. Let $b \in (0,1]$ be fixed and suppose $M = bT$. Then, under Assumption R$'$ with Assumption R$'$3-5 strengthened to Assumption R$''$3-5, as $T \to \infty$,
$$W_T^* \Rightarrow_{p^*} W_q(1)'\, P\left(b, \tilde{B}_q(r)\right)^{-1} W_q(1) \quad\text{and}\quad t_T^* \Rightarrow_{p^*} \frac{W_1(1)}{\sqrt{P\left(b, \tilde{B}_1(r)\right)}}.$$

2.3.2 Non-random Missing Process

When the missing process is non-random, missing locations are fixed and hence the asymptotic behavior of the estimators and statistics depends on the locations of the missing observations. We first define the structure of the timing of the missing observations.

Definition 2. We characterize an arbitrary data set with missing observations as follows. From $t = 1$ to $t = T_1$ we observe data, from $t = T_1 + 1$ to $t = T_2$ data are missing, from $t = T_2 + 1$ to $t = T_3$ we observe data, and so forth. Let the number of missing clusters be $C < \infty$. For simplicity, we assume that data are observed at $t = 1$ and $t = T$.³ Thus, in general, for $n = 1, \ldots, C$, from $t = T_{2n-1} + 1$ to $t = T_{2n}$ data are missing, whereas from $t = T_{2n} + 1$ to $t = T_{2n+1}$ data are observed (see Figure 2.1). For notational purposes, let $T_0 = 0$ and $T_{2C+1} = T$.

³ This assumption is only for notational simplicity. The results of this chapter go through without this assumption.

Figure 2.1: Data with missing observations (observed from $t = 1$ to $t = T_1$; missing from $t = T_1 + 1$ to $t = T_2$; observed from $t = T_2 + 1$ to $t = T_3$; $\ldots$; observed from $t = T_{2C} + 1$ to $t = T$).

When the missing process is non-random, the asymptotic theory is driven by the latent process. This is because the latent process is the only random process, and what matters is whether the latent process satisfies the conditions required for fixed-$b$ asymptotic theory. The following assumptions are sufficient for us to obtain a fixed-$b$ result.

Assumption NR.
1. The missing/observed cutoffs satisfy $\lim_{T\to\infty} \frac{T_n}{T} = \lambda_n$, $n = 0, 1, \ldots, 2C+1$, where the number of cutoffs is non-random and finite, i.e., $C < \infty$.
2. $T^{-1}\sum_{t=1}^{[rT]} x_t x_t' \Rightarrow rQ$, $\forall r \in [0,1]$.
3. $T^{-1/2}\sum_{t=1}^{[rT]} v_t \Rightarrow \Lambda W_k(r)$, $\forall r \in [0,1]$.

Assumption NR1 treats the number of observations in a missing or observed block as a fixed proportion of the sample size, with the number of missing blocks also fixed. This is not meant to be a description of the way data is gathered but is simply a natural mathematical tool for obtaining approximations that depend on the locations of the missing and observed data. The total number of observed time periods is given by $\sum_{t=1}^T a_t$ and, using Assumption NR1, we can quantify the proportion of the time periods that have observed data as
$$\lambda = \lim_{T\to\infty} T^{-1}\sum_{t=1}^T a_t = \sum_{i=1}^{2C+1} (-1)^{i+1}\lambda_i. \qquad (2.4)$$

Assumption NR2 states that a uniform (in $r$) LLN holds for $\{x_t x_t'\}$. Assumption NR3 states that the FCLT holds for the scaled partial sums of $\{v_t\}$. We now state more primitive conditions that are sufficient for Assumptions NR to hold:

Assumption NR$'$.
1. For some $r > 2$, $\|x_t\|_{2r} \le D < \infty$ for all $t = 1, 2, \ldots$.
2. $\{x_t\}$ is a weakly stationary sequence $L_2$-NED on $\{\varepsilon_t\}$ with NED coefficients of size $-\frac{2(r-1)}{r-2}$.
3. $\|v_t\|_r \le D < \infty$ and $E(v_t) = 0$ for all $t = 1, 2, \ldots$.
4. $\{v_t\}$ is a mean zero weakly stationary sequence $L_2$-NED on $\{\varepsilon_t\}$ with NED coefficients of size $-\frac{1}{2}$.
5. $\{\varepsilon_t\}$ is an $\alpha$-mixing sequence with $\alpha$-mixing coefficients of size $-\frac{2r}{r-2}$.
6. $\{a_t\}$ is a non-random process.
7. $\Omega = \lim_{T\to\infty} \mathrm{Var}\left(T^{-1/2}\sum_{t=1}^T v_t\right)$ is positive definite.

Assumption NR$'$ is the same as Assumption R$'$ except for the properties related to the missing process $\{a_t\}$. Recalling that, in terms of the latent process, all that Assumption R$'$ required was the set of conditions sufficient for the latent process to satisfy fixed-$b$ asymptotic theory, this is natural.
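Equation (2.4) is easy to operationalize for a given indicator sequence $\{a_t\}$. The following Python sketch (numpy assumed; the function name `missing_cutoffs` is ours, not the chapter's) recovers the cutoff fractions $\{\lambda_i\}$ and checks the alternating-sum identity in (2.4):

```python
import numpy as np

def missing_cutoffs(a):
    """Given a 0/1 indicator a_t with a_1 = a_T = 1, return the cutoff
    fractions (lambda_1, ..., lambda_{2C+1}) = (T_1/T, ..., T_{2C+1}/T)
    and the observed fraction lambda = T^{-1} sum_t a_t."""
    a = np.asarray(a)
    T = len(a)
    # cutoff dates T_1, ..., T_2C: where observed/missing status switches
    switches = np.flatnonzero(np.diff(a) != 0) + 1
    lams = np.append(switches, T) / T            # T_{2C+1} = T
    lam = a.mean()
    # alternating-sum identity of equation (2.4)
    signs = (-1.0) ** np.arange(len(lams))       # +1, -1, +1, ...
    assert np.isclose(lam, np.sum(signs * lams))
    return lams, lam
```

For example, a series observed in clusters $\{1,2,3\}$, $\{6,7,8\}$, $\{10\}$ out of $T = 10$ has $C = 2$, cutoff fractions $(0.3, 0.5, 0.8, 0.9, 1.0)$, and $\lambda = 0.7$.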
We now state our main results when the missing process is non-random. Note that for two numbers $r$ and $s$, $r \wedge s$ denotes the minimum of $r$ and $s$. The proof of Theorem 2.3 is given in Appendix C.

Theorem 2.3. Let $\widetilde{W}_k \equiv \sum_{j=1}^{2C+1}(-1)^{j+1} W_k(\lambda_j)$ and let $B_k(r, \{\lambda_i\})$ be a $k \times 1$ vector of stochastic processes defined as
$$B_k(r, \{\lambda_i\}) \equiv \sum_{n=0}^{C} \mathbf{1}\left\{\lambda_{2n} < r \le \lambda_{2(n+1)}\right\} \sum_{j=1}^{2n+1} (-1)^{j+1}\left[W_k(r \wedge \lambda_j) - (r \wedge \lambda_j)\lambda^{-1}\widetilde{W}_k\right],$$
for $r \in (0,1]$. Under Assumption NR$'$, as $T \to \infty$,

(a). (Asymptotic behavior of $\hat\beta$)
$$\sqrt{T}\left(\hat\beta - \beta\right) \Rightarrow \lambda^{-1} Q^{-1}\Lambda\widetilde{W}_k = N\left(0,\, \lambda^{-1} Q^{-1}\Omega Q^{-1}\right),$$

(b). (Fixed-$b$ asymptotic approximation of $\hat\Omega$) Assume $M = bT$ where $b \in (0,1]$ is fixed; then
$$\hat\Omega \Rightarrow \Lambda P\left(b, B_k(\{\lambda_i\})\right)\Lambda',$$

(c). (Fixed-$b$ asymptotic distribution of $W_T$) Under $H_0$,
$$W_T \Rightarrow \widetilde{W}_q'\, P\left(b, B_q(\{\lambda_i\})\right)^{-1}\widetilde{W}_q, \quad\text{and when } q = 1,\quad t_T \Rightarrow \frac{\widetilde{W}_1}{\sqrt{P\left(b, B_1(\{\lambda_i\})\right)}}.$$

Using the asymptotic normality result in Theorem 2.3(a), one could pursue a traditional inference approach, which would require a consistent estimator of the asymptotic variance. The challenge would be constructing a consistent estimator of the latent process long run variance matrix, $\Omega$. Using results from Dunsmuir and Robinson (1981), a consistent estimator of $\Omega$ can be constructed as
$$\tilde\Omega = \tilde\Gamma_0 + \sum_{j=1}^{T-1} k\left(\frac{j}{M}\right)\left(\tilde\Gamma_j + \tilde\Gamma_j'\right), \quad\text{where}\quad \tilde\Gamma_j = \frac{\sum_{t=j+1}^T \hat{v}_t\hat{v}_{t-j}'}{\sum_{t=j+1}^T a_t a_{t-j}}.$$
Because $\tilde\Gamma_j$ is constructed using the effective sample size of the sequence $\{\hat{v}_t\hat{v}_{t-j}\}$, there is no guarantee that $\tilde\Omega$ will be positive semi-definite even if kernels like the Bartlett, Parzen and QS are used. Besides only providing a relatively crude approximation for test statistics, the difficulty of constructing a positive semi-definite estimator of $\Omega$ makes the traditional approach even less appealing in practice.
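To see why the Dunsmuir and Robinson (1981) approach is delicate in practice, consider a scalar sketch (illustrative Python with numpy assumed; `dr_lrv` is our label, not notation from the chapter): each autocovariance is rescaled by its own effective number of observed pairs, which is exactly what breaks the usual positive semi-definiteness guarantee of the kernel estimator.

```python
import numpy as np

def bartlett(x):
    return max(0.0, 1.0 - abs(x))

def dr_lrv(vhat, a, M, kernel=bartlett):
    """Scalar Dunsmuir-Robinson style long-run variance estimator.
    vhat : AM residual series (zeros at missing dates).
    a    : 0/1 missing indicator.
    Each Gamma_j is divided by the number of observed (t, t-j) pairs,
    so the weighted sum need not be positive semi-definite."""
    T = len(vhat)
    def gamma(j):
        num = float(np.dot(vhat[j:], vhat[:T - j]))
        den = float(np.dot(a[j:], a[:T - j]))
        return num / den if den > 0 else 0.0
    omega = gamma(0)
    for j in range(1, T):
        w = kernel(j / M)
        if w > 0.0:
            omega += 2.0 * w * gamma(j)   # k(j/M) * (Gamma_j + Gamma_j')
    return omega
```

Even in a fully observed toy series the estimate can be driven to the boundary of positivity once negatively correlated lags receive full effective-sample weight, which is the difficulty the text describes.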
In contrast, the fixed-$b$ approach shows that one can simply use $\hat\Omega$ to construct valid test statistics because, under fixed-$b$ theory, $\hat\Omega$ is asymptotically proportional to $\Omega$ when the locations of missing data are non-random. Even though $\hat\Omega$ is not a consistent estimator of $\Omega$, it can still be used to construct test statistics because fixed-$b$ theory shows that $\hat\Omega$ scales out $\Omega$. Looking closely at the result given by Theorem 2.3(b), we see that the fixed-$b$ limit of $\hat\Omega$ is similar to, but noticeably different from, the limit obtained for the case of missing at random. The stochastic process $B_k(r, \{\lambda_i\})$ is different from the Brownian bridge $\tilde{B}(r)$ and depends on the locations of the missing/observed data. Therefore, critical values for the limiting random variables given by Theorem 2.3(c) are different from the critical values given by the standard fixed-$b$ limits in Theorem 2.1(c).

Given the locations of the missing data, the non-standard distribution in Theorem 2.3(c) can be computed by simulation methods because the limiting distributions are still functions of Brownian motions. Although this method is feasible, it can be practically inconvenient because asymptotic critical values would have to be simulated on a case by case basis depending on the locations of the missing data. In this situation the bootstrap is a more convenient method for obtaining fixed-$b$ critical values. Because the locations of missing data are treated as non-random, we need the bootstrap resampling scheme to preserve the missing locations. This means that blocking is not practical because blocks will shuffle the locations of the missing data upon resampling. Instead, the $i.i.d.$ bootstrap is more appropriate, where bootstrap samples are created by sampling with replacement from the observed data and creating a bootstrap sample with the same missing locations as the original data. The details are as follows.
Define $w_t = (y_t, x_t')'$, $t = 1, \ldots, T$, the vector that collects the dependent and independent variables of the AM series. Among those $T$ observations, collect only the observed data, which we denote $\tilde{w}_t$, $t = 1, \ldots, \tilde{T}$, $\tilde{T} = \sum_{t=1}^T a_t$. Resample $\tilde{T}$ observations with replacement from $\tilde{w}_t$ to get a bootstrap resample, which we denote $\tilde{w}_t^*$, $t = 1, \ldots, \tilde{T}$. Fill in the observed locations with the resampled data $\tilde{w}_t^*$ and leave the missing locations as zeros. This way we construct an $i.i.d.$ resample with missing locations fixed. Denote this $i.i.d.$ resample as $w_t^* = (y_t^*, x_t^{*\prime})'$, $t = 1, \ldots, T$. The naive bootstrap test statistics $W_T^*$ and $t_T^*$ are computed as
$$W_T^* = T\left(r(\hat\beta^*) - r(\hat\beta)\right)'\left[R(\hat\beta^*)\hat{Q}^{*-1}\hat\Omega^*\hat{Q}^{*-1}R(\hat\beta^*)'\right]^{-1}\left(r(\hat\beta^*) - r(\hat\beta)\right)$$
and
$$t_T^* = \frac{\sqrt{T}\left(r(\hat\beta^*) - r(\hat\beta)\right)}{\sqrt{R(\hat\beta^*)\hat{Q}^{*-1}\hat\Omega^*\hat{Q}^{*-1}R(\hat\beta^*)'}},$$
where $\hat\beta^*$ is the OLS estimate from the regression of $y_t^*$ on $x_t^*$, $\hat{Q}^* = T^{-1}\sum_{t=1}^T x_t^* x_t^{*\prime}$, and $\hat\Omega^* = T^{-1}\sum_{t=1}^T\sum_{s=1}^T k\left(\frac{t-s}{M}\right)\hat{v}_t^*\hat{v}_s^{*\prime}$, where $\hat{v}_t^* = x_t^*(y_t^* - x_t^{*\prime}\hat\beta^*)$.

Because we resample from observed time periods only, this resampling can be thought of as resampling from the latent process $w_t \equiv (y_t, x_t')'$. We do not know the value of $w_t$ when $a_t = 0$, and thus we are resampling from $\tilde{T}$ observations, not the full number of time periods $T$. However, because the resampling is based on $i.i.d.$ draws, this bootstrap resample has essentially the same properties as an $i.i.d.$ resample of the latent process. We could take another $T - \tilde{T}$ independent draws from $\tilde{w}_t$ and fill in the missing locations of $\tilde{w}_t^*$. Call this resample $w_t^{**}$. Then by construction $w_t^* = a_t w_t^{**}$, where $w_t^{**}$ can be viewed as a sample from the latent process given the $i.i.d.$ resampling. If the bootstrap process $w_t^{**}$ satisfies (a) $T^{-1}\sum_{t=1}^{[rT]} x_t^{**} x_t^{**\prime} \Rightarrow_{p^*} rQ^*$ for some $Q^*$ and (b) $T^{-1/2}\sum_{t=1}^{[rT]} v_t^{**} \Rightarrow_{p^*} \Lambda^* W_k(r)$ for some $\Lambda^*$, then using Theorem 2.3(c) it follows that
$$W_T^* \Rightarrow_{p^*} \widetilde{W}_q^{*\prime}\, P\left(b, B_q^*(\{\lambda_i^*\})\right)^{-1}\widetilde{W}_q^*,$$
with $\widetilde{W}_q^* = \sum_{j=1}^{2C^*+1}(-1)^{j+1} W_q(\lambda_j^*)$, where $\{\lambda_m^*\}_{m=0}^{2C^*+1}$ are the missing locations in the bootstrap resample, $C^*$ is the number of missing clusters in the bootstrap resample, and $p^*$ denotes the probability measure induced by the bootstrap resampling conditional on a realization of the original time series. Because the missing locations of the bootstrap resamples are fixed to be identical to the missing locations of the data, it follows that $\lambda_j^* = \lambda_j$ and $C^* = C$. Therefore,
$$W_T^* \Rightarrow_{p^*} \widetilde{W}_q'\, P\left(b, B_q(\{\lambda_i\})\right)^{-1}\widetilde{W}_q,$$
which is the same fixed-$b$ limit of $W_T$ as in Theorem 2.3(c). This asymptotic equivalence is mainly due to the fact that the limiting distribution in Theorem 2.3(c) is pivotal with respect to $\Lambda$ and $Q$, so that $W_T^*$ has an asymptotic distribution equivalent to that of $W_T$ even though $\Lambda^*$ and $Q^*$ are potentially different from $\Lambda$ and $Q$. Obviously, $t_T^*$ and $t_T$ have equivalent asymptotic approximations as well. Strengthening Assumption NR$'$3-5 to NR$''$3-5 is sufficient for $w_t^{**}$ to satisfy conditions (a) and (b) above.

Assumption NR$''$.
3. $\|v_t\|_{r+\delta} < \infty$, $r > 2$.
4. $\{v_t\}$ is a weakly stationary $L_{2+\delta}$-NED process on $\{\varepsilon_t\}$ with NED coefficients $\nu_m$ of size $-1$.
5. $\{\varepsilon_t\}$ is an $\alpha$-mixing sequence with $\alpha_m$ of size $-\frac{(2+\delta)(r+\delta)}{r-2}$.

See Gonçalves and Vogelsang (2011) for the proofs. Here the result of Gonçalves and Vogelsang (2011) directly applies because these assumptions are made about the latent process, which has nothing to do with the missing process when the missing locations are non-random. A formal statement of this bootstrap result is given in the following theorem.

Theorem 2.4. Let $W_T^*$ and $t_T^*$ be naive bootstrap test statistics computed from the $i.i.d.$ bootstrap resample with the locations of missing observations fixed and identical to the missing locations of the real data. Let $b \in (0,1]$ be fixed and suppose $M = bT$. Then, under Assumption NR$'$ with Assumption NR$'$3-5 strengthened to Assumption NR$''$3-5, as $T \to \infty$,
$$W_T^* \Rightarrow_{p^*} \widetilde{W}_q'\, P\left(b, B_q(\{\lambda_i\})\right)^{-1}\widetilde{W}_q \quad\text{and}\quad t_T^* \Rightarrow_{p^*} \frac{\widetilde{W}_1}{\sqrt{P\left(b, B_1(\{\lambda_i\})\right)}}.$$

2.4 FINITE SAMPLE PERFORMANCE

In this section we use Monte Carlo simulations to evaluate the finite sample performance of the fixed-$b$ asymptotic approximation of the HAC robust Wald test defined in Section 2.3.

2.4.1 Data Generating Process

We consider a simple location model for the latent process given by
$$y_t = \beta + u_t, \quad u_t = \rho u_{t-1} + \sqrt{1-\rho^2}\,\varepsilon_t, \quad \varepsilon_t \sim i.i.d.\ N(0,1), \quad u_1 = 0,$$
with $t = 1, 2, \ldots, T$ so that $T$ is the time span. We set $\beta = 0$ and $\rho \in \{0, 0.3, 0.6, 0.9\}$. The time series with missing observations is characterized as an AM series
$$y_t^* = x_t^*\beta + u_t^*,$$
where $y_t^* = a_t y_t$, $x_t^* = a_t$, $u_t^* = a_t u_t$. We use several specifications of the missing process, $\{a_t\}$, as follows.

1. For the random missing process we model $\{a_t\}$ as a Bernoulli($p$) process, i.e. $P(a_t = 1) = p$, with $p \in \{0.3, 0.5, 0.7\}$. We provide results for the time span $T \in \{50, 100, 200\}$.

2. We consider three types of non-random missing processes.

(a) First, we consider what we call *missing in clusters*. There are cases where observations are missing in large clusters with a small number of clusters. Specifically, we consider cases where data are missing in two clusters ($C = 2$) due to World War I (from 1914 to 1918) and World War II (from 1939 to 1945). We generate data both yearly and quarterly where the time span runs from 1911 to $Y$, $Y \in \{1946, 1958, 1970\}$. For yearly data, this means that 12 observations are missing out of $T$ observations, $T \in \{36, 48, 60\}$, and the missing process is $a_{[rT]} = 0$ when $r \in (\lambda_1, \lambda_2] \cup (\lambda_3, \lambda_4]$ and $a_{[rT]} = 1$ otherwise, with $\lambda_1 = 3/T$, $\lambda_2 = 8/T$, $\lambda_3 = 28/T$, and $\lambda_4 = 35/T$ (see Figure 2.3). For quarterly data, this implies that 48 observations are missing out of $T$ time periods, $T \in \{144, 192, 240\}$, and $a_{[rT]} = 0$ when $r \in (\lambda_1, \lambda_2] \cup (\lambda_3, \lambda_4]$ and $a_{[rT]} = 1$ otherwise, with $\lambda_1 = 12/T$, $\lambda_2 = 32/T$, $\lambda_3 = 112/T$, and $\lambda_4 = 140/T$. Missing locations are fixed across iterations in the simulations.
(b) Second, we consider *initially scarce data*, following the simulation setup in Neave (1970), where the sampling interval is shortened at some point during the period of observations. Specifically, we think about a case where at first only quarterly data ($N_Q$ observations) were available but later monthly data ($N_M$ observations) became available. Hence the latent process is monthly data, and during the periods when only the quarterly data are available, every two observations out of three are missing. (See Figure 2.4.) We set the number of observations available monthly to be $N_M \in \{12, 24, 48\}$, and the number of observations available quarterly to be $N_Q \in \{12, 24\}$. Under this setting, the number of missing clusters is $C \in \{11, 23\}$, the total time span is $T \in \{46, 58, 82, 94, 118\}$, and $\{22, 46\}$ observations are missing.⁴ The missing process is $a_{[rT]} = 0$ when $r \in \bigcup_{n=1}^{N_Q - 1}(\lambda_{2n-1}, \lambda_{2n}]$ and $a_{[rT]} = 1$ otherwise, with $\lambda_{2n-1} = (3n-2)/T$ and $\lambda_{2n} = 3n/T$. The missing locations are fixed across iterations.

(c) Third, we consider a *conditional Bernoulli($p$)* missing process to compare to the random Bernoulli($p$) missing process. The conditional Bernoulli($p$) missing process differs from the random Bernoulli($p$) missing process in the way in which it is simulated. Once the missing process, $\{a_t\}$, is generated from the corresponding Bernoulli($p$) process for the first iteration, the missing locations are then fixed for subsequent iterations. Hence, all the iterations have the same missing locations, in contrast to the random Bernoulli($p$) process where missing locations change for each iteration. As for the random Bernoulli process, we consider $p \in \{0.3, 0.5, 0.7\}$ and the total time span $T \in \{50, 100, 200\}$.

⁴ The number of missing clusters is $C = N_Q - 1$. The total time span is $T = 3(N_Q - 1) + 1 + N_M$. The number of missing observations is $2(N_Q - 1)$.
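The non-random patterns in (a) and (b) are simple to generate. The sketch below (Python with numpy assumed; the function names are ours) reproduces the yearly World-War indicator and the initially scarce indicator described above.

```python
import numpy as np

def ww_missing(T, start_year=1911):
    """Yearly World War pattern: a_t = 0 for 1914-1918 and 1939-1945."""
    years = start_year + np.arange(T)
    a = np.ones(T, dtype=int)
    a[(years >= 1914) & (years <= 1918)] = 0
    a[(years >= 1939) & (years <= 1945)] = 0
    return a

def initially_scarce(N_Q, N_M):
    """Monthly latent series, quarterly at first: in the first 3(N_Q-1)+1
    months only every third month is observed, then N_M months are fully
    observed, so T = 3(N_Q-1)+1+N_M and 2(N_Q-1) dates are missing."""
    head = np.zeros(3 * (N_Q - 1) + 1, dtype=int)
    head[::3] = 1                      # months 1, 4, 7, ... observed
    return np.concatenate([head, np.ones(N_M, dtype=int)])
```

With $T = 36$ (years 1911-1946) the first generator yields 12 missing years, matching $\lambda_1 = 3/T$, $\lambda_2 = 8/T$, $\lambda_3 = 28/T$, $\lambda_4 = 35/T$; with $N_Q = N_M = 12$ the second yields $T = 46$ with 22 missing dates.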
2.4.2 Test Statistics and Critical Values

With the data generating processes defined in Section 2.4.1, we consider testing the null hypothesis that $\beta = 0$ against the alternative $\beta \ne 0$ at a nominal level of 5%. When computing the HAC estimator, we use $b \in \{0.1, 0.15, \ldots, 0.95, 1\}$ throughout. The HAC robust t-statistic for $\beta$ is
$$t_T = \frac{\hat\beta}{\sqrt{T\left(\sum_{t=1}^T x_t^{*2}\right)^{-1}\hat\Omega\left(\sum_{t=1}^T x_t^{*2}\right)^{-1}}} = \frac{\hat\beta}{\sqrt{T\left(\sum_{t=1}^T a_t^2\right)^{-1}\hat\Omega\left(\sum_{t=1}^T a_t^2\right)^{-1}}} = \frac{\hat\beta}{\sqrt{T\left(\sum_{t=1}^T a_t\right)^{-2}\hat\Omega}},$$
where
$$\hat\beta = \left(\sum_{t=1}^T x_t^{*2}\right)^{-1}\sum_{t=1}^T x_t^* y_t^* = \left(\sum_{t=1}^T a_t\right)^{-1}\sum_{t=1}^T a_t y_t = \left(\sum_{t=1}^T a_t\right)^{-1}\sum_{t=1}^T y_t^*,$$
and $\hat\Omega = T^{-1}\sum_{i=1}^T\sum_{j=1}^T k\left(|i-j|/[bT]\right)\hat{v}_i\hat{v}_j$ with $\hat{v}_t = a_t(y_t - \hat\beta)$. We reject the null hypothesis whenever $|t_T| > t_c$ (or reject the null whenever $t_T < t_c^l$ or $t_T > t_c^r$ if $-t_c^l \ne t_c^r$), where $t_c$ is a critical value. Using 10,000 replications, we compute empirical rejection probabilities. As shown by Theorems 2.1(c) and 2.3(c), the test statistics have different asymptotic distributions depending on whether the missing process is random or non-random. Hence critical values are calculated differently for the two cases.

When the missing process is random, $t_c$ is the 97.5% percentile of the standard fixed-$b$ asymptotic distribution derived by Kiefer and Vogelsang (2005) (see Theorem 2.1(c)). From Section 2.3.1 we know that we can compute the asymptotic critical values either by simulating the distribution itself or by the naive moving block bootstrap, which we denote as $\{t_c^{R\text{-}boot,l}, t_c^{R\text{-}boot,r}\}$. To evaluate the finite sample performance we use both of the methods to get the critical values. For the naive moving block bootstrap, we use block length $l = 1$ (the $i.i.d.$ bootstrap). From the original random sample of $T$ observations, $y_1^*, \ldots, y_T^*$, we get 999 bootstrap resamples, $(y_1^B, a_1^B), \ldots, (y_T^B, a_T^B)$, $B = 1, \ldots, 999$. For each bootstrap resample we compute the bootstrap t-statistic as
$$t_T^B = \frac{\hat\beta^B - \hat\beta}{\sqrt{T\left(\sum_{t=1}^T a_t^B\right)^{-2}\hat\Omega^B}},$$
where $\hat\beta^B = \left(\sum_{t=1}^T a_t^B\right)^{-1}\sum_{t=1}^T y_t^B$ and $\hat\Omega^B = T^{-1}\sum_{t=1}^T\sum_{s=1}^T k\left(|t-s|/[bT]\right)\hat{v}_t^B\hat{v}_s^B$, where $\hat{v}_t^B = a_t^B(y_t^B - \hat\beta^B)$. Then $t_c^{R\text{-}boot,l}$ is the 0.025 quantile and $t_c^{R\text{-}boot,r}$ is the 0.975 quantile of $t_T^B$, $B = 1, \ldots, 999$.

When the missing process is non-random, $t_c$ is the 97.5% percentile of the distribution derived in Theorem 2.3(c). From Section 2.3.2 we know that critical values can be computed either by simulating the limiting distribution or by the naive $i.i.d.$ bootstrap (see Theorem 2.4), which we denote as $\{t_c^{NR\text{-}boot,l}, t_c^{NR\text{-}boot,r}\}$. Because the limiting distribution depends on missing locations, the naive $i.i.d.$ bootstrap is more convenient in practice. However, to illustrate the relative finite sample performance, we compute the critical values using both methods. From the original sample of $T$ observations, $y_1^*, \ldots, y_T^*$, we pull out the data from the observed time periods, $\tilde{y}_1, \ldots, \tilde{y}_{\tilde{T}}$, $\tilde{T} = \sum_{t=1}^T a_t$. From these $\tilde{T}$ observations, we resample $\tilde{T}$ observations with replacement. Repeating this procedure 999 times, we obtain resamples which we denote $\tilde{y}_1^B, \ldots, \tilde{y}_{\tilde{T}}^B$, $B = 1, \ldots, 999$. By filling the observed locations with resampled data, $\tilde{y}_t^B$, and the missing locations with zeros, we obtain the $i.i.d.$ bootstrap resamples, which we denote $y_1^B, \ldots, y_T^B$, for $B = 1, \ldots, 999$. We compute the naive bootstrap t-statistic as
$$t_T^B = \frac{\hat\beta^B - \hat\beta}{\sqrt{T\left(\sum_{t=1}^T a_t\right)^{-2}\hat\Omega^B}},$$
where $\hat\beta^B = \left(\sum_{t=1}^T a_t\right)^{-1}\sum_{t=1}^T y_t^B$ and $\hat\Omega^B = T^{-1}\sum_{t=1}^T\sum_{s=1}^T k\left(|t-s|/[bT]\right)\hat{v}_t^B\hat{v}_s^B$, where $\hat{v}_t^B = a_t(y_t^B - \hat\beta^B)$. Then $t_c^{NR\text{-}boot,l}$ is the 0.025 quantile and $t_c^{NR\text{-}boot,r}$ is the 0.975 quantile of $t_T^B$, $B = 1, \ldots, 999$. Note that we are using $a_t$ rather than $a_t^B$ in this case because we are conditioning on the locations of the missing data when resampling.
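Putting the pieces together for the location model, the HAC robust $t_T$ above can be computed directly from the AM series. The following Python sketch (numpy assumed; the Bartlett kernel is chosen for illustration and `am_tstat` is our name) mirrors the formulas for $\hat\beta$, $\hat\Omega$, and $t_T$:

```python
import numpy as np

def am_tstat(y_star, a, b):
    """HAC robust t statistic for H0: beta = 0 in the AM location model
    y*_t = a_t beta + u*_t, Bartlett kernel, bandwidth M = [bT]."""
    y_star = np.asarray(y_star, dtype=float)
    a = np.asarray(a, dtype=float)
    T = len(y_star)
    n_obs = a.sum()
    beta_hat = y_star.sum() / n_obs           # mean of the observed data
    vhat = y_star - a * beta_hat              # vhat_t = a_t(y_t - beta_hat)
    M = max(int(b * T), 1)
    # Omega_hat = T^{-1} sum_i sum_j k(|i-j|/M) vhat_i vhat_j
    omega = 0.0
    for i in range(T):
        for j in range(T):
            w = max(0.0, 1.0 - abs(i - j) / M)  # Bartlett weight
            omega += w * vhat[i] * vhat[j]
    omega /= T
    # t_T = beta_hat * (sum_t a_t) / sqrt(T * Omega_hat)
    return beta_hat * n_obs / np.sqrt(T * omega)
```

The bootstrap t-statistics $t_T^B$ reuse the same function on each resample (with $\hat\beta^B - \hat\beta$ replacing $\hat\beta$ in the numerator); the $O(T^2)$ double loop is kept for transparency rather than speed.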
2.4.3 Finite Sample Performance

We first illustrate the non-random missing process case. Figures 2.5-2.34 show empirical rejection probabilities computed from 10,000 replications using AM series for the four missing processes defined in Section 2.4.1. Since the missing process is non-random, by Theorem 2.3(c) the HAC robust test statistics have a fixed-$b$ asymptotic distribution that depends on the missing locations. Critical values are obtained by the naive $i.i.d.$ bootstrap with fixed locations of missing observations (labeled *L-bootstrap*) or by directly simulating the limiting fixed-$b$ distributions conditional on the missing locations (labeled *L-fixed-b*). In addition to these two critical values, we consider critical values obtained by the naive $i.i.d.$ bootstrap that does not condition on missing locations (labeled *bootstrap*) and by simulating the standard fixed-$b$ limit in Kiefer and Vogelsang (2005) (labeled *fixed-b*) for comparison, although these two critical values are not theoretically valid.

The first thing we can notice is that the fixed-$b$ critical values that treat missing locations as fixed have less size distortion than the standard fixed-$b$ critical values when the sample size is small and/or serial correlation is high. This difference tends to increase as the number of missing observations increases. For the World War missing process, when $T = 36$ and $\rho \in \{0.6, 0.9\}$ (Figure 2.5), empirical rejection probabilities based on the fixed-$b$ limit that depends on the locations of missing observations have less size distortion than those based on the usual fixed-$b$ limit, and this size difference is bigger when $\rho = 0.9$ than when $\rho = 0.6$.
Comparing the World War missing process with $T = 36$ to that with $T = 144$ (Figures 2.5 and 2.8), we see that, while for both cases one third of the data are missing, the difference in rejection probabilities between the usual fixed-$b$ and the fixed-$b$ conditional on the locations is bigger when $T = 36$, the smaller sample size. A similar tendency can be found for the conditional Bernoulli missing process. Consider the simulation with $T = 50$, $\rho = 0.9$, and $p = 0.3$ (70% missing) as a base case (Figure 2.17). If we compare this base case to the simulations where (i) $T = 100$, $\rho = 0.9$, and $p = 0.3$ (70% missing) (Figure 2.20), (ii) $T = 50$, $\rho = 0.6$, and $p = 0.3$ (70% missing) (Figure 2.17), and (iii) $T = 50$, $\rho = 0.9$, and $p = 0.5$ (50% missing) (Figure 2.18), it is always the base case that has a bigger difference in rejection probabilities between the usual fixed-$b$ and the fixed-$b$ that depends on the locations of missing observations. Relative to the base case, these three cases have one of three features: a bigger sample size (i), less serial correlation (ii), and a smaller missing proportion (iii). Even though rejection rates based on the standard fixed-$b$ critical values and the conditional fixed-$b$ critical values are sometimes similar, the simulations show that the more prudent approach is to use the conditional fixed-$b$ critical values, as was predicted by the theoretical results.

The simulation results also suggest that one may gain by bootstrapping rather than simulating the fixed-$b$ distribution, especially when the serial correlation is high, the sample size is small, or the missing proportion is large. This tendency holds regardless of whether missing locations are treated as fixed or not in the bootstrap resampling scheme. The empirical rejection probabilities from the naive $i.i.d.$ bootstrap with fixed missing locations have less size distortion than the empirical rejection probabilities obtained by simulating the fixed-$b$ distribution in Theorem 2.3(c). The same thing holds between the standard fixed-$b$ limit and the naive $i.i.d.$ bootstrap that does not condition on missing locations. For example, if we look at the $T = 50$ Bernoulli (Figures 2.17-2.19), $(N_Q = 12, N_M = 12)$ initially scarce (Figure 2.11), and $T = 36$ World War missing (Figure 2.5) cases, all of which have a small time span, we see differences between the rejection rates from bootstrapping and from simulating the fixed-$b$ distribution, especially when $\rho = 0.9$. Comparing the World War missing process with $T = 48$ (Figure 2.6) and $T = 144$ (Figure 2.8), we see that, even though $T = 144$ has a bigger time span, there is still a bigger bootstrap gain with $T = 144$, because when $T = 48$ around one fourth of the data are missing whereas for $T = 144$ around one third of the observations are missing. Overall, the simulations indicate that the naive $i.i.d.$ bootstrap with fixed locations seems most robust to breakdowns of the stationary asymptotic approximations.

In addition, it appears that the pattern of missing locations matters as well. For the conditional Bernoulli missing process with $p = 0.5$ and $T = 50$ (Figure 2.18) we see a difference between the fixed-$b$ critical value conditional on missing locations and the usual fixed-$b$ critical value. On the other hand, for initially scarce data with $N_Q = 12$ and $N_M = 12$ (Figure 2.11) or $N_Q = 12$ and $N_M = 24$ (Figure 2.12), where both have a time span of around 50 and around half of the observations missing ($24/46$ and $24/58$ respectively), the difference between the usual fixed-$b$ critical value and the conditional fixed-$b$ critical value is quite small. For initially scarce data, the maximum length of a missing cluster is 2 and the observations are missing only at the beginning of the sample.

When the missing process is random, the usual fixed-$b$ limit in Kiefer and Vogelsang (2005) is valid. Hence critical values can be obtained by simulating this distribution or by the naive $i.i.d.$ bootstrap. As with the non-random missing process, for comparison, we also consider critical values obtained from the naive $i.i.d.$ bootstrap conditional on the missing locations. Figures 2.26-2.34 show empirical rejection probabilities computed from 10,000 replications using the AM series. Results for the random missing process are similar to the non-random missing case. The conditional fixed-$b$ limit in general performs no worse than the usual fixed-$b$ limit, and the conditional fixed-$b$ limit becomes advantageous when the data is not well-behaved. However, this tendency is less strong than for the non-random Bernoulli missing process. Starting with the $T = 100$ and $p = 0.5$ (50% missing) case and moving to cases where either $p$ or $T$ increases (a smaller missing proportion or an increased time span), the difference between the conditional and unconditional fixed-$b$ rejection rates disappears for the random Bernoulli missing process. When $T = 200$, even with $p = 0.3$ (70% missing) and $\rho = 0.9$ (Figure 2.32), there is no difference between the two rejection probabilities. Given that it is better to use conditional-on-locations critical values when the missing process is non-random, and given that it appears conditioning on locations causes no harm when the missing process is random, our simulations suggest the use of conditional-on-locations critical values in practice. The bootstrap is the most convenient way to obtain these critical values, given that each application with missing data will have application-specific missing locations.
2.5 WHEN MISSING OBSERVATIONS ARE IGNORED

In practice an empirical researcher might be tempted to simply ignore any missing data problems and estimate the time series regression with the data that is observed. From the perspective of estimating $\beta$ this has no consequences, because one obtains the same estimator of $\beta$ as is obtained when missing observations are replaced with zeros. For the computation of long run variance estimators, ignoring missing data matters because the time distances between observations are distorted for many pairs of time periods. Thus, robust test statistics are computationally different when ignoring missing data versus replacing missing data with zeros. A reasonable conjecture is that ignoring missing data invalidates inference using HAC robust test statistics. Surprisingly, fixed-$b$ asymptotic theory suggests otherwise. As we show in this section, ignoring the missing data leads to HAC robust tests that have *standard* fixed-$b$ limits. This is true whether the missing process is random or non-random. In contrast to the AM series approach, the empirical researcher does not have to worry about robustness to whether missing dates are best viewed as random or non-random.

2.5.1 Models and Test Statistics

In terms of the regression model, ignoring missing observations amounts to stacking only the observed observations as if they are equally spaced in time. (See Figure 2.2.) Taking out the missing observations from the latent process and relabeling the observed observations, the regression model becomes
$$y_t^{ES} = x_t^{ES\prime}\beta + u_t^{ES}, \quad t = 1, 2, \ldots, T_{ES}, \qquad (2.5)$$
where $T_{ES} = \sum_{t=1}^T a_t$ is the number of non-missing observations. Following Datta and Du (2012) we call this model the *equal space (ES) regression model*.

Figure 2.2: Equal space regression model (the observed dates of the series in Figure 2.1 are relabeled consecutively as $y_1^{ES}, \ldots, y_{T_{ES}}^{ES}$ with the missing dates deleted).

As with the AM series, the ES regression model uses only the observed data. No attempt is made to forecast or proxy missing observations. However, unlike the AM series, the original time distances between observations are not preserved in the ES regression model. The distance between the $t$th and $s$th observations (in terms of the latent process) is not necessarily $|t - s|$ but is instead equal to $|\sum_{i=1}^t a_i - \sum_{i=1}^s a_i|$, which is the number of observed data points between time periods $t$ and $s$. Only when there are no missing observations between time periods $t$ and $s$ will the time distance remain $|t - s|$ in (2.5).

The OLS estimator of the ES regression model is defined as
$$\hat\beta_{ES} = \left(\sum_{t=1}^{T_{ES}} x_t^{ES} x_t^{ES\prime}\right)^{-1}\sum_{t=1}^{T_{ES}} x_t^{ES} y_t^{ES}.$$
Recall that missing observations are replaced with zeros in the AM series and missing observations are deleted in the ES regression model. Because the only difference between $\sum_{t=1}^T x_t^* x_t^{*\prime}$ and $\sum_{t=1}^{T_{ES}} x_t^{ES} x_t^{ES\prime}$ comes from the missing observations, which are set to zeros, it follows that $\sum_{t=1}^T x_t^* x_t^{*\prime} = \sum_{t=1}^{T_{ES}} x_t^{ES} x_t^{ES\prime}$. By the same reasoning, $\sum_{t=1}^T x_t^* y_t^* = \sum_{t=1}^{T_{ES}} x_t^{ES} y_t^{ES}$. Therefore, it follows that
$$\hat\beta = \hat\beta_{ES}.$$
Hence, in terms of the OLS estimator, the ES regression model provides the same estimate of $\beta$ as the AM series.

Let $\Omega_{ES} = \lim_{T\to\infty}\mathrm{Var}\left(T_{ES}^{-1/2}\sum_{t=1}^{T_{ES}} v_t^{ES}\right)$, where $v_t^{ES} = x_t^{ES}u_t^{ES}$. Then the usual kernel based HAC estimator of $\Omega_{ES}$ is defined as
$$\hat\Omega_{ES} = \hat\Gamma_0^{ES} + \sum_{j=1}^{T_{ES}-1} k\left(\frac{j}{M_{ES}}\right)\left(\hat\Gamma_j^{ES} + \hat\Gamma_j^{ES\prime}\right),$$
where $\hat\Gamma_j^{ES} = T_{ES}^{-1}\sum_{t=j+1}^{T_{ES}}\hat{v}_t^{ES}\hat{v}_{t-j}^{ES\prime}$ are the sample autocovariances of $\hat{v}_t^{ES} = x_t^{ES}\hat{u}_t^{ES}$, and $\hat{u}_t^{ES} = y_t^{ES} - x_t^{ES\prime}\hat\beta_{ES}$ are the OLS residuals from the ES regression model. As before, $k(x)$ is a kernel function such that $k(x) = k(-x)$, $k(0) = 1$, $|k(x)| \le 1$, $k(x)$ is continuous at $x = 0$, and $\int_{-\infty}^{\infty} k^2(x)\,dx < \infty$; $M_{ES}$ is the bandwidth.⁵ By well known algebra we can rewrite $\hat\Omega_{ES}$ as
$$\hat\Omega_{ES} = T_{ES}^{-1}\sum_{n=1}^{T_{ES}}\sum_{m=1}^{T_{ES}} k\left(\frac{n-m}{M_{ES}}\right)\hat{v}_n^{ES}\hat{v}_m^{ES\prime}.$$

⁵ We denote the bandwidth of the ES regression model as $M$ with subscript $ES$ because, if we fix $b = M_{ES}/T_{ES}$, then $M_{ES}$ depends on the time span of the ES regression model ($T_{ES}$).
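The only substantive difference between the AM and ES long run variance estimators is the kernel weight attached to each pair of residual products. A small Python sketch (numpy assumed, Bartlett kernel for illustration; `es_weight_matrix` is our name) makes the reweighting in observed time $g_t = \sum_{i\le t} a_i$ explicit:

```python
import numpy as np

def es_weight_matrix(a, M_ES):
    """Bartlett weights that the ES estimator attaches to vhat_t vhat_s':
    distances are counted in observed time g_t = sum_{i<=t} a_i, so a pair
    straddling a missing cluster gets at least the weight k((t-s)/M)
    that the AM estimator would use with M = M_ES."""
    a = np.asarray(a)
    g = np.cumsum(a)
    d = np.abs(g[:, None] - g[None, :]) / M_ES   # ES time distances
    return np.maximum(0.0, 1.0 - d)
```

For `a = [1, 1, 0, 0, 1]` and `M_ES = 2`, the pair $(t, s) = (2, 5)$ has true distance 3 (Bartlett weight 0) but ES distance 1 (weight 0.5), illustrating that the ES weights are entry-wise at least as large as the AM weights for the same bandwidth.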
Recall that $\hat{v}_t = x_t^*\hat{u}_t^*$, where $\hat{u}_t^*$ are the OLS residuals from the AM series. Therefore, by construction, $\hat{v}_t = x_t^*\hat{u}_t^* = a_t x_t(y_t - x_t'\hat\beta)$, which implies that $\hat{v}_t = 0$ at missing dates. Because the ES regression model and the AM series share the same $\hat\beta$, $\hat{v}_t^{ES}$ is obtained by dropping the missing observations from $\hat{v}_t$.⁶ Using these facts, we can recast $\hat\Omega_{ES}$, which is a weighted sum of $\hat{v}_n^{ES}\hat{v}_m^{ES\prime}$, instead as a weighted sum of $\hat{v}_t\hat{v}_s'$, because all the elements of $\hat{v}_t^{ES}\hat{v}_s^{ES\prime}$ are found among those of $\hat{v}_t\hat{v}_s'$, with the remaining elements of $\hat{v}_t\hat{v}_s'$ being zeros. The only complication that arises when rewriting $\hat\Omega_{ES}$ in terms of $\hat{v}_t\hat{v}_s'$ lies in matching the kernel weights used on $\hat{v}_t\hat{v}_s'$ to those used by $\hat\Omega_{ES}$, which are different from the weights used by $\hat\Omega$.

The time distance between $\hat{v}_t$ and $\hat{v}_s$ in the ES regression model is $|\sum_{i=1}^t a_i - \sum_{i=1}^s a_i|$, and we can rewrite $\hat\Omega_{ES}$ as
$$\hat\Omega_{ES} = T_{ES}^{-1}\sum_{t=1}^T\sum_{s=1}^T k\left(\frac{\sum_{i=1}^t a_i - \sum_{i=1}^s a_i}{M_{ES}}\right)\hat{v}_t\hat{v}_s'.$$
Recall that the HAC estimator of the AM series is given by (2.3). Both $\hat\Omega_{ES}$ and $\hat\Omega$ are weighted sums of $\hat{v}_t\hat{v}_s'$, $t, s = 1, \ldots, T$, but with different weights. For the ES regression model, by taking out the missing observations, the time distances between observations become shorter than the true time distances, $|t - s|$, unless there are no missing observations between $t$ and $s$. Therefore, $\hat\Omega_{ES}$ gives $\hat{v}_t\hat{v}_s'$ weights at least as big as $\hat\Omega$ does if the same bandwidth is used: $k\left(\left(\sum_{i=1}^t a_i - \sum_{i=1}^s a_i\right)/M_{ES}\right) \ge k\left((t-s)/M\right)$ if $M = M_{ES}$.

We now revisit testing the null hypothesis $H_0: r(\beta) = 0$ against $H_A: r(\beta) \ne 0$.

⁶ To be more precise, $\hat{v}_t = \hat{v}_g^{ES}$ with $g = \sum_{i=1}^t a_i$ whenever $a_t = 1$. $\hat{v}_g^{ES}$ for all $g = 1, \ldots, T_{ES}$ can be defined this way. When $a_i = 0$, there is no term in the ES regression model that matches $\hat{v}_i$ (which is zero) because missing observations are dropped in the ES regression model.
WetheESHACrobustWaldstatisticas W ES T = T ES r ‹ b 0 h R ‹ b ‹ Q ES 1 ‹ W ES ‹ Q ES 1 R ‹ b 0 i 1 r ‹ b , andwhen q = 1,t-statisticsoftheform t ES T = p T ES r ‹ b q R ‹ b ‹ Q ES 1 ‹ W ES ‹ Q ES 1 R ‹ b 0 , where ‹ Q ES = T 1 ES å T ES t = 1 x ES t x ES 0 t .Notethat å T t = 1 x t x 0 t = å T ES t = 1 x ES t x ES 0 t implies ‹ Q ES = ( T / T ES ) ‹ Q .Wethereforecanwrite W ES T as W ES T = Tr ‹ b 0 T ES T R ‹ b ‹ Q 1 ‹ W ES ‹ Q 1 R ‹ b 0 1 r ‹ b . Otherthanthescalingfactor T ES / T and ‹ W ES ,theothertermsin W ES T areidenticalto W T . Therefore,intermsofteststatistics,choosingbetweentheAMseriesstatisticsandtheES statisticboilsdowntochoosingthekernelweightswhencomputingtheHACestimator. 2.5.2AsymptoticTheory AswiththeAMseries,wearemainlyinterestedinthe b asymptoticlimitsof W ES T and t ES T underthenullhypothesis H 0 inSection2.5.1. 2.5.2.1Non-randommissingprocess Weconsiderthenon-randommissingprocesscase.Becausethe b asymptotic distributionsdependonthekernelsusedtocomputetheHACestimators,weneedto 73 somerandommatricesthatappearintheasymptoticresults.Therandommatri- cesin1nolongerworkherebecausethekernelweightsin ‹ W ES aredifferent fromthoseof ‹ W .Infactbecausethekernelweightfor ‹ v t ‹ v 0 s dependsonthenumberof missingobservationsbetween t and s ,unliketherandommatricesin 1,therandommatricesthatappearintheasymptoticapproximationof ‹ W ES dependon themissinglocations f l i g 2 C + 1 i = 0 .Notethatfortwonumbers r and s , r ^ s denotesthe minimumof r and s and r _ s denotethemaximumof r and s . 3. 
Let $h > 0$ be an integer. Let $\{\lambda_i\}_{i=0}^{2C+1}$ be given by Assumption NR.1 and let $\lambda$ be given by (2.4). Let $B_h(r, \{\lambda_i\})$ denote a generic $h \times 1$ vector of stochastic processes that depends on $\{\lambda_i\}$. Define the random matrix $P^{ES}(b, B_h(\{\lambda_i\}))$ as follows for $b \in (0,1]$:

Case (i): if $k(x)$ is twice continuously differentiable everywhere,
\[
P^{ES}\big(b, B_h(\{\lambda_i\})\big) \equiv -\frac{1}{b^2 \lambda^3} \sum_{n=0}^{C} \sum_{l=0}^{C} \int_{\lambda_{2n}}^{\lambda_{2n+1}} \int_{\lambda_{2l}}^{\lambda_{2l+1}} k''\left( \frac{\sum_{j=1}^{2n+1} (-1)^{j+1} (r \wedge \lambda_j) - \sum_{j=1}^{2l+1} (-1)^{j+1} (u \wedge \lambda_j)}{b \lambda} \right) B_h(r, \{\lambda_i\}) B_h(u, \{\lambda_i\})' \, du \, dr,
\]

Case (ii): if $k(x)$ is continuous, $k(x) = 0$ for $|x| \geq 1$, and $k(x)$ is twice continuously differentiable everywhere except at $|x| = 1$,
\[
\begin{aligned}
P^{ES}\big(b, B_h(\{\lambda_i\})\big) \equiv {}& -\frac{1}{b^2 \lambda^3} \sum_{n=0}^{C} \sum_{l=0}^{C} \int_{\lambda_{2n}}^{\lambda_{2n+1}} \int_{\lambda_{2l}}^{\lambda_{2l+1}} 1\Big\{ |r - u| < b\lambda + \textstyle\sum_{j=2(n \wedge l)+1}^{2(n \vee l)} (-1)^j \lambda_j \Big\} \\
& \quad \times k''\left( \frac{\sum_{j=1}^{2n+1} (-1)^{j+1} (r \wedge \lambda_j) - \sum_{j=1}^{2l+1} (-1)^{j+1} (u \wedge \lambda_j)}{b \lambda} \right) B_h(r, \{\lambda_i\}) B_h(u, \{\lambda_i\})' \, dr \, du \\
& + \frac{k_-'(1)}{b \lambda^2} \sum_{n=0}^{C} \sum_{l=0}^{n} \int_{\lambda_{2l}}^{\lambda_{2l+1}} 1\Big\{ \lambda_{2n} - b\lambda - \textstyle\sum_{j=2l+1}^{2n} (-1)^j \lambda_j < u \leq \lambda_{2n+1} - b\lambda - \textstyle\sum_{j=2l+1}^{2n} (-1)^j \lambda_j \Big\} \\
& \quad \times \Big\{ B_h\big(u + b\lambda + \textstyle\sum_{j=2l+1}^{2n} (-1)^j \lambda_j, \{\lambda_i\}\big) B_h(u, \{\lambda_i\})' + B_h(u, \{\lambda_i\}) B_h\big(u + b\lambda + \textstyle\sum_{j=2l+1}^{2n} (-1)^j \lambda_j, \{\lambda_i\}\big)' \Big\} \, du,
\end{aligned}
\]
where $k_-'(1) = \lim_{h \to 0} \big[ (k(1) - k(1-h))/h \big]$,

Case (iii): if $k(x)$ is the Bartlett kernel,
\[
\begin{aligned}
P^{ES}\big(b, B_h(\{\lambda_i\})\big) \equiv {}& \frac{2}{b \lambda^2} \sum_{n=0}^{C} \int_{\lambda_{2n}}^{\lambda_{2n+1}} B_h(r, \{\lambda_i\}) B_h(r, \{\lambda_i\})' \, dr \\
& - \frac{1}{b \lambda^2} \sum_{n=0}^{C} \sum_{l=0}^{n} \int_{\lambda_{2l}}^{\lambda_{2l+1}} 1\Big\{ \lambda_{2n} - b\lambda - \textstyle\sum_{k=2l+1}^{2n} (-1)^k \lambda_k < u \leq \lambda_{2n+1} - b\lambda - \textstyle\sum_{k=2l+1}^{2n} (-1)^k \lambda_k \Big\} \\
& \quad \times \Big\{ B_h(u, \{\lambda_i\}) B_h\big(u + b\lambda + \textstyle\sum_{k=2l+1}^{2n} (-1)^k \lambda_k, \{\lambda_i\}\big)' + B_h\big(u + b\lambda + \textstyle\sum_{k=2l+1}^{2n} (-1)^k \lambda_k, \{\lambda_i\}\big) B_h(u, \{\lambda_i\})' \Big\} \, du.
\end{aligned}
\]
When the missing process is non-random, the asymptotic theory for the ES regression model is based on Assumption NR$'$, the same assumption on which the AM series results are based. Theorem 2.5 below provides the asymptotic limits of $\hat{\Omega}_{ES}$ and $W_T^{ES}$ ($t_T^{ES}$ when $q = 1$) when the missing process is non-random. Because the ES regression model and the AM series model have the same OLS estimator, we do not restate the asymptotic result for the OLS estimator given by Theorem 2.3(a). The proof of Theorem 2.5 is provided in Appendix D.

Theorem 2.5. Let $W_k$ be as in Theorem 2.3. Let $B_k(r, \{\lambda_i\})$ be a $k \times 1$ vector of stochastic processes defined as
\[
B_k\big(r, \{\lambda_i\}\big) \equiv \sum_{n=0}^{C} 1\big\{ \lambda_{2n} < r \leq \lambda_{2(n+1)} \big\} \sum_{j=1}^{2n+1} (-1)^{j+1} \left[ W_k(r \wedge \lambda_j) - \frac{r \wedge \lambda_j}{\lambda} W_k(\lambda) \right],
\]
for $r \in (0,1]$. Assume $M_{ES} = b T_{ES}$, where $b \in (0,1]$ is fixed. Then, under Assumption NR$'$, as $T \to \infty$,

(a) (Fixed-$b$ asymptotic approximation of $\hat{\Omega}_{ES}$)
\[
\hat{\Omega}_{ES} \Rightarrow \Lambda\, P^{ES}\big(b, B_k(\{\lambda_i\})\big) \Lambda',
\]

(b) (Fixed-$b$ asymptotic distribution of $W_T^{ES}$) under $H_0$,
\[
W_T^{ES} \Rightarrow W_q(\lambda)' \left[ \lambda\, P^{ES}\big(b, B_q(\{\lambda_i\})\big) \right]^{-1} W_q(\lambda),
\]
and, when $q = 1$,
\[
t_T^{ES} \Rightarrow \frac{W_1(\lambda)}{\sqrt{\lambda\, P^{ES}\big(b, B_1(\{\lambda_i\})\big)}}.
\]

Although the limits of $\hat{\Omega}$ and $\hat{\Omega}_{ES}$ differ in the form of the functions $P(\cdot)$ and $P^{ES}(\cdot)$, because of the different relative distances between observations, the limit of $\hat{\Omega}_{ES}$ is proportional to $\Omega$ and is a function of $B_q(r, \{\lambda_i\})$, just like that of $\hat{\Omega}$. Similar to $W_T$, $W_T^{ES}$ has a limiting distribution that is non-standard and depends on the locations of the missing data but remains pivotal with respect to $\Omega$ and $Q$.

Surprisingly, it turns out that the asymptotic distribution in Theorem 2.5(b) is equivalent to the standard fixed-$b$ asymptotic distribution in Kiefer and Vogelsang (2005) with $b = M_{ES}/T_{ES}$. To establish this result we first consider the special case where the latent process is i.i.d. When the latent process is i.i.d., the ES regression model is a time series regression with $T_{ES}$ observations and there is no serial correlation in the data. Therefore, $W_T^{ES}$ is the usual HAC statistic computed with $T_{ES}$ observations. For $M_{ES} = b T_{ES}$, $W_T^{ES}$ has the usual fixed-$b$ limit because the results of Kiefer and Vogelsang (2005) directly apply. Intuitively speaking, when the data is i.i.
d., the time distances between observations do not matter and missing observations only reduce the sample size. Therefore, the fixed-$b$ theory goes through as usual. This result for the i.i.d. case is formally stated in the following lemma.

Lemma 1. Let the missing process $\{a_t\}$ be non-random. The latent process is given by (2.1). Suppose that $\{(x_t, u_t)\}$ is i.i.d. Assume $M_{ES} = b T_{ES}$, where $b \in (0,1]$ is fixed. Then, under $H_0$, as $T \to \infty$,
\[
W_T^{ES} \Rightarrow W_q(1)' \left[ P\big(b, \widetilde{B}_q\big) \right]^{-1} W_q(1),
\]
and, when $q = 1$,
\[
t_T^{ES} \Rightarrow \frac{W_1(1)}{\sqrt{P\big(b, \widetilde{B}_1\big)}}.
\]

Because the i.i.d. assumption needed for Lemma 1 is a special case of the conditions needed for Theorem 2.5, the fixed-$b$ limits given by Theorem 2.5(b) and Lemma 1 are distributionally equivalent. Because the limits in Theorem 2.5(b) continue to hold when the data is not i.i.d., the following theorem holds as a direct consequence of Lemma 1:

Theorem 2.6. Assume $M_{ES} = b T_{ES}$, where $b \in (0,1]$ is fixed. Then, under Assumption NR$'$ and $H_0$, as $T \to \infty$,
\[
W_T^{ES} \Rightarrow W_q(1)' \left[ P\big(b, \widetilde{B}_q\big) \right]^{-1} W_q(1),
\]
and, when $q = 1$,
\[
t_T^{ES} \Rightarrow \frac{W_1(1)}{\sqrt{P\big(b, \widetilde{B}_1\big)}}.
\]

2.5.2.2 Random missing process

Now we consider the case where the missing process is random and explore the asymptotic properties of $W_T^{ES}$ ($t_T^{ES}$ when $q = 1$). From Theorem 2.6 we can easily deduce the asymptotic limit of $W_T^{ES}$ in the missing at random case. Suppose that Assumption R$'$ holds, and suppose we condition on the missing process $\{a_t\}$. Conceptually this is the same as treating the missing process as non-random. Recall that Assumption R$'$ and Assumption NR$'$ are identical in terms of the latent process. Thus the fixed-$b$ limiting distribution of $W_T^{ES}$ conditional on the missing process is given by Theorem 2.6. Because the conditional distribution in Theorem 2.6 is the standard fixed-$b$ distribution in Kiefer and Vogelsang (2005) and does not depend on the conditioning process $\{a_t\}$ (does not depend on $\{\lambda_i\}$), it directly follows that the unconditional limiting distribution of $W_T^{ES}$ must also be the distribution in Theorem 2.6. Formally, we have the following result.

Theorem 2.7. Assume $M_{ES} = b T_{ES}$, where $b \in (0,1]$ is fixed. Then, under Assumption R$'$ and $H_0$, as $T \to \infty$,
\[
W_T^{ES} \Rightarrow W_q(1)' \left[ P\big(b, \widetilde{B}_q\big) \right]^{-1} W_q(1),
\]
RemarkablyfortheESregressionmodel,regardlessofwhetherthemissingprocess israndomornon-random,theHACrobustwaldstatisticandthe t -statistichaveusual b asymptoticdistributionasinKieferandVogelsang(2005).Asdiscussedforthe AMseriescasewitharandommissingprocess,thestandard b criticalvaluescan beobtainedusingvarioussimulationandnumericalmethods.Itisworthnotingthatfor theequallyspacedcase,the i . i . d .bootstrapcanbeappliedtotheequallyspaceddataas theresultsofGonçalvesandVogelsang(2011)directlyapplyundertheassumptionsof Theorem2.4. 2.5.3FiniteSampleProperties Inthissectionweanalyzethesampleperformanceof W ES T usingMonteCarlosim- ulations.WeusethesamedatageneratingprocessinSection2.4.1.Fromthe simplelocationmodelinSection2.4.1thatliesonthetimespan T weconstructtheES regressionmodel y ES t = b + u ES t , t = 1,..., T ES , with T ES = å T t = 1 a t .Weset b = 0and r 2 f 0,0.3,0.6,0.9 g asfortheAMseries.The HACrobustt-statisticfor b is t ES T = p T ES ‹ b q ‹ W ES , where ‹ b = å T t = 1 y t / T ES and ‹ W ES = T 1 ES å T ES i = 1 å T ES j = 1 k ( j i j j / [ bT ES ] ) ‹ v ES i ‹ v ES j with ‹ v ES t = y ES t ‹ b .AswiththeAMseries,weuse b 2f 0.1,0.15,...,1 g .Werejectthenullhypoth- esiswhenever t ES T > t c (orrejectthenullwhenever t ES T < t l c or t ES T > t r c if t l c 6 = t r c ) where t c isacriticalvalue.FromSection2.5.2,weknowthat t ES T hasthestandard b limitingdistributioninKieferandVogelsang(2005)whetherthemissingprocessis 78 randomornon-random.Therefore t c isthe97.5%percentileofthe b asymptotic distributioninKieferandVogelsang(2005).Criticalvaluescanbecomputedeitherby directlysimulatingthelimitingdistribution ( t ES c ) orbyusingthenaive i . i . d .bootstrap ( f t ES boot , l c , t ES boot , r c g ) .Wecomputecriticalvaluesusingbothmethods.Fromthe originalsampleof T ES observedobservations, y ES 1 ,..., y ES T ES ,weresample T ES obser- vationswithreplacement.Repeatingthisprocedure999timesweobtain i . i . 
d. bootstrap resamples for the ES regression model, which we denote $y_1^{ES,B}, \ldots, y_{T_{ES}}^{ES,B}$, $B = 1, \ldots, 999$. We compute the naive bootstrap t-statistic
\[
t_T^{ES,B} = \frac{\sqrt{T_{ES}} \big( \hat{\beta}^B - \hat{\beta} \big)}{\sqrt{\hat{\Omega}_{ES}^B}},
\]
where $\hat{\beta}^B = \sum_{t=1}^{T} y_t^B / T_{ES}$ and $\hat{\Omega}_{ES}^B = T_{ES}^{-1} \sum_{t=1}^{T_{ES}} \sum_{s=1}^{T_{ES}} k\big( |t - s| / [b T_{ES}] \big) \hat{v}_t^{ES,B} \hat{v}_s^{ES,B}$ with $\hat{v}_t^{ES,B} = y_t^{ES,B} - \hat{\beta}^B$. Then $t_c^{ES,boot,l}$ is the 0.025 quantile and $t_c^{ES,boot,r}$ is the 0.975 quantile of $t_T^{ES,B}$ over $B = 1, \ldots, 999$.

Figures 2.35-2.64 show the empirical rejection probabilities computed from 10,000 replications using the ES regression model. For all four cases of missing processes defined in Section 2.4.1, the empirical rejection probabilities are computed using critical values obtained by the naive i.i.d. bootstrap and by simulating the fixed-$b$ limiting distribution in Kiefer and Vogelsang (2005). We can see that the ES regression model works reasonably well regardless of the method used to compute the critical values, even when a large portion of the data is missing. For example, consider the World Wars missing process. When $T = 36$ (yearly case, Figure 2.35) and $T = 144$ (monthly case, Figure 2.38), one third of the data are missing. In these two cases, for $\rho = 0, 0.3$, there are mild over-rejection problems if any. For $T = 36$, some over-rejection problems appear with $\rho = 0.6$ and become severe with $\rho = 0.9$. For $T = 144$, over-rejection problems are much less severe for $\rho = 0.6, 0.9$. Similar patterns hold for the initially scarce and Bernoulli missing processes. This over-rejection tendency when the data is highly correlated is something that is routinely found even when no observations are missing.
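The bootstrap scheme just described is short to implement. The sketch below is our own illustration, not code from the dissertation; the Bartlett kernel, the simple location model, and all function names are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def bartlett_tstat(y, center, b):
    """HAC robust t-statistic sqrt(T)*(mean(y) - center)/sqrt(Omega) in the location
    model, with a Bartlett kernel HAC estimator using bandwidth M = b * T."""
    T = len(y)
    v = y - np.mean(y)                                   # residuals y_t - beta_hat
    d = np.abs(np.arange(T)[:, None] - np.arange(T)[None, :])
    w = np.maximum(1.0 - d / (b * T), 0.0)               # Bartlett weights
    omega = v @ w @ v / T
    return np.sqrt(T) * (np.mean(y) - center) / np.sqrt(omega)

def naive_bootstrap_cvs(y_es, b, B=999):
    """Naive i.i.d. bootstrap critical values: resample T_ES observations with
    replacement, center each bootstrap statistic at the original sample mean."""
    b_hat = np.mean(y_es)
    T_es = len(y_es)
    tstats = np.empty(B)
    for i in range(B):
        yb = rng.choice(y_es, size=T_es, replace=True)
        tstats[i] = bartlett_tstat(yb, center=b_hat, b=b)
    return np.quantile(tstats, [0.025, 0.975])           # (t_c^l, t_c^r)
```

The test then compares `bartlett_tstat(y_es, 0.0, b)` on the original series against the returned pair of quantiles.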
In addition, we see in all of Figures 2.35-2.64 that the rejection rates from the two critical values are close together nearly all the time. Minor exceptions occur when the sample size is small and the latent process is highly correlated, in which case the two rejection rates show some differences. For the World War missing process with $T = 36$ (Figure 2.35), one third of the data are missing and thus we only have $T_{ES} = 24$. For initially scarce data with $N_Q = 12$ and $N_M = 12$ (Figure 2.41), we have 24 observations missing out of 46 ($T_{ES} = 22$). For the random and non-random Bernoulli(0.3) missing processes with $T = 50$ (Figure 2.47), we have $T_{ES} \approx 15$. It is when $T_{ES}$ is very small that we can see some difference between the two empirical rejection rates, but only when $\rho = 0.9$. This is not surprising because, with small $T_{ES}$ and $\rho$ close to 1, the asymptotic approximation provided by fixed-$b$ theory is likely to be inaccurate. When there is a difference between the two rejection probabilities, it is always the case that the naive i.i.d. bootstrap has the better size properties. See Figures 2.35, 2.41, 2.47 and 2.56. Ultimately, our simulation results suggest that in the presence of missing observations, one does well by ignoring the missing observations and computing critical values using the naive i.i.d. bootstrap.

2.5.4 Comparison of AM and ES Statistics

Figures 2.65-2.94 compare the empirical rejection probabilities of the AM series approach to those of the ES regression approach. In Section 2.4.3 we found that, for the AM series approach, the naive i.i.d. bootstrap conditional on the locations of missing observations always performs no worse than directly simulating the asymptotic fixed-$b$ critical values. This is true whether the missing process is random or non-random. Similarly, we found in Section 2.5.3 that the naive i.i.d. bootstrap always performs no worse than simulating the standard fixed-$b$ critical values for the ES regression approach as well. Hence, to make comparisons between the AM series approach and the ES regression approach, critical values are computed by the naive i.i.
d. bootstrap conditional on the locations of missing observations. For the most part the two approaches give similar rejections. However, the AM series approach has a tendency to outperform the ES regression approach when the latent process is highly serially correlated, and this tendency is stronger when the sample size is small. In other words, when the asymptotic theory is more likely to break down, it is more likely that the AM series approach outperforms the ES regression approach.

2.6 CONCLUSION

In this chapter we discussed the properties of HAC robust test statistics in time series regression settings when there is missing data. We considered two regression models, the AM series and the ES regression model, both for random and for non-random missing processes. Depending on the regression model used and on whether the missing process is random or non-random, HAC robust tests have different asymptotic limits. From simulation studies we find that the naive i.i.d. bootstrap is the most effective and practical way to obtain fixed-$b$ critical values in the presence of missing observations, especially when the bootstrap conditions on the locations of the missing data.

Figure 2.3: Missing due to World War I and World War II: Yearly data
[Timeline diagram: observed 1911-1913 (periods 1-3), missing 1914-1918 (periods 4-8), observed 1919-1938 (periods 9-28), missing 1939-1945 (periods 29-35), observed from 1946 onward.]

Figure 2.4: Initially Scarce Data
[Timeline diagram: the first $N_Q$ observations are quarterly, with two missing months between consecutive observations, so the last quarterly observation falls at period $(N_Q - 1)\cdot 3 + 1$;
monthly data ($N_M$ observations) then follow the quarterly observations, giving a full span of $(N_Q - 1)\cdot 3 + 1 + N_M$ periods.]

[Figures 2.5-2.34 (AM series), Figures 2.35-2.64 (ES regression model), and Figures 2.65-2.90 (AM and ES compared), omitted here, plot empirical rejection probabilities with the Bartlett kernel for each missing process: World War (yearly, $T = 36, 48, 60$; quarterly, $T = 144, 192, 240$), initially scarce data ($N_Q \in \{12, 24\}$, $N_M \in \{12, 24, 48\}$), conditional Bernoulli ($p = 0.3, 0.5, 0.7$; $T = 50, 100, 200$), and random Bernoulli ($p = 0.3, 0.5, 0.7$; $T = 50, 100, 200$).]
[Figures 2.91-2.94, omitted here, complete the AM and ES comparison for the random Bernoulli missing process ($p = 0.7$, $T = 100$; $p = 0.3, 0.5, 0.7$, $T = 200$), Bartlett kernel.]

CHAPTER 3

INFERENCE IN TIME SERIES MODELS USING SMOOTHED CLUSTERED STANDARD ERRORS

3.1 INTRODUCTION

This chapter proposes a long run variance estimator for conducting inference in time series regression models that combines the traditional nonparametric kernel approach (Newey and West, 1987; Andrews, 1991) with a cluster approach (Bester et al., 2011). The basic idea is to divide the time periods into non-overlapping clusters with equal numbers of observations. The long run variance estimator is constructed by aggregating within clusters and then kernel smoothing across clusters. This approach is similar in spirit to the approach proposed by Driscoll and Kraay (1998) in panel settings. Under the assumption that the time series data is weakly dependent and covariance stationary, we develop an asymptotic theory for test statistics based on this "smoothed clustered" long run variance estimator. We derive asymptotic results holding the number of clusters fixed and also treating the number of clusters as increasing with the sample size. Our large number of clusters results are closely linked to the fixed-$b$ results obtained by Vogelsang (2012) for Driscoll and Kraay (1998) statistics in panel settings. We show that in the large number of clusters setting, robust test statistics follow the standard fixed-$b$ limits obtained by Kiefer and Vogelsang (2005), assuming that the kernel bandwidth is treated as a fixed proportion of the sample size. In contrast, for the fixed number of clusters case, we obtain a different asymptotic limit. While one might expect the relative accuracy of the two asymptotic approximations to depend on the number of clusters relative to the sample size, we find in
a simulation study that the "fixed number of clusters" asymptotic approximation works well whether the number of clusters is small or large. The simulations also suggest that the naive i.i.d. bootstrap mimics the fixed number of clusters critical values.

The motivation for clustering before kernel smoothing is as follows. Averaging within clusters works well even when serial correlation is relatively strong within clusters. Given our weak dependence and covariance stationarity assumptions, within-cluster averages will be asymptotically independent. But in finite samples the cluster averages will be correlated, and kernel smoothing can help to reduce finite sample over-rejection problems. In fact, we find in our finite sample simulations that clustering before kernel smoothing does reduce over-rejections caused by strong serial correlation without a great cost in terms of power.

The rest of the chapter is organized as follows. In the next section the model is given and the long run variance estimator is defined. Section 3.3 lays out the inference problem and provides asymptotic results for test statistics based on the smoothed clustered long run variance estimator. Section 3.4 explores the finite sample properties of the test statistics in a simple location model. Proofs are given in Appendix E. The case where the number of groups does not evenly divide the sample is discussed in Appendix F.

3.2 MODEL AND CLUSTERED SMOOTHED STANDARD ERRORS

Consider the time series regression model
\[
y_t = x_t' \beta + u_t, \quad t = 1, \ldots, T,
\]
where $\beta$ is a $(k \times 1)$ vector of regression parameters, $x_t$ is a $(k \times 1)$ vector of regressors, and $u_t$ is a mean zero error process. The ordinary least squares (OLS) estimator of $\beta$ is
\[
\hat{\beta} = \left( \sum_{t=1}^{T} x_t x_t' \right)^{-1} \sum_{t=1}^{T} x_t y_t.
\]
Divide the time periods into $G$ contiguous, non-overlapping groups of equal size $n_G$ such that $T = n_G G$.
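The rationale for this partition is the one given above: sums of a weakly dependent series over adjacent clusters are nearly uncorrelated once the clusters are long relative to the serial dependence. A quick Monte Carlo check of that claim (our own illustration, not part of the chapter; the AR(1) design and function name are our assumptions):

```python
import numpy as np

def adjacent_cluster_corr(rho, n_g, reps=1500, seed=1):
    """Monte Carlo correlation between the sums of two adjacent clusters
    of a stationary AR(1) process u_t = rho * u_{t-1} + e_t."""
    rng = np.random.default_rng(seed)
    pairs = np.empty((reps, 2))
    for i in range(reps):
        e = rng.standard_normal(2 * n_g)
        u = np.empty_like(e)
        u[0] = e[0] / np.sqrt(1.0 - rho**2)   # start at the stationary distribution
        for t in range(1, 2 * n_g):
            u[t] = rho * u[t - 1] + e[t]
        pairs[i] = u[:n_g].sum(), u[n_g:].sum()   # the two cluster sums
    return np.corrcoef(pairs.T)[0, 1]
```

For example, with $\rho = 0.5$ the correlation is noticeable for short clusters ($n_G = 5$) but close to zero for long ones ($n_G = 50$), consistent with within-cluster averages being asymptotically independent.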
Rewriting the OLS estimator $\hat{\beta}$ using group notation,
\[
\hat{\beta} = \left( \sum_{g=1}^{G} \sum_{t=(g-1) n_G + 1}^{g n_G} x_t x_t' \right)^{-1} \sum_{g=1}^{G} \sum_{t=(g-1) n_G + 1}^{g n_G} x_t y_t. \tag{3.1}
\]
Conceptually, this way of rewriting $\hat{\beta}$ can be viewed as the outcome of rearranging the data into $G$ time periods with $n_G$ "cross-section" units per time period, resulting in a panel data structure. From this panel perspective, $\hat{\beta}$ in (3.1) is exactly the pooled OLS estimator of $\beta$. Plugging in for $y_t$ gives
\[
\hat{\beta} - \beta = \left( \sum_{g=1}^{G} \sum_{t=(g-1) n_G + 1}^{g n_G} x_t x_t' \right)^{-1} \sum_{g=1}^{G} \sum_{t=(g-1) n_G + 1}^{g n_G} x_t u_t = \left( \sum_{g=1}^{G} \sum_{t=(g-1) n_G + 1}^{g n_G} x_t x_t' \right)^{-1} \sum_{g=1}^{G} \sum_{t=(g-1) n_G + 1}^{g n_G} v_t, \tag{3.2}
\]
where $v_t = x_t u_t$. Using the panel perspective, we can directly apply the variance-covariance matrix estimator proposed by Driscoll and Kraay (1998) as follows. Let $\hat{v}_t = x_t \hat{u}_t$, where $\hat{u}_t = y_t - x_t' \hat{\beta}$ are the OLS residuals. Define
\[
\hat{v}_g = \sum_{t=(g-1) n_G + 1}^{g n_G} \hat{v}_t, \quad g = 1, \ldots, G,
\]
which is the sum of $\hat{v}_t$ within group $g$. Compute the nonparametric kernel HAC estimator using $\hat{v}_g$ for $g = 1, 2, \ldots, G$ as
\[
\hat{\Omega} = \hat{\Gamma}_0 + \sum_{j=1}^{G-1} k\left( \frac{j}{M} \right) \left( \hat{\Gamma}_j + \hat{\Gamma}_j' \right),
\]
where $\hat{\Gamma}_j = G^{-1} \sum_{g=j+1}^{G} \hat{v}_g \hat{v}_{g-j}'$ are the sample autocovariances of $\hat{v}_g$. Here, $k(x)$ is a kernel function such that $k(x) = k(-x)$, $k(0) = 1$, $|k(x)| \leq 1$, $k(x)$ is continuous at $x = 0$,

[Footnote 1: Cases where $G$ does not evenly divide $T$ are easily handled, but the notation is more tedious. See Appendix F.]
$\int_{-\infty}^{\infty} k^2(x)\, dx < \infty$, and $M$ is the bandwidth parameter. Using well known algebra, we can rewrite $\hat{\Omega}$ as
\[
\hat{\Omega} = \frac{1}{G} \sum_{g=1}^{G} \sum_{h=1}^{G} k\left( \frac{|g - h|}{M} \right) \hat{v}_g \hat{v}_h',
\]
which we call the "cluster then HAC" variance-covariance matrix estimator, or CHAC for short. Notice that the CHAC estimator gives full weight to observations within clusters, a feature that the usual nonparametric kernel HAC estimator does not have. Smoothing across clusters accounts for finite sample serial correlation across clusters, which is a generalization of the cluster estimator proposed by Bester, Conley, and Hansen (2011). Note that the Bester, Conley, and Hansen (2011) estimator is the special case of $\hat{\Omega}$ obtained when $\hat{\Omega} = \hat{\Gamma}_0$, i.e. when zero weight is imposed across clusters. Also note that when $G = T$ and $n_G = 1$, the CHAC estimator becomes the usual kernel HAC estimator. Therefore, the CHAC estimator is more general and nests both the traditional approach and the time series cluster approach.

Using $\hat{\Omega}$ as the middle term of a sandwich variance for $\hat{\beta}$, we obtain the finite sample variance-covariance matrix
\[
\hat{V}_{CHAC} = G \left( \sum_{g=1}^{G} \sum_{t=(g-1) n_G + 1}^{g n_G} x_t x_t' \right)^{-1} \hat{\Omega} \left( \sum_{g=1}^{G} \sum_{t=(g-1) n_G + 1}^{g n_G} x_t x_t' \right)^{-1}.
\]

3.3 INFERENCE AND ASYMPTOTIC THEORY

This section defines test statistics for testing linear restrictions on the $\beta$ vector and derives the asymptotic null behavior of the tests. Results for large-$G$, fixed-$n_G$ and large-$n_G$, fixed-$G$ are treated separately, as they require different regularity conditions. Throughout, the symbol "$\Rightarrow$" denotes weak convergence of a sequence of stochastic processes to a limiting stochastic process.

We consider testing the null hypothesis $H_0: R\beta = r$ against $H_A: R\beta \neq r$, where $R$ is a $q \times k$ matrix of known constants with full rank, with $q \leq k$, and $r$ is a $q \times 1$ vector of
Vogelsang(2012)providedconditionsunderwhichthe b limitsareequivalenttothe standard b limitsobtainedbyKieferandVogelsang(2005)inpuretimeseriesset- tings.Giventhenaturalsimilaritiesbetween W CHAC and t CHAC andthepanelstatis- tics,itisnotsurprisingthatthelarge- G , n G limitsof W CHAC and t CHAC follow thestandard b limitsundersuitableregularityconditions.Theasymptotictheoryin Vogelsang(2012)mainlyreliesonweakdependenceandcovariancestationarityintime dimension.Inourmodelbecausewedividethepuretimeseriesintonon-overlapping clusters,aslongastheoriginaltimeseriesweakdependenceandcovariancesta- tionarity,theregularityconditionsusedbyVogelsang(2012)holdinourmodelaswell. v g = å gn G t =( g 1 ) n G + 1 v t .Wemakethefollowingassumptions. AssumptionA. 1. n G isanumberandG = n G T. 2. Forr 2 ( 0,1 ] ,G 1 å [ rG ] g = 1 å gn G t =( g 1 ) n G + 1 x t x 0 t ) rQ c .Q c isnon-singular. 3. E ( v g )= 0 andG 1 / 2 å [ rG ] g = 1 v g ) L c W k ( r ) ,where W k ( r ) isank 1 vectorofinde- pendentstandardWienerprocessesand L c L 0 c = W c isthek klongrunvariancematrix ( 2 p timesthezerofrequencyspectraldensitymatrix)of v g . 447 AssumptionA1isstatingthatweareconsideringthelarge- G , n G case.As- sumptionsA2-A3aretheusualhighlevelassumptionsusedtoobtain b asymptotic results.Notethat 1 G [ rG ] å g = 1 gn G å t =( g 1 ) n G + 1 x t x 0 t = 1 G [ rG ] n G å t = 1 x t x 0 t = n G T [ r n G T ] n G å t = 1 x t x 0 t , wherethesecondequalityisobtainedbypluggingin G = T / n G .Ifthesecondmomentof x t alawoflargenumbers(LLN)uniformlyin r ,i.e. T 1 å [ rT ] t = 1 x t x 0 t ) rQ ,then AssumptionA2iswith Q c = n G Q because ( n G / T ) å [ r / n G T ] n G t = 1 x t x 0 t isasymptot- icallyequivalentto ( n G / T ) å [ rT ] t = 1 x t x 0 t .AssumptionA3statesthatthefunctionalcentral limittheorem(FCLT)holdsforthescaledpartialsumsof v g .AswithassumptionA2,we canwrite G 1 / 2 [ rG ] å g = 1 v g = 1 G [ rG ] å g = 1 gn G å t =( g 1 ) n G + 1 v t = n G 1 / 2 T 1 / 2 [ r n G T ] n G å t = 1 v t . 
If $v_t$ itself follows a FCLT so that $T^{-1/2}\sum_{t=1}^{[rT]} v_t \Rightarrow \Lambda W_k(r)$, then Assumption A3 is satisfied with $\Lambda_c\Lambda_c' = n_G\Lambda\Lambda'$ because $n_G^{1/2}T^{-1/2}\sum_{t=1}^{[r n_G^{-1}T]n_G} v_t$ is asymptotically equivalent to $n_G^{1/2}T^{-1/2}\sum_{t=1}^{[rT]} v_t$. If we are making primitive assumptions for a FCLT, such as $v_t$ being a mean zero $\delta$-order (for some $\delta > 2$) covariance stationary process that is $\alpha$-mixing of size $-\beta/(\beta-2)$,² then $v_g$ is also a mean zero $\delta$-order (for some $\delta > 2$) covariance stationary process that is $\alpha$-mixing of the same size because finite sums ($n_G < \infty$) of $\alpha$-mixing processes are also $\alpha$-mixing with the same size.³ Therefore, if a FCLT holds for $v_t$ then it will hold for $v_g$. In general, Assumptions A2-A3 are slightly weaker than assumptions usually used to obtain fixed-$b$ results and are sufficient for the following theorem. The proof follows directly from Vogelsang (2012, Theorem 1).

²Phillips and Durlauf (1986) provide sufficient conditions for $v_t$ to satisfy a FCLT.
³See White (2001).

Theorem 3.1. Let $h > 0$ be an integer and let $B_h(r)$ denote a generic $h \times 1$ vector of stochastic processes. Define the random matrix $P(b, B_h)$ as follows for $b \in (0,1]$.

Case (i): if $k(x)$ is twice continuously differentiable everywhere,
\[
P\left(b, B_h\right) = -\int_0^1\!\!\int_0^1 \frac{1}{b^2}k''\!\left(\frac{r-s}{b}\right)B_h(r)B_h(s)'\,dr\,ds,
\]

Case (ii): if $k(x)$ is continuous, $k(x) = 0$ for $|x| \geq 1$, and $k(x)$ is twice continuously differentiable everywhere except for $|x| = 1$,
\[
P\left(b, B_h\right) = -\iint_{|r-s|<b} \frac{1}{b^2}k''\!\left(\frac{r-s}{b}\right)B_h(r)B_h(s)'\,dr\,ds + \frac{k'_-(1)}{b}\int_0^{1-b}\left[B_h(r+b)B_h(r)' + B_h(r)B_h(r+b)'\right]dr,
\]
where $k'_-(1) = \lim_{h\to 0}\left[(k(1) - k(1-h))/h\right]$,

Case (iii): if $k(x)$ is the Bartlett kernel,
\[
P\left(b, B_h\right) = \frac{2}{b}\int_0^1 B_h(r)B_h(r)'\,dr - \frac{1}{b}\int_0^{1-b}\left[B_h(r+b)B_h(r)' + B_h(r)B_h(r+b)'\right]dr.
\]

(a) Then under Assumption A, as $G \to \infty$,
\[
\sqrt{G}\left(\hat{\beta} - \beta\right) \Rightarrow Q_c^{-1}\Lambda_c W_k(1).
\]

(b) Let $\widetilde{W}_k(r)$ denote a $k \times 1$ vector of stochastic processes defined as $\widetilde{W}_k(r) = W_k(r) - rW_k(1)$ for all $r \in (0,1]$. Assume $M = bG$ where $b \in (0,1]$ is fixed. Then, under Assumption A and $H_0$, as $G \to \infty$ with $n_G$ fixed,
\[
W_{CHAC} \Rightarrow W_q(1)'\left[P\left(b, \widetilde{W}_q\right)\right]^{-1}W_q(1),
\]
or, if there is one restriction ($q = 1$),
\[
t_{CHAC} \Rightarrow \frac{W_1(1)}{\sqrt{P\left(b, \widetilde{W}_1\right)}}.
\]

3.3.2 Fixed-$G$, Large-$n_G$ Results

When the number of clusters is fixed, the LLN and FCLT work within the clusters rather than across the clusters. To obtain an asymptotically pivotal result, it is sufficient for the LLN and FCLT to hold for the original time series. Consider the following assumption:

Assumption B.
1. $G$ is fixed and $n_G = G^{-1}T$.
2. For $r \in (0,1]$, $T^{-1}\sum_{t=1}^{[rT]} x_t x_t' \Rightarrow rQ$ and $Q$ is non-singular.
3. For $r \in (0,1]$, $T^{-1/2}\sum_{t=1}^{[rT]} v_t \Rightarrow \Lambda W_k(r)$, where $W_k(r)$ is a $k \times 1$ vector of independent standard Wiener processes and $\Omega = \Lambda\Lambda'$ is the $k \times k$ long run variance matrix ($2\pi$ times the zero frequency spectral density matrix) of $v_t$.

Assumption B1 states that we are considering the case where the number of clusters is fixed and the size of each cluster is increasing with $T$. Assumptions B2-B3 state that a law of large numbers applies to $T^{-1}\sum_{t=1}^{[rT]} x_t x_t'$ uniformly in $r$ and a FCLT applies to the scaled partial sums of $v_t$. These two assumptions are sufficient for fixed-$G$ asymptotic theory to go through when $G$ is fixed and $n_G \to \infty$. The following theorem states the asymptotic behavior of the OLS estimator, the CHAC estimator, and $W_{CHAC}$ ($t_{CHAC}$ when $q = 1$) when $G$ is fixed and $n_G \to \infty$. The proof is provided in Appendix E.

Theorem 3.2. Let $k > 0$ be an integer and let $B_k(r)$ denote a generic $k \times 1$ vector of stochastic processes. Define the random matrix $P(G, M, B_k)$ as follows:
\[
P(G, M, B_k) = \sum_{g=1}^{G-1}\sum_{h=1}^{G-1} B_k\!\left(\frac{g}{G}\right)\left[2k\!\left(\frac{|g-h|}{M}\right) - k\!\left(\frac{|g-h+1|}{M}\right) - k\!\left(\frac{|g-h-1|}{M}\right)\right]B_k\!\left(\frac{h}{G}\right)',
\]
and when $k(\cdot)$ is the Bartlett kernel, $P(G, M, B_k)$ can be further simplified as
\[
P(G, M, B_k) = \frac{2}{M}\sum_{g=1}^{G-1} B_k\!\left(\frac{g}{G}\right)B_k\!\left(\frac{g}{G}\right)' - \frac{1}{M}\sum_{g=1}^{G-M-1}\left[B_k\!\left(\frac{g}{G}\right)B_k\!\left(\frac{g+M}{G}\right)' + B_k\!\left(\frac{g+M}{G}\right)B_k\!\left(\frac{g}{G}\right)'\right].
\]

(a) Then under Assumption B, as $T \to \infty$ and $n_G \to \infty$,
\[
\sqrt{T}\left(\hat{\beta} - \beta\right) \Rightarrow Q^{-1}\Lambda W_k(1),
\]

(b) Let $\widetilde{W}_k(r)$ denote a $k \times 1$ vector of stochastic processes defined as $\widetilde{W}_k(r) = W_k(r) - rW_k(1)$ for all $r \in (0,1]$. Under Assumption B, for fixed $G$ as $n_G \to \infty$,
\[
\frac{G}{T}\widehat{\Omega} \Rightarrow \Lambda P\left(G, M, \widetilde{W}_k\right)\Lambda',
\]

(c) and under $H_0$, as $T \to \infty$ and $n_G \to \infty$,
\[
W_{CHAC} \Rightarrow W_q(1)'\left[P\left(G, M, \widetilde{W}_q\right)\right]^{-1}W_q(1),
\]
and when $q = 1$,
\[
t_{CHAC} \Rightarrow \frac{W_1(1)}{\sqrt{P\left(G, M, \widetilde{W}_1\right)}}.
\]

The fixed-$G$ asymptotic approximation of $W_{CHAC}$ in Theorem 3.2(c) is different from the fixed-$b$ asymptotic approximation found in Theorem 3.1(b), which is the usual fixed-$b$ limit in Kiefer and Vogelsang (2005). In fact, from Bester et al. (2011) we know that when $M = 1$ and a truncating kernel is used, the fixed-$G$ limit of $W_{CHAC}$ in Theorem 3.2(c) simplifies to $Gq/(G-q)\,F_{q,G-q}$, and when $q = 1$ the limit of $t_{CHAC}$ simplifies to $\sqrt{G/(G-1)}\,t_{G-1}$.

Table 3.1 tabulates the asymptotic critical values of the fixed-$G$ limit for the case of the Bartlett kernel. The critical values were obtained via simulation methods. The Wiener processes in the limits were approximated by scaled partial sums of 1,000 independent standard normal random variables, and 50,000 replications were used. We see from Table 3.1 that as the number of clusters, $G$, gets smaller and/or the bandwidth, $M$, gets larger, the tails of the distribution become fatter. As $G$ decreases and/or $M$ increases, less down-weighting is used when calculating CHAC, and it is well known from the fixed-$b$ literature that less down-weighting leads to fatter tails of test statistics because of systematic downward bias in the variance estimator.

Generally speaking, it is well known that using less down-weighting in conjunction with fixed-$b$ critical values tends to alleviate over-rejection problems caused by strong serial correlation. The standard HAC estimator can reduce down-weighting only by increasing $M$ (for a given kernel). CHAC can reduce down-weighting not only by increasing $M$ but also by increasing the number of observations per cluster, i.e. by decreasing $G$. This additional flexibility in down-weighting gives the CHAC approach the ability to reduce size distortions with less loss in power than the original HAC approach. We compare the relative performance of the two weighting schemes, i.e. the choice between HAC and CHAC, using a simulation study in the next section.
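A simulation of the kind used to produce Table 3.1 can be sketched as follows. This is a minimal version with a name and default replication counts of our own choosing; it draws from the fixed-$G$ limit of $t_{CHAC}$ in Theorem 3.2(c) for the Bartlett kernel, approximating the Wiener process by scaled partial sums of standard normals:

```python
import numpy as np

def fixed_G_tstat_draws(G, M, reps=20000, steps=1000, seed=0):
    """Draws from the fixed-G limit W_1(1) / sqrt(P(G, M, Wtilde_1)) for the
    Bartlett kernel, with the Wiener process approximated by scaled partial
    sums of `steps` i.i.d. standard normals."""
    rng = np.random.default_rng(seed)
    draws = np.empty(reps)
    grid = (np.arange(1, G) * steps) // G              # grid indices of g/G
    for i in range(reps):
        z = rng.standard_normal(steps)
        W = np.cumsum(z) / np.sqrt(steps)              # W(r), r = 1/steps,...,1
        Wt = W - (np.arange(1, steps + 1) / steps) * W[-1]   # Brownian bridge
        B = Wt[grid - 1]                               # B(g/G), g = 1,...,G-1
        P = 2.0 / M * np.sum(B * B)                    # Bartlett form of P
        if G - M - 1 >= 1:                             # cross terms, if any
            P -= 2.0 / M * np.sum(B[:G - M - 1] * B[M:])
        draws[i] = W[-1] / np.sqrt(P)
    return draws
```

As a sanity check on the limit itself: for $G = 2$, $M = 1$ the limit collapses to $W_1(1)/(\sqrt{2}\,|\widetilde{W}_1(1/2)|)$, which is $\sqrt{2}$ times a standard Cauchy random variable, so its 97.5% quantile is about 17.97, consistent with the 17.942 entry in Table 3.1 up to simulation error.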
3.4 FINITE SAMPLE PERFORMANCE

3.4.1 Empirical Rejection Probabilities

In this section we examine the finite sample performance of the robust test statistics based on the CHAC estimator using both the fixed-$G$, large-$n_G$ approximation and the large-$G$, fixed-$n_G$ approximation. Here we focus on the Bartlett kernel. When $G = T$, it follows that $n_G = 1$ and the CHAC estimator reduces to the usual HAC estimator without clustering, and when we use $M = 1$, the CHAC estimator reduces to the pure clustering approach of Bester et al. (2011). Therefore, we can make direct comparisons with those two existing approaches in our results.

We focus on the simple location model
\[
y_t = \beta + u_t, \qquad u_t = \rho u_{t-1} + \varepsilon_t + \theta\varepsilon_{t-1}, \tag{3.3}
\]
where $u_0 = \varepsilon_0 = 0$, $\varepsilon_t \sim i.i.d.\ N(0,1)$, $\rho \in \{-0.5, 0, 0.5, 0.8, 0.9\}$, and $\theta \in \{-0.5, 0, 0.5\}$. We set $\beta = 0$. Results are given for sample size $T = 60$ and number of clusters $G \in \{2,3,4,5,6,10,12,15,30,60\}$. Note that these values of $G$ are factors of 60 and so the clusters evenly divide the sample. With this data generating process we test the null hypothesis that $\beta = 0$ against the alternative $\beta \neq 0$ at a nominal level of 5%. When computing the CHAC estimator, we use the Bartlett kernel with $M \in \{1,2,\ldots,9,10,12,15,30,40,50,60\}$. For the simple location model the CHAC based t-test is computed as
\[
t_{CHAC} = \frac{\hat{\beta}}{\sqrt{G\left(\sum_{t=1}^T x_t^2\right)^{-1}\widehat{\Omega}\left(\sum_{t=1}^T x_t^2\right)^{-1}}} = \frac{\hat{\beta}}{\sqrt{\frac{G}{T^2}\widehat{\Omega}}},
\]
where
\[
\hat{\beta} = \left(\sum_{t=1}^T x_t^2\right)^{-1}\sum_{t=1}^T x_t y_t = T^{-1}\sum_{t=1}^T y_t
\quad\text{and}\quad
\widehat{\Omega} = \frac{1}{G}\sum_{g=1}^{G}\sum_{h=1}^{G} k\!\left(\frac{|g-h|}{M}\right)\hat{v}_g\hat{v}_h,
\]
where $\hat{v}_g = \sum_{t=(g-1)n_G+1}^{gn_G}\hat{v}_t$ with $\hat{v}_t = y_t - \hat{\beta}$.
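The Monte Carlo design just described can be sketched in a few lines. The function names below are ours, and this minimal version hard-codes the Bartlett kernel:

```python
import numpy as np

def simulate_arma_location(T, rho, theta, rng):
    """y_t = beta + u_t with beta = 0, u_t = rho*u_{t-1} + e_t + theta*e_{t-1},
    u_0 = e_0 = 0, and e_t ~ i.i.d. N(0, 1)."""
    e = rng.standard_normal(T + 1)
    e[0] = 0.0
    u = np.zeros(T + 1)
    for t in range(1, T + 1):
        u[t] = rho * u[t - 1] + e[t] + theta * e[t - 1]
    return u[1:]

def t_chac(y, G, M):
    """CHAC-based t-statistic for H0: beta = 0 in the location model,
    t = betahat / sqrt((G / T^2) * Omegahat), Bartlett kernel."""
    T = y.size
    n_G = T // G
    betahat = y.mean()
    vbar = (y - betahat)[:G * n_G].reshape(G, n_G).sum(axis=1)  # cluster sums
    g = np.arange(G)
    W = np.maximum(1.0 - np.abs(g[:, None] - g[None, :]) / M, 0.0)
    Omega = (vbar @ W @ vbar) / G
    return betahat / np.sqrt(G / T**2 * Omega)
```

With $G = 60$ and $M = 1$ the statistic is just the usual no-clustering t-ratio $\sqrt{T}\,\bar{y}$ divided by the root mean square of the residuals.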
We reject the null hypothesis whenever $|t_{CHAC}| > t_c$ (or reject the null whenever $t_{CHAC} < t_c^l$ or $t_{CHAC} > t_c^r$ if $t_c^l \neq -t_c^r$), where $t_c$ is a critical value. Using 10,000 replications, we compute empirical rejection probabilities. From Theorem 3.1(b), we know that under large-$G$, fixed-$n_G$ asymptotic theory, $t_c = t_c^{large\text{-}G}$ is the 97.5% percentile of the standard fixed-$b$ asymptotic distribution with $b = M/G$. Under fixed-$G$, large-$n_G$ asymptotic theory, $t_c = t_c^{fixed\text{-}G}$ is the 97.5% percentile of the distribution derived in Theorem 3.2(c). We obtain $t_c^{large\text{-}G}$ and $t_c^{fixed\text{-}G}$ by simulating the corresponding distributions, which is possible because both distributions are functions of Brownian motion.

In addition, we use the bootstrap to obtain critical values. We consider the naive moving block bootstrap with block size $l = n_G$ so that the block lengths used in the resampling match the cluster sizes used to compute $\widehat{\Omega}$. We also use block size $l = 1$ (the i.i.d. bootstrap). Specific details about computing bootstrap critical values are as follows. Let the vector $w_t = (y_t, x_t')'$ collect the dependent and explanatory variables (here $x_t = 1$). Let $B_{t,n_G} = \{w_t, w_{t+1}, \ldots, w_{t+n_G-1}\}$ be the block of $n_G$ consecutive observations starting at $w_t$. Draw $G$ blocks randomly with replacement from the set of overlapping blocks $\{B_{1,n_G}, \ldots, B_{T-n_G+1,n_G}\}$ and obtain a bootstrap resample of size $T$. Repeating this 999 times, we obtain 999 bootstrap resamples, which we denote $w_t^B = (y_t^B, x_t^{B\prime})'$, $t = 1,\ldots,T$, $B = 1,\ldots,999$. For the i.i.d. bootstrap we resample $T$ observations from the original observations with replacement and, repeating this 999 times, we again obtain 999 bootstrap resamples. For each bootstrap resample we compute the naive bootstrap test statistic
\[
t_{CHAC}^B = \frac{\hat{\beta}^B - \hat{\beta}}{\sqrt{\frac{G}{T^2}\widehat{\Omega}^B}},
\]
where $\hat{\beta}^B = \sum_{t=1}^T y_t^B / T$ is the OLS estimator of the $B$th bootstrap resample and
\[
\widehat{\Omega}^B = \frac{1}{G}\sum_{g=1}^{G}\sum_{h=1}^{G} k\!\left(\frac{|g-h|}{M}\right)\hat{v}_g^B\hat{v}_h^B,
\]
where $\hat{v}_g^B = \sum_{t=(g-1)n_G+1}^{gn_G}\hat{v}_t^B$ with $\hat{v}_t^B = y_t^B - \hat{\beta}^B$. Then the bootstrap critical values $\{t_c^l, t_c^r\}$ are the 0.025 and 0.975 quantiles of $t_{CHAC}^B$, $B = 1,\ldots,999$, respectively. We denote the critical values obtained from the $n_G$ block bootstrap by $\{t_c^{l\text{-}block}, t_c^{r\text{-}block}\}$ and from the i.i.d. bootstrap by $\{t_c^{l\text{-}i.i.d.}, t_c^{r\text{-}i.i.d.}\}$.

Gonçalves and Vogelsang (2011) showed that the naive moving block bootstrap with block length fixed, or increasing but more slowly than the sample size ($l^2/T \to 0$), has the same limiting distribution as the fixed-$b$ asymptotic distribution. This equivalence is mainly due to the fact that bootstrap resamples generated from the moving block bootstrap, which we denote $(y_t^*, x_t^{*\prime})$, satisfy (a$^*$) $T^{-1}\sum_{t=1}^{[rT]} x_t^* x_t^{*\prime} \Rightarrow rQ^*$ and (b$^*$) $T^{-1/2}\sum_{t=1}^{[rT]} v_t^* \Rightarrow \Lambda^* W_k(r)$ for some $Q^*$ and $\Lambda^*$, where convergence is with respect to $p^*$, the probability measure induced by the bootstrap resampling, conditional on a realization of the original time series. Our asymptotic theory framework and the test statistics are not exactly the same as the ones considered in Gonçalves and Vogelsang (2011). However, the results in Gonçalves and Vogelsang (2011) can still be applied. Recall that in Section 3.3.1 (large-$G$), we pointed out that the original time series satisfying conditions (a) $T^{-1}\sum_{t=1}^{[rT]} x_t x_t' \Rightarrow rQ$ and (b) $T^{-1/2}\sum_{t=1}^{[rT]} v_t \Rightarrow \Lambda W_k(r)$ is sufficient for Assumptions A2 and A3. Then, if the bootstrap resamples satisfy (a$^*$) and (b$^*$), it follows from Theorem 3.1(b) that the asymptotic distribution of $t_{CHAC}^B$ is the standard fixed-$b$ limit evaluated at $b = M/G$. Because the asymptotic distribution in Theorem 3.1(b) is pivotal with respect to $\Lambda$ and $Q$, $t_{CHAC}^B$ and $t_{CHAC}$ have the same limiting distributions. Similarly for Theorem 3.2: because conditions (a$^*$) and (b$^*$) are the same as Assumptions B2 and B3, when we treat $G$ as fixed, $t_{CHAC}^B$ will have the fixed-$G$ distribution as in Theorem 3.2(c), because the fixed-$G$ asymptotic distributions are pivotal with respect to $Q, \Lambda$ and $Q^*, \Lambda^*$ respectively. Therefore, if the bootstrap resamples satisfy (a$^*$) and (b$^*$), the critical values computed from the bootstrap will be first order asymptotically equivalent to those of $t_{CHAC}$ in the fixed-$b$ sense. See Gonçalves and Vogelsang (2011) for sufficient conditions on the original time series for the bootstrap resamples to satisfy (a$^*$) and (b$^*$). The required conditions are similar to the usual weak dependence assumptions required for the fixed-$b$ asymptotic theory in Kiefer and Vogelsang (2005) to go through. Therefore, we can conjecture that when $G$ is small the bootstrap will mimic the fixed-$G$, large-$n_G$ critical values, and when $G$ is large it will mimic the large-$G$, fixed-$n_G$ critical values.

Tables 3.2-3.3 report empirical null rejection probabilities for $t_{CHAC}$. Table 3.2 reports rejections using large-$G$, fixed-$n_G$ critical values. Table 3.3 reports the rejection probabilities using fixed-$G$, large-$n_G$ critical values. Because we are using the Bartlett kernel (which truncates), when $M = 1$, $t_{CHAC} \xrightarrow{d} \sqrt{G/(G-1)}\,t_{G-1}$ if $G$ is fixed, following Bester et al. (2011).

Examining the rejections in Tables 3.2-3.3, it is clear that the fixed-$G$ asymptotic approximation has better size properties than the large-$G$ asymptotic approximation regardless of $G$. When $G$ is small, the fixed-$G$ asymptotic approximation works well in terms of size across all $\rho$, $\theta$ combinations and $M$. When $\rho = 0.9$ and $\theta = 0.5$, using $G = 2$, the fixed-$G$ critical value delivers empirical rejection probabilities of 0.06 for both bandwidth values $M = 1, 2$. A null rejection of 0.06, which is very close to the nominal level 0.05, is impressive given the strong serial correlation and relatively small value of $T$. When $G$ is large, the fixed-$G$ and large-$G$ asymptotic approximations have similar performance. Therefore, the use of fixed-$G$ critical values is a good idea for all values of $G$.
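The bootstrap critical values described above can be sketched as follows; this is a minimal version for the location model with function names of our own choosing. Setting `block = 1` gives the i.i.d. bootstrap and `block = n_G` gives the overlapping moving block bootstrap, with the bootstrap statistic centered at the full-sample mean:

```python
import numpy as np

def t_chac_centered(y, center, G, M):
    """CHAC t-statistic for the location model, centered at `center`
    (0 for the original sample, ybar for bootstrap resamples)."""
    T = y.size
    n_G = T // G
    betahat = y.mean()
    vbar = (y - betahat)[:G * n_G].reshape(G, n_G).sum(axis=1)
    g = np.arange(G)
    W = np.maximum(1.0 - np.abs(g[:, None] - g[None, :]) / M, 0.0)  # Bartlett
    Omega = (vbar @ W @ vbar) / G
    return (betahat - center) / np.sqrt(G / T**2 * Omega)

def bootstrap_critical_values(y, G, M, block, B=999, seed=0):
    """Naive bootstrap critical values: the 0.025 and 0.975 quantiles of
    t^B over B resamples.  Assumes `block` evenly divides T."""
    rng = np.random.default_rng(seed)
    T = y.size
    ybar = y.mean()
    stats = np.empty(B)
    for b in range(B):
        if block == 1:                                   # i.i.d. bootstrap
            yb = rng.choice(y, size=T, replace=True)
        else:                                            # moving block bootstrap
            starts = rng.integers(0, T - block + 1, size=T // block)
            yb = np.concatenate([y[s:s + block] for s in starts])
        stats[b] = t_chac_centered(yb, ybar, G, M)
    return np.quantile(stats, 0.025), np.quantile(stats, 0.975)
```

Because the limiting distributions are pivotal, the resulting quantiles should be close to the simulated fixed-$G$ critical values when $G$ is small and to the fixed-$b$ critical values when $G$ is large, in line with the conjecture above.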
Tables 3.4-3.5 report empirical null rejection probabilities for $t_{CHAC}$ using the bootstrap critical values. Table 3.4 reports rejection probabilities using the overlapping $n_G$ block bootstrap. Table 3.5 reports rejection probabilities using the i.i.d. bootstrap. The obvious pattern is that the i.i.d. bootstrap and the fixed-$G$ rejection probabilities are nearly identical in all cases. This is true regardless of the value of $G$. In contrast, the large-$G$ critical values and the i.i.d. bootstrap only have similar finite sample performance once $G$ becomes large, $G = 30$ to 60. The performance of the block bootstrap depends on the strength of the serial correlation. Using middle sized blocks can result in less size distortion than either the i.i.d. bootstrap or the fixed-$G$ critical values when the serial correlation is strong. When serial correlation is weak, use of the block bootstrap can result in over-rejections that do not occur with the i.i.d. bootstrap (see the $\rho = 0$, $\theta = 0$ case). Similar comparisons between the block bootstrap and the i.i.d. bootstrap were found by Gonçalves and Vogelsang (2011) for the non-clustered HAC case.

It may seem surprising at first that (i) even when $G$ is large, the fixed-$G$ approximation works well, and (ii) the i.i.d. bootstrap mimics the fixed-$G$ limit even when $G$ is large, including the $G = T$ case. However, a closer look at the tabulated fixed-$G$ critical values in Table 3.1 indicates these results are not surprising. If we took the $G = 60$ critical values from Table 3.1 and compared them to the critical values tabulated by Kiefer and Vogelsang (2005), we would see that the critical values are very close to each other. This suggests that the critical values of the fixed-$G$ random variable approach those of the large-$G$ (i.e. standard fixed-$b$) random variable as $G$ increases. It is this apparent continuity in $G$ that explains the patterns in the finite sample simulations. The patterns are not so surprising upon closer examination of the theory.

The simulation study shows that using a relatively small value of $G$ can substantially reduce size distortions. In the next subsection we investigate the impact of the choice of $G$ and $M$ on power to assess the power loss that is expected to be incurred when controlling over-rejection problems.
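The size-adjusted power calculations used in the next subsection can be sketched as follows; this is a minimal version (function names and the small default replication counts are ours) that hard-codes the Bartlett kernel and AR(1) errors with $\theta = 0$:

```python
import numpy as np

def _simulate_t(beta, rho, T, G, M, reps, rng):
    """Draws of t_CHAC in the location model y_t = beta + u_t with AR(1)
    errors u_t = rho*u_{t-1} + e_t (Bartlett kernel)."""
    stats = np.empty(reps)
    n_G = T // G
    g = np.arange(G)
    W = np.maximum(1.0 - np.abs(g[:, None] - g[None, :]) / M, 0.0)
    for i in range(reps):
        e = rng.standard_normal(T)
        u = np.empty(T)
        u[0] = e[0]
        for t in range(1, T):
            u[t] = rho * u[t - 1] + e[t]
        y = beta + u
        bh = y.mean()
        vbar = (y - bh)[:G * n_G].reshape(G, n_G).sum(axis=1)
        Omega = (vbar @ W @ vbar) / G
        stats[i] = bh / np.sqrt(G / T**2 * Omega)
    return stats

def size_adjusted_power(beta, rho, T=60, G=6, M=1, reps=2000, seed=0):
    """Size-adjusted rejection probability: compare t_CHAC under the
    alternative to simulated 2.5%/97.5% finite sample null quantiles."""
    rng = np.random.default_rng(seed)
    null = _simulate_t(0.0, rho, T, G, M, reps, rng)
    lo, hi = np.quantile(null, [0.025, 0.975])
    alt = _simulate_t(beta, rho, T, G, M, reps, rng)
    return np.mean((alt < lo) | (alt > hi))
```

Sweeping `beta` over a grid in $(0, 7]$ traces out a size-adjusted power curve; the area above that curve, normalized by the maximum $\beta$, is the Type II error summary reported in Table 3.6.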
3.4.2 Size Adjusted Power

We now report some power calculations to investigate the impact of $G$ and $M$ on power. We compute size-adjusted power so that we can make power comparisons independent of over-rejection problems. The size-corrections we employ obviously cannot be used in empirical applications. We use the same data generating process as in Section 3.4.1. We set $\theta = 0$ and focus on AR(1) errors with $\rho \in \{0, 0.5, 0.8, 0.9\}$. As in Section 3.4.1, the null is $\beta = 0$ and the sample size is $T = 60$. Again we use $G \in \{2,3,4,5,6,10,12,15,30,60\}$. For a given value of $\rho$ and combination of $G$, $M$, we simulate the finite sample null critical values of $t_{CHAC}$ using 10,000 replications and then compute size-adjusted power by obtaining the rejection probabilities for a grid of values of $\beta \in (0,7]$, again using 10,000 replications.

Table 3.6 reports the area above the size adjusted power curve, which is conceptually the average Type II error across the alternatives. The area is divided by $\max\beta$ so that the total square area is normalized to 1. From Table 3.6 we can see the general power-size trade-off. Decreasing $G$ with $M$ fixed, or increasing $M$ with $G$ fixed, always increases Type II error, and we know from Tables 3.2-3.5 that over-rejection problems are decreasing in these scenarios. When $M$ and $G$ change together, it is more difficult to see clear patterns in the size-power trade-off. For example, if we want to compare the $G = 60$, $M = 30$ case (i.e. the usual HAC estimator with bandwidth equal to half the sample size) with the $G = 30$, $M = 9$ case when $\rho = 0.5$, the change in the power-size trade-off is not so obvious because decreasing $G$ will increase size while decreasing power whereas the decrease in $M$ has the opposite effect. From Table 3.6 we see that, between these two cases, the $G = 30$, $M = 9$ case has higher average power. More specifically, the Type II error for $G = 30$, $M = 9$ is 0.60 while for $G = 60$, $M = 30$ the Type II error is 0.63. Referring back to Table 3.5 for these two cases, notice that $G = 30$, $M = 9$ has size of 0.069 while $G = 60$, $M = 30$ has size of 0.070. Although the difference in size is small, there are improvements in both size and power from dividing the sample into 30 clusters. By further decreasing $G$ to 15 and $M$ to 3, the Type II error decreases to 0.595, but the size remains at 0.069. So, we have cases where dividing the time series into clusters and smoothing can reduce the over-rejection problem without a great cost in terms of power, and sometimes it is possible to increase power without inducing more over-rejections. Compared to the usual HAC approach, clustering with smoothing is usually a better option than simply moving around the bandwidth.

Figure 3.1 depicts power for some interesting cases where clustering and smoothing provide greater power while not increasing over-rejections, for $\rho \in \{0.5, 0.8, 0.9\}$. The benchmark case is the size-adjusted power curve for the $G = 60$, $M = 30$ case. The other combinations of $G$ and $M$ give tests with similar size to the $G = 60$, $M = 30$ case. We see that there indeed is room for improvement in power while holding size constant through clustering and smoothing. For all $\rho$, the $G = 60$ case (the usual HAC estimator) has the lowest power compared to other combinations of $G$ and $M$ that have similar size.

Figures 3.2-3.27 also compare the power-size tradeoff between the usual HAC estimator ($G = 60$) for a range of bandwidths and the CHAC estimator. The label for each power curve indicates the size of that test. Figure 3.2 considers the case of $\rho = 0.5$ and CHAC implemented with a range of values of $G$ but always with $M = 1$. The case where no clustering or smoothing is used is $G = 60$, $M = 1$, which serves as the benchmark (see the light blue line). Size is quite distorted in this case: 0.267. If a researcher wants to reduce this over-rejection using the usual HAC estimator, the researcher would need to increase $M$; these cases are depicted by the grey lines. On the other hand, with CHAC the researcher can choose to divide the time series into clusters (while still using $M = 1$) to reduce the over-rejection problem (see the pink lines). In the second graph of Figure 3.2, we can see that by increasing $M$ to 20 or larger the researcher reduces size to about 0.069 for the HAC test. All of these HAC tests are dominated by using CHAC with $G = 6$ and $M = 1$ in terms of both size and power. Cases where the CHAC approach dominates the usual HAC approach are found regularly when CHAC is implemented with other values of $M$ and for $\rho = 0.8, 0.9$. For example, in Figure 3.3, we see that when we match the size between HAC and CHAC, there exists a value of $g$ such that CHAC with $G = g$, $M = 2$ has better power than HAC. If we match the power of the HAC and CHAC tests, there is always some value of $g$ such that CHAC with $G = g$, $M = 2$ has better size. As the remaining power figures show, there rarely are situations where there is not a CHAC estimator that is either better in size when holding power fixed or has more power when holding size distortions fixed.

3.5 CONCLUSION AND REMAINING WORK

In this chapter we analyzed smoothed clustered standard errors in time series regression models. We find that the asymptotic approximation generated under the assumption of a fixed number of clusters, $G$, works well even for large values of $G$. Even under strong serial correlation, the over-rejection problem is relatively small when a small number of clusters is used. Also, because the fixed-$G$ asymptotic approximation can be simply obtained by the naive i.i.d. bootstrap in practice, smoothed clustered standard errors can be useful in empirical work with strong serial correlation. A simulation study shows that, in general, there rarely are situations where there is not a CHAC estimator that is either better in size when holding power fixed or has more power when holding size distortions fixed compared to the usual HAC estimators. What this chapter does not address is a systematic method for choosing the number of clusters and the bandwidth used to implement the CHAC estimator. Developing a data-dependent method to choose $G$ and $M$ remains an important topic of ongoing research.

For certain applications clustering and smoothing can be natural given the structure of the data. For example, for a financial market that is not open on weekends it is natural to cluster within the week and smooth across weeks. Tables 3.7-3.8 show a small simulation study with three months of daily data generated from the DGP in Section 3.4.1 where every weekend is missing, resulting in $T = 60$ observations. The regression model is estimated by deleting the missing observations. This can be viewed as constructing the AM series or ES regression model of Chapter 2 and then calculating the test statistic based on the CHAC standard error. Many of the patterns exhibited in the simulations without missing data are seen in Tables 3.7-3.8. There is one interesting pattern that is surprising.
When $G = 12$, i.e. each cluster has length 5, the rejection rates are smaller than those of $G = 10$ when $\rho \geq 0$. In the other tables of results we always see higher rejection rates using $G = 12$ compared to $G = 10$ when $\rho \geq 0$. It seems reasonable to conjecture that the exact match of 5 observations per cluster with the number of observations per week has something to do with this pattern. The application of clustered standard errors to time series with missing observations remains an interesting topic for future work.

Table 3.1: Critical Values: Fixed-$G$

 G   M       1%     2.5%       5%      10%    50%    90%     95%   97.5%     99%
 2   1  -45.991  -17.920   -8.992   -4.390  0.010  4.375   8.874  17.942  46.230
 2   2  -65.041  -25.342  -12.716   -6.208  0.014  6.187  12.550  25.374  65.379
 3   1   -8.710   -5.323   -3.605   -2.325  0.008  2.305   3.563   5.227   8.680
 3   2  -11.315   -7.057   -4.702   -2.997  0.009  2.980   4.618   6.805  11.286
 3   3  -13.858   -8.642   -5.759   -3.671  0.012  3.650   5.656   8.334  13.823
 4   1   -5.303   -3.670   -2.724   -1.917  0.008  1.896   2.723   3.676   5.214
 4   2   -6.945   -4.716   -3.428   -2.349  0.008  2.346   3.409   4.679   6.769
 4   3   -8.005   -5.603   -4.038   -2.782  0.010  2.764   4.045   5.518   7.931
 4   4   -9.243   -6.470   -4.663   -3.212  0.012  3.191   4.671   6.371   9.158
 5   1   -4.143   -3.120   -2.407   -1.732  0.007  1.720   2.381   3.124   4.240
 5   2   -5.272   -3.857   -2.907   -2.056  0.008  2.050   2.886   3.829   5.322
 5   3   -6.288   -4.540   -3.407   -2.403  0.009  2.390   3.397   4.492   6.246
 5   4   -7.010   -5.136   -3.874   -2.720  0.010  2.710   3.847   5.092   7.069
 5   5   -7.837   -5.742   -4.331   -3.041  0.011  3.029   4.301   5.693   7.903
 6   1   -3.693   -2.837   -2.230   -1.628  0.007  1.623   2.209   2.805   3.641
 6   2   -4.526   -3.396   -2.615   -1.887  0.007  1.872   2.598   3.349   4.514
 6   3   -5.356   -3.980   -3.022   -2.159  0.008  2.152   3.026   3.915   5.301
 6   4   -6.006   -4.507   -3.430   -2.434  0.009  2.410   3.400   4.427   5.945
 6   5   -6.619   -4.942   -3.775   -2.684  0.010  2.671   3.754   4.883   6.564
 6   6   -7.251   -5.414   -4.136   -2.940  0.011  2.926   4.112   5.349   7.190

(Entries for $G = 7$ through $G = 120$, with $M$ up to $G$, continue in the same format; the lower-tail quantiles equal the negatives of the corresponding upper-tail quantiles up to simulation error.)

Table 3.2: Large-$G$, Empirical null rejection probabilities, 5% level, $T = 60$
¥ criticalvalue rq M valuesof G 234561012153060 0.5 0.510.2120.0940.0550.0340.0210.0060.0030.0010.0000.000 20.2170.0930.0560.0370.0260.0100.0060.0040.0010.000 30.0940.0540.0370.0250.0130.0080.0050.0010.000 40.0550.0350.0250.0140.0100.0080.0020.000 50.0360.0230.0130.0090.0080.0030.000 60.0230.0130.0090.0080.0040.000 70.0120.0090.0080.0040.000 80.0120.0100.0080.0050.001 90.0120.0090.0080.0060.001 100.0120.0090.0080.0050.001 120.0090.0080.0060.001 150.0080.0060.002 200.0060.003 250.0050.003 300.0060.003 400.004 500.003 600.003 472 Table3.2:(cont'd) G ! ¥ criticalvalue rq M valuesof G 234561012153060 0.5010.2390.1250.0940.0760.0610.0430.0350.0310.0180.002 20.2450.1230.0900.0750.0590.0460.0400.0370.0250.016 30.1240.0860.0720.0600.0490.0410.0400.0290.015 40.0870.0680.0590.0500.0420.0410.0330.021 50.0690.0570.0480.0430.0400.0340.022 60.0570.0480.0430.0410.0350.025 70.0470.0440.0430.0350.026 80.0460.0430.0410.0360.029 90.0470.0420.0420.0370.030 100.0470.0410.0400.0360.031 120.0420.0410.0380.032 150.0410.0390.032 200.0380.034 250.0370.036 300.0370.037 400.035 500.034 600.034 473 Table3.2:(cont'd) G ! ¥ criticalvalue rq M valuesof G 234561012153060 0.50.510.2420.1350.1010.0880.0710.0630.0580.0570.0540.053 20.2460.1300.0970.0820.0710.0570.0560.0530.0510.051 30.1320.0920.0790.0720.0570.0520.0520.0490.051 40.0930.0760.0690.0590.0550.0510.0490.049 50.0770.0670.0580.0540.0520.0490.049 60.0670.0570.0560.0530.0480.049 70.0560.0540.0510.0470.049 80.0550.0530.0510.0480.049 90.0550.0520.0500.0470.049 100.0550.0520.0510.0480.048 120.0520.0490.0490.047 150.0490.0510.048 200.0490.048 250.0460.049 300.0480.050 400.049 500.048 600.047 474 Table3.2:(cont'd) G ! 
¥ criticalvalue rq M valuesof G 234561012153060 0 0.510.2260.1150.0810.0590.0440.0260.0180.0110.0020.000 20.2320.1120.0810.0640.0460.0320.0230.0200.0060.001 30.1150.0760.0620.0480.0330.0270.0250.0110.003 40.0770.0590.0490.0340.0280.0260.0130.004 50.0600.0460.0340.0280.0260.0160.006 60.0460.0340.0280.0280.0190.009 70.0350.0290.0270.0190.010 80.0340.0270.0280.0200.010 90.0330.0280.0290.0220.012 100.0330.0270.0280.0210.013 120.0270.0270.0230.016 150.0270.0240.017 200.0230.019 250.0220.021 300.0230.021 400.020 500.020 600.019 475 Table3.2:(cont'd) G ! ¥ criticalvalue rq M valuesof G 234561012153060 0010.2420.1350.1010.0880.0710.0630.0580.0570.0540.053 20.2460.1300.0970.0820.0710.0570.0560.0530.0510.051 30.1320.0920.0790.0720.0570.0520.0520.0490.051 40.0930.0760.0690.0590.0550.0510.0490.049 50.0770.0670.0580.0540.0520.0490.049 60.0670.0570.0560.0530.0480.049 70.0560.0540.0510.0470.049 80.0550.0530.0510.0480.049 90.0550.0520.0500.0470.049 100.0550.0520.0510.0480.048 120.0520.0490.0490.047 150.0490.0510.048 200.0490.048 250.0460.049 300.0480.050 400.049 500.048 600.047 476 Table3.2:(cont'd) G ! ¥ criticalvalue rq M valuesof G 234561012153060 00.510.2420.1360.1040.0900.0790.0710.0700.0710.0900.152 20.2480.1320.1000.0830.0760.0640.0620.0600.0680.090 30.1360.0970.0790.0760.0610.0580.0580.0610.072 40.0990.0780.0710.0620.0600.0570.0590.067 50.0790.0700.0620.0590.0570.0560.064 60.0700.0600.0590.0580.0560.061 70.0580.0590.0580.0550.060 80.0590.0570.0570.0550.059 90.0590.0570.0560.0550.057 100.0600.0570.0560.0540.057 120.0560.0560.0560.056 150.0550.0560.056 200.0540.055 250.0540.056 300.0540.056 400.055 500.054 600.055 477 Table3.2:(cont'd) G ! 
¥ criticalvalue rq M valuesof G 234561012153060 0.5 0.510.2420.1350.1010.0880.0710.0630.0580.0570.0540.053 20.2460.1300.0970.0820.0710.0570.0560.0530.0510.051 30.1320.0920.0790.0720.0570.0520.0520.0490.051 40.0930.0760.0690.0590.0550.0510.0490.049 50.0770.0670.0580.0540.0520.0490.049 60.0670.0570.0560.0530.0480.049 70.0560.0540.0510.0470.049 80.0550.0530.0510.0480.049 90.0550.0520.0500.0470.049 100.0550.0520.0510.0480.048 120.0520.0490.0490.047 150.0490.0510.048 200.0490.048 250.0460.049 300.0480.050 400.049 500.048 600.047 478 Table3.2:(cont'd) G ! ¥ criticalvalue rq M valuesof G 234561012153060 0.5010.2480.1450.1110.1010.0920.0960.0990.1100.1770.269 20.2540.1410.1050.0900.0860.0770.0740.0790.1100.178 30.1420.1010.0890.0830.0740.0730.0720.0880.134 40.1020.0860.0830.0730.0690.0700.0800.110 50.0870.0790.0730.0710.0700.0750.097 60.0800.0700.0700.0700.0720.088 70.0700.0690.0700.0710.084 80.0700.0680.0690.0700.079 90.0700.0680.0670.0690.077 100.0710.0670.0680.0680.075 120.0680.0670.0690.072 150.0680.0700.071 200.0660.068 250.0670.070 300.0680.069 400.066 500.067 600.068 479 Table3.2:(cont'd) G ! ¥ criticalvalue rq M valuesof G 234561012153060 0.50.510.2500.1460.1130.1010.0970.1000.1050.1220.2030.338 20.2560.1430.1050.0930.0870.0790.0800.0830.1210.204 30.1440.1020.0910.0860.0750.0740.0760.0950.149 40.1040.0890.0830.0760.0730.0730.0840.121 50.0900.0810.0740.0740.0730.0790.105 60.0830.0720.0720.0730.0770.095 70.0720.0700.0740.0730.089 80.0720.0700.0720.0720.084 90.0720.0720.0700.0720.082 100.0730.0700.0700.0720.080 120.0720.0700.0730.077 150.0720.0720.073 200.0690.072 250.0700.073 300.0710.072 400.069 500.070 600.071 480 Table3.2:(cont'd) G ! 
¥ criticalvalue rq M valuesof G 234561012153060 0.8 0.510.2650.1680.1430.1370.1420.1800.1990.2260.3100.385 20.2710.1580.1300.1190.1150.1250.1340.1500.2270.311 30.1600.1270.1170.1130.1110.1130.1240.1780.262 40.1290.1150.1090.1070.1090.1120.1500.227 50.1160.1090.1050.1060.1060.1340.201 60.1100.1050.1040.1060.1230.178 70.1030.1030.1040.1180.162 80.1020.1020.1020.1120.148 90.1050.1010.1010.1090.140 100.1060.1010.1000.1070.134 120.1030.1010.1050.123 150.1030.1020.113 200.1020.106 250.1010.104 300.1020.102 400.101 500.101 600.101 481 Table3.2:(cont'd) G ! ¥ criticalvalue rq M valuesof G 234561012153060 0.8010.2660.1710.1520.1460.1580.2100.2370.2770.4070.540 20.2730.1630.1370.1270.1250.1380.1510.1750.2770.408 30.1660.1310.1220.1220.1210.1280.1390.2150.329 40.1330.1210.1190.1170.1200.1240.1750.279 50.1220.1180.1150.1180.1200.1530.243 60.1190.1150.1150.1170.1390.215 70.1130.1150.1160.1300.192 80.1130.1130.1140.1240.175 90.1150.1140.1140.1220.163 100.1160.1130.1130.1190.153 120.1140.1120.1170.140 150.1140.1150.127 200.1140.119 250.1120.117 300.1140.115 400.113 500.112 600.114 482 Table3.2:(cont'd) G ! ¥ criticalvalue rq M valuesof G 234561012153060 0.80.510.2660.1720.1530.1480.1590.2150.2420.2810.4220.563 20.2730.1630.1370.1280.1260.1390.1540.1780.2840.424 30.1660.1320.1230.1230.1230.1300.1410.2200.339 40.1340.1210.1190.1200.1220.1260.1780.285 50.1230.1180.1170.1190.1210.1550.247 60.1190.1160.1180.1180.1410.219 70.1140.1170.1170.1330.196 80.1140.1150.1160.1270.179 90.1160.1150.1160.1230.165 100.1170.1140.1150.1210.155 120.1160.1140.1190.141 150.1160.1160.130 200.1150.121 250.1140.118 300.1170.117 400.115 500.115 600.117 483 Table3.2:(cont'd) G ! 
¥ criticalvalue rq M valuesof G 234561012153060 0.9 0.510.2950.2180.2110.2260.2490.3300.3670.4020.5190.605 20.3010.2020.1820.1810.1860.2260.2520.2860.4070.520 30.2040.1810.1720.1740.1900.2030.2270.3340.458 40.1820.1730.1690.1760.1840.1990.2870.408 50.1750.1710.1710.1750.1840.2540.369 60.1720.1670.1720.1750.2290.335 70.1660.1670.1720.2100.309 80.1670.1660.1700.1990.288 90.1670.1660.1660.1910.270 100.1680.1660.1650.1830.255 120.1680.1660.1750.228 150.1680.1720.204 200.1650.183 250.1660.174 300.1680.172 400.165 500.166 600.168 484 Table3.2:(cont'd) G ! ¥ criticalvalue rq M valuesof G 234561012153060 0.9010.2980.2220.2160.2380.2630.3550.3970.4390.5740.688 20.3050.2080.1840.1830.1920.2370.2660.3060.4420.575 30.2110.1840.1770.1780.1990.2130.2390.3600.501 40.1860.1760.1760.1800.1900.2070.3060.442 50.1780.1760.1760.1790.1900.2680.399 60.1780.1720.1770.1810.2390.361 70.1730.1740.1770.2210.333 80.1730.1730.1740.2080.307 90.1740.1740.1730.1990.288 100.1750.1740.1730.1910.269 120.1760.1740.1810.241 150.1750.1760.214 200.1730.192 250.1740.180 300.1750.176 400.173 500.174 600.175 485 Table3.2:(cont'd) G ! 
¥ criticalvalue rq M valuesof G 234561012153060 0.90.510.3000.2250.2180.2380.2630.3570.3990.4430.5820.700 20.3050.2090.1830.1830.1930.2390.2690.3080.4460.584 30.2120.1850.1770.1780.2000.2140.2410.3640.506 40.1860.1770.1770.1820.1920.2080.3090.447 50.1790.1770.1770.1800.1940.2700.402 60.1790.1740.1760.1820.2420.365 70.1740.1740.1780.2220.335 80.1730.1740.1760.2090.310 90.1760.1750.1730.2000.290 100.1770.1750.1740.1940.270 120.1770.1750.1820.242 150.1760.1770.215 200.1740.194 250.1750.180 300.1770.177 400.174 500.174 600.177 486 Table3.3:Fixed G ,Empiricalnullrejectionprobabilities,5%level, T = 60 G criticalvalue rq M valuesof G 234561012153060 0.5 0.510.0410.0330.0270.0180.0120.0040.0020.0010.0000.000 20.0410.0320.0290.0200.0160.0080.0050.0030.0010.000 30.0320.0270.0210.0150.0110.0070.0050.0010.000 40.0270.0200.0160.0110.0090.0060.0020.000 50.0200.0160.0110.0090.0070.0030.000 60.0160.0110.0080.0070.0040.000 70.0100.0080.0070.0040.000 80.0100.0080.0070.0050.001 90.0100.0090.0070.0060.001 100.0100.0090.0080.0050.002 120.0090.0080.0060.001 150.0070.0060.003 200.0060.003 250.0050.003 300.0060.003 400.004 500.003 600.004 487 Table3.3:(cont'd) G criticalvalue rq M valuesof G 234561012153060 0.5010.0500.0450.0470.0430.0380.0340.0300.0280.0170.001 20.0500.0440.0460.0440.0410.0390.0340.0330.0250.015 30.0440.0460.0460.0430.0420.0370.0380.0290.015 40.0460.0450.0420.0420.0380.0370.0330.021 50.0450.0410.0400.0390.0380.0340.022 60.0410.0410.0380.0390.0360.026 70.0410.0400.0400.0360.027 80.0410.0380.0390.0370.030 90.0400.0390.0400.0360.031 100.0400.0390.0380.0360.031 120.0390.0400.0370.032 150.0400.0380.033 200.0380.035 250.0370.036 300.0370.037 400.037 500.037 600.035 488 Table3.3:(cont'd) G criticalvalue rq M valuesof G 234561012153060 0.50.510.0500.0500.0510.0500.0490.0500.0490.0500.0510.052 20.0500.0480.0510.0510.0490.0480.0500.0500.0500.050 30.0480.0490.0500.0480.0490.0470.0500.0490.051 40.0490.0500.0490.0490.0500.0490.0480.049 50.0500.0500.0490.0490.0490.0490.050 
60.0500.0510.0500.0500.0480.049 70.0490.0500.0490.0480.049 80.0500.0500.0470.0480.050 90.0490.0500.0480.0470.050 100.0490.0500.0480.0470.049 120.0490.0470.0480.048 150.0480.0510.049 200.0490.048 250.0480.049 300.0480.051 400.051 500.050 600.048 489 Table3.3:(cont'd) G criticalvalue rq M valuesof G 234561012153060 0 0.510.0440.0410.0380.0340.0300.0200.0150.0090.0020.000 20.0440.0400.0410.0370.0330.0260.0210.0170.0050.001 30.0400.0400.0380.0330.0280.0230.0230.0100.003 40.0400.0380.0330.0290.0240.0240.0130.004 50.0380.0320.0290.0250.0250.0160.007 60.0320.0290.0240.0250.0190.009 70.0300.0250.0260.0190.010 80.0290.0250.0260.0210.011 90.0290.0260.0270.0210.012 100.0290.0250.0270.0210.014 120.0250.0270.0230.016 150.0260.0230.018 200.0230.019 250.0230.021 300.0230.021 400.021 500.021 600.020 490 Table3.3:(cont'd) G criticalvalue rq M valuesof G 234561012153060 0010.0500.0500.0510.0500.0490.0500.0490.0500.0510.052 20.0500.0480.0510.0510.0490.0480.0500.0500.0500.050 30.0480.0490.0500.0480.0490.0470.0500.0490.051 40.0490.0500.0490.0490.0500.0490.0480.049 50.0500.0500.0490.0490.0490.0490.050 60.0500.0510.0500.0500.0480.049 70.0490.0500.0490.0480.049 80.0500.0500.0470.0480.050 90.0490.0500.0480.0470.050 100.0490.0500.0480.0470.049 120.0490.0470.0480.048 150.0480.0510.049 200.0490.048 250.0480.049 300.0480.051 400.051 500.050 600.048 491 Table3.3:(cont'd) G criticalvalue rq M valuesof G 234561012153060 00.510.0490.0500.0520.0540.0530.0570.0580.0640.0850.148 20.0490.0460.0530.0520.0520.0530.0560.0560.0650.089 30.0460.0510.0510.0530.0530.0530.0550.0600.072 40.0510.0500.0530.0520.0550.0540.0590.067 50.0500.0530.0540.0550.0540.0560.065 60.0530.0540.0540.0550.0560.062 70.0520.0530.0540.0550.060 80.0520.0530.0540.0550.061 90.0530.0540.0530.0540.059 100.0530.0540.0540.0540.058 120.0530.0550.0560.057 150.0540.0560.057 200.0550.056 250.0560.056 300.0550.056 400.056 500.056 600.056 492 Table3.3:(cont'd) G criticalvalue rq M valuesof G 234561012153060 0.5 
0.510.0500.0500.0510.0500.0490.0500.0490.0500.0510.052 20.0500.0480.0510.0510.0490.0480.0500.0500.0500.050 30.0480.0490.0500.0480.0490.0470.0500.0490.051 40.0490.0500.0490.0490.0500.0490.0480.049 50.0500.0500.0490.0490.0490.0490.050 60.0500.0510.0500.0500.0480.049 70.0490.0500.0490.0480.049 80.0500.0500.0470.0480.050 90.0490.0500.0480.0470.050 100.0490.0500.0480.0470.049 120.0490.0470.0480.048 150.0480.0510.049 200.0490.048 250.0480.049 300.0480.051 400.051 500.050 600.048 493 Table3.3:(cont'd) G criticalvalue rq M valuesof G 234561012153060 0.5010.0490.0540.0550.0600.0620.0780.0860.1000.1710.265 20.0490.0520.0560.0580.0600.0660.0690.0730.1080.176 30.0520.0540.0580.0580.0640.0660.0690.0870.134 40.0540.0570.0580.0630.0640.0670.0790.110 50.0570.0570.0620.0660.0660.0750.098 60.0570.0610.0650.0650.0730.089 70.0610.0630.0660.0710.084 80.0630.0640.0660.0700.080 90.0640.0650.0640.0680.078 100.0640.0650.0660.0680.077 120.0650.0660.0690.074 150.0670.0680.072 200.0670.068 250.0690.070 300.0690.069 400.068 500.070 600.070 494 Table3.3:(cont'd) G criticalvalue rq M valuesof G 234561012153060 0.50.510.0490.0560.0570.0600.0650.0830.0930.1100.1980.335 20.0490.0540.0560.0590.0610.0690.0730.0780.1190.202 30.0540.0540.0580.0590.0650.0680.0730.0940.148 40.0540.0580.0590.0650.0670.0690.0830.121 50.0580.0600.0640.0680.0690.0800.105 60.0600.0630.0660.0680.0770.096 70.0650.0650.0700.0740.090 80.0650.0660.0690.0730.086 90.0650.0680.0670.0720.083 100.0650.0680.0680.0710.082 120.0690.0690.0720.078 150.0700.0710.075 200.0690.072 250.0710.073 300.0720.072 400.071 500.073 600.073 495 Table3.3:(cont'd) G criticalvalue rq M valuesof G 234561012153060 0.8 0.510.0530.0620.0750.0840.1010.1570.1800.2120.3050.382 20.0530.0600.0700.0760.0850.1120.1260.1420.2250.309 30.0600.0700.0760.0830.0990.1050.1190.1760.261 40.0700.0770.0840.0950.1000.1070.1490.226 50.0770.0860.0930.0990.1020.1350.202 60.0860.0930.0970.1010.1240.178 70.0920.0960.1000.1180.163 80.0940.0960.0980.1120.151 90.0960.0970.0980.1080.142 
100.0960.0980.0980.1060.136 120.0990.1000.1040.125 150.1010.1010.115 200.1020.107 250.1030.104 300.1030.102 400.103 500.103 600.104 496 Table3.3:(cont'd) G criticalvalue rq M valuesof G 234561012153060 0.8010.0540.0640.0800.0910.1130.1840.2200.2610.4000.537 20.0540.0630.0740.0810.0930.1240.1410.1670.2750.406 30.0630.0770.0820.0890.1110.1190.1340.2140.328 40.0770.0830.0910.1050.1130.1200.1750.278 50.0830.0940.1020.1100.1140.1530.243 60.0940.1050.1070.1120.1400.216 70.1030.1070.1110.1310.193 80.1020.1080.1090.1250.178 90.1050.1080.1100.1210.165 100.1050.1090.1100.1180.155 120.1100.1110.1160.141 150.1120.1140.130 200.1140.120 250.1140.117 300.1150.115 400.116 500.116 600.116 497 Table3.3:(cont'd) G criticalvalue rq M valuesof G 234561012153060 0.80.510.0540.0640.0810.0930.1170.1880.2250.2680.4160.560 20.0540.0640.0750.0800.0930.1240.1440.1700.2820.422 30.0640.0760.0830.0900.1120.1210.1360.2180.338 40.0760.0840.0920.1070.1140.1210.1780.285 50.0840.0950.1030.1110.1160.1550.248 60.0950.1060.1080.1140.1420.220 70.1040.1090.1130.1330.196 80.1040.1100.1100.1270.180 90.1070.1090.1110.1220.168 100.1070.1110.1120.1200.157 120.1110.1130.1180.144 150.1150.1160.132 200.1160.122 250.1160.118 300.1180.117 400.117 500.118 600.119 498 Table3.3:(cont'd) G criticalvalue rq M valuesof G 234561012153060 0.9 0.510.0600.0860.1250.1550.1960.3040.3460.3900.5150.601 20.0600.0830.1060.1270.1480.2090.2400.2760.4040.518 30.0830.1100.1240.1360.1730.1930.2200.3330.457 40.1100.1280.1360.1610.1750.1930.2860.408 50.1280.1420.1560.1670.1790.2540.370 60.1420.1530.1610.1700.2300.336 70.1540.1590.1680.2110.310 80.1560.1590.1650.1990.289 90.1580.1600.1610.1900.272 100.1580.1620.1620.1820.257 120.1620.1650.1740.231 150.1660.1710.206 200.1660.184 250.1680.175 300.1690.172 400.168 500.170 600.171 499 Table3.3:(cont'd) G criticalvalue rq M valuesof G 234561012153060 0.9010.0600.0880.1270.1640.2050.3270.3760.4250.5680.686 20.0600.0830.1090.1290.1520.2200.2550.2960.4400.573 
30.0830.1130.1280.1420.1820.2010.2310.3600.500 40.1130.1330.1400.1660.1810.2020.3060.442 50.1330.1460.1610.1720.1860.2690.400 60.1460.1600.1670.1750.2410.362 70.1610.1650.1730.2220.334 80.1640.1650.1700.2080.309 90.1650.1680.1670.1980.291 100.1650.1700.1700.1900.272 120.1710.1720.1800.243 150.1730.1760.216 200.1730.192 250.1750.180 300.1760.176 400.175 500.177 600.177 500 Table3.3:(cont'd) G criticalvalue rq M valuesof G 234561012153060 0.90.510.0600.0880.1260.1650.2060.3310.3790.4290.5770.697 20.0600.0840.1080.1310.1530.2210.2560.2990.4440.582 30.0840.1120.1280.1420.1820.2020.2340.3630.505 40.1120.1330.1410.1660.1810.2020.3080.447 50.1330.1470.1620.1730.1870.2700.403 60.1470.1610.1680.1760.2430.365 70.1610.1650.1740.2220.336 80.1640.1670.1700.2090.311 90.1650.1700.1680.1990.293 100.1650.1710.1710.1920.273 120.1710.1730.1800.245 150.1750.1770.217 200.1740.194 250.1770.180 300.1780.177 400.176 500.179 600.179 501 Table3.4:Empiricalnullrejectionprobabilitieswithblockbootstrapcriticalvalues,5%level, T = 60 blockbootstrapcriticalvalue rq M valuesof G 234561012153060 0.5 0.510.0870.0650.0460.0340.0230.0090.0050.0030.0000.000 20.0870.0670.0490.0380.0290.0140.0100.0070.0010.000 30.0670.0500.0370.0300.0180.0130.0100.0020.000 40.0500.0380.0290.0180.0150.0110.0040.000 50.0380.0280.0180.0150.0130.0050.000 60.0280.0180.0150.0140.0060.000 70.0180.0150.0140.0070.001 80.0180.0140.0130.0090.001 90.0180.0150.0140.0090.002 100.0180.0150.0140.0100.002 120.0150.0130.0100.002 150.0130.0100.003 200.0100.003 250.0090.003 300.0090.003 400.004 500.004 600.004 502 Table3.4:(cont'd) blockbootstrapcriticalvalue rq M valuesof G 234561012153060 0.5010.1150.0820.0720.0640.0540.0450.0400.0350.0210.002 20.1150.0810.0690.0640.0550.0510.0440.0420.0290.016 30.0810.0700.0630.0570.0510.0440.0450.0340.016 40.0700.0660.0550.0520.0470.0440.0360.023 50.0660.0570.0500.0470.0450.0390.025 60.0570.0520.0470.0450.0400.027 70.0530.0470.0460.0410.029 80.0520.0470.0460.0400.030 90.0520.0460.0460.0400.031 
100.0520.0470.0450.0410.031 120.0460.0450.0420.032 150.0460.0420.034 200.0410.036 250.0430.037 300.0420.037 400.038 500.037 600.036 503 Table3.4:(cont'd) blockbootstrapcriticalvalue rq M valuesof G 234561012153060 0.50.510.1200.0830.0780.0710.0630.0590.0560.0550.0530.054 20.1200.0820.0750.0720.0630.0590.0560.0550.0510.053 30.0820.0750.0680.0650.0570.0530.0550.0520.052 40.0750.0690.0640.0600.0540.0520.0520.051 50.0690.0640.0570.0550.0520.0510.051 60.0640.0590.0550.0520.0520.051 70.0590.0560.0520.0510.051 80.0580.0550.0540.0510.050 90.0570.0530.0520.0510.051 100.0570.0540.0530.0510.050 120.0550.0540.0490.049 150.0530.0510.049 200.0500.050 250.0510.050 300.0490.050 400.049 500.050 600.050 504 Table3.4:(cont'd) blockbootstrapcriticalvalue rq M valuesof G 234561012153060 0 0.510.1050.0760.0640.0530.0440.0280.0230.0150.0030.000 20.1050.0780.0650.0570.0480.0380.0300.0250.0090.001 30.0780.0660.0570.0480.0370.0320.0310.0140.003 40.0660.0570.0470.0410.0330.0300.0190.005 50.0570.0480.0400.0350.0320.0200.007 60.0480.0390.0350.0320.0220.009 70.0390.0350.0310.0220.010 80.0390.0350.0320.0240.012 90.0390.0340.0320.0240.013 100.0390.0340.0320.0250.014 120.0340.0330.0260.017 150.0320.0250.019 200.0250.019 250.0250.020 300.0250.021 400.021 500.022 600.022 505 Table3.4:(cont'd) blockbootstrapcriticalvalue rq M valuesof G 234561012153060 0010.1200.0830.0780.0710.0630.0590.0560.0550.0530.054 20.1200.0820.0750.0720.0630.0590.0560.0550.0510.053 30.0820.0750.0680.0650.0570.0530.0550.0520.052 40.0750.0690.0640.0600.0540.0520.0520.051 50.0690.0640.0570.0550.0520.0510.051 60.0640.0590.0550.0520.0520.051 70.0590.0560.0520.0510.051 80.0580.0550.0540.0510.050 90.0570.0530.0520.0510.051 100.0570.0540.0530.0510.050 120.0550.0540.0490.049 150.0530.0510.049 200.0500.050 250.0510.050 300.0490.050 400.049 500.050 600.050 506 Table3.4:(cont'd) blockbootstrapcriticalvalue rq M valuesof G 234561012153060 00.510.1210.0840.0760.0740.0660.0640.0620.0660.0860.150 
20.1210.0830.0730.0730.0640.0600.0600.0590.0680.091 30.0830.0730.0710.0650.0600.0570.0580.0620.076 40.0730.0710.0650.0610.0570.0560.0590.070 50.0710.0660.0590.0570.0550.0580.067 60.0660.0610.0560.0550.0570.064 70.0610.0560.0570.0560.062 80.0610.0560.0550.0550.061 90.0600.0560.0560.0550.060 100.0600.0550.0570.0540.059 120.0560.0550.0550.059 150.0540.0570.057 200.0560.057 250.0550.058 300.0550.058 400.055 500.056 600.057 507 Table3.4:(cont'd) blockbootstrapcriticalvalue rq M valuesof G 234561012153060 0.5 0.510.1200.0830.0780.0710.0630.0590.0560.0550.0530.054 20.1200.0820.0750.0720.0630.0590.0560.0550.0510.053 30.0820.0750.0680.0650.0570.0530.0550.0520.052 40.0750.0690.0640.0600.0540.0520.0520.051 50.0690.0640.0570.0550.0520.0510.051 60.0640.0590.0550.0520.0520.051 70.0590.0560.0520.0510.051 80.0580.0550.0540.0510.050 90.0570.0530.0520.0510.051 100.0570.0540.0530.0510.050 120.0550.0540.0490.049 150.0530.0510.049 200.0500.050 250.0510.050 300.0490.050 400.049 500.050 600.050 508 Table3.4:(cont'd) blockbootstrapcriticalvalue rq M valuesof G 234561012153060 0.5010.1200.0830.0760.0750.0730.0830.0870.1010.1680.265 20.1200.0800.0710.0720.0660.0700.0700.0730.1090.177 30.0800.0710.0720.0690.0660.0660.0680.0890.135 40.0710.0720.0680.0670.0670.0650.0780.114 50.0720.0680.0680.0670.0650.0750.100 60.0680.0670.0640.0650.0730.092 70.0660.0670.0650.0710.087 80.0660.0650.0660.0700.083 90.0660.0650.0640.0690.080 100.0660.0640.0640.0690.078 120.0630.0650.0690.076 150.0640.0670.073 200.0660.071 250.0670.071 300.0670.070 400.068 500.069 600.070 509 Table3.4:(cont'd) blockbootstrapcriticalvalue rq M valuesof G 234561012153060 0.50.510.1200.0820.0740.0730.0720.0850.0910.1080.1960.332 20.1200.0800.0680.0700.0670.0700.0720.0760.1170.205 30.0800.0700.0700.0680.0680.0680.0690.0940.153 40.0700.0700.0680.0680.0660.0670.0820.126 50.0700.0680.0670.0670.0650.0770.110 60.0680.0670.0660.0650.0760.101 70.0670.0680.0660.0730.094 80.0660.0670.0660.0720.089 90.0670.0670.0660.0720.083 
100.0670.0660.0650.0720.082 120.0660.0650.0710.080 150.0650.0700.077 200.0680.072 250.0700.074 300.0690.072 400.073 500.072 600.074 510 Table3.4:(cont'd) blockbootstrapcriticalvalue rq M valuesof G 234561012153060 0.8 0.510.1150.0810.0820.0870.1000.1450.1680.2030.3010.379 20.1150.0780.0730.0810.0830.1060.1170.1340.2220.309 30.0780.0750.0790.0830.0940.0990.1110.1750.262 40.0750.0810.0840.0910.0940.1020.1480.230 50.0810.0850.0920.0930.0980.1310.202 60.0850.0910.0940.0980.1200.182 70.0920.0930.0960.1140.165 80.0920.0930.0950.1090.153 90.0910.0940.0950.1070.144 100.0910.0930.0950.1060.137 120.0940.0950.1020.126 150.0950.1000.115 200.0980.107 250.0990.105 300.1000.103 400.100 500.102 600.103 511 Table3.4:(cont'd) blockbootstrapcriticalvalue rq M valuesof G 234561012153060 0.8010.1080.0760.0800.0870.1020.1640.2010.2440.3940.537 20.1080.0720.0710.0770.0820.1100.1270.1530.2690.407 30.0720.0710.0740.0820.0980.1070.1210.2060.330 40.0710.0760.0820.0920.1000.1100.1700.278 50.0760.0830.0920.0970.1040.1490.243 60.0830.0930.0980.1020.1340.217 70.0940.0970.1030.1270.197 80.0960.0970.1020.1220.180 90.0950.0980.1010.1160.165 100.0950.0980.1020.1140.156 120.0980.1020.1110.143 150.1020.1070.132 200.1070.121 250.1100.118 300.1100.115 400.115 500.116 600.117 512 Table3.4:(cont'd) blockbootstrapcriticalvalue rq M valuesof G 234561012153060 0.80.510.1060.0730.0800.0850.1010.1670.2010.2510.4090.559 20.1060.0710.0680.0750.0810.1090.1260.1550.2760.421 30.0710.0710.0730.0800.0980.1080.1220.2090.341 40.0710.0730.0810.0910.0990.1100.1730.286 50.0730.0830.0920.0980.1050.1500.249 60.0830.0930.0990.1030.1360.221 70.0940.0980.1030.1290.200 80.0950.0970.1010.1230.182 90.0960.0980.1020.1180.170 100.0960.0980.1010.1150.161 120.0980.1030.1120.147 150.1030.1100.132 200.1100.122 250.1100.119 300.1120.117 400.118 500.118 600.118 513 Table3.4:(cont'd) blockbootstrapcriticalvalue rq M valuesof G 234561012153060 0.9 0.510.0990.0830.1010.1260.1590.2640.3130.3670.5050.601 
20.0990.0770.0910.1060.1190.1820.2100.2560.3970.518 30.0770.0920.1040.1140.1500.1680.2000.3270.456 40.0920.1060.1140.1390.1500.1750.2810.408 50.1060.1170.1360.1430.1600.2450.369 60.1170.1340.1400.1530.2190.338 70.1360.1390.1490.2030.313 80.1380.1400.1460.1890.291 90.1380.1410.1480.1800.272 100.1380.1420.1470.1740.257 120.1430.1480.1640.232 150.1500.1580.206 200.1570.183 250.1600.173 300.1610.169 400.167 500.168 600.170 514 Table3.4:(cont'd) blockbootstrapcriticalvalue rq M valuesof G 234561012153060 0.9010.0950.0740.0990.1260.1600.2820.3390.3990.5640.687 20.0950.0710.0850.1010.1170.1840.2190.2700.4320.573 30.0710.0850.1020.1120.1500.1720.2060.3500.499 40.0850.1030.1130.1370.1520.1770.2980.445 50.1030.1150.1360.1450.1620.2600.399 60.1150.1340.1410.1540.2320.364 70.1350.1400.1510.2100.333 80.1370.1400.1480.1950.308 90.1390.1420.1470.1860.291 100.1390.1430.1480.1780.272 120.1440.1490.1690.245 150.1510.1610.217 200.1620.192 250.1660.181 300.1670.176 400.174 500.177 600.177 515 Table3.4:(cont'd) blockbootstrapcriticalvalue rq M valuesof G 234561012153060 0.90.510.0930.0710.0990.1240.1580.2850.3410.4060.5700.697 20.0930.0680.0840.0990.1160.1830.2200.2710.4370.582 30.0680.0850.0980.1100.1490.1710.2070.3530.504 40.0850.1010.1100.1370.1530.1780.3010.447 50.1010.1140.1340.1450.1610.2610.405 60.1140.1340.1400.1540.2320.366 70.1340.1390.1490.2110.337 80.1360.1390.1470.1960.311 90.1370.1410.1460.1870.291 100.1370.1420.1460.1780.275 120.1440.1490.1700.247 150.1510.1640.218 200.1630.193 250.1670.182 300.1680.176 400.176 500.178 600.180 516 Table3.5:Empiricalnullrejectionprobabilitieswith i . i . 
Table 3.5 (cont'd): Empirical null rejection frequencies based on i.i.d. bootstrap critical values, 5% level, T = 60. [Triangular panels indexed by (r, q); rows M = 1, 2, ..., 10, 12, 15, 20, 25, 30, 40, 50, 60; columns G = 2, 3, 4, 5, 6, 10, 12, 15, 30, 60. Numerical entries not legibly recoverable in this copy.]

Table 3.6: Average Type II Error, 5% level, T = 60. [Panels indexed by r ∈ {0, 0.5, 0.8, 0.9}; rows M and columns G as in Table 3.5.]

Table 3.7: Daily Data, Weekends Missing, G Block Bootstrap, 5% level. [Rejection frequencies based on overlapping G block bootstrap critical values; triangular panels indexed by (r, q); rows M and columns G as in Table 3.5.]

Table 3.8: Daily Data, Weekends Missing, i.i.d. Bootstrap, 5% level. [Rejection frequencies based on i.i.d. bootstrap critical values; triangular panels indexed by (r, q); rows M and columns G as in Table 3.5.]

Figure 3.1: Size Adjusted Power Comparison based on the G = 60, M = 30 case. For interpretation of the references to color in this and all other figures, the reader is referred to the electronic version of this dissertation.

Figures 3.2-3.9: Size Adjusted Power Comparison, Clustering vs. Smoothing, r = 0.5, M = 1, ..., 8.

Figure 3.10: Size Adjusted Power Comparison, Clustering vs. Smoothing, r = 0.8, M = 1. [Figure panels not reproduced.]
Figures 3.11-3.18: Size Adjusted Power Comparison, Clustering vs. Smoothing, r = 0.8, M = 2, ..., 9.

Figures 3.19-3.27: Size Adjusted Power Comparison, Clustering vs. Smoothing, r = 0.9, M = 1, ..., 9. [Figure panels not reproduced.]

APPENDICES

Appendix A

PROOFS FOR CHAPTER 1

We will use the following notation. Let
\[
f_v(\varepsilon_i)=\frac{\sqrt{1+\lambda^2}}{\sigma}\,\phi\!\left(\frac{\sqrt{1+\lambda^2}}{\sigma}\,\varepsilon_i\right),\qquad
f(\varepsilon_i)=\frac{2}{\sigma}\,\phi\!\left(\frac{\varepsilon_i}{\sigma}\right)\left[1-\Phi\!\left(\frac{\varepsilon_i\lambda}{\sigma}\right)\right],
\]
\[
f_p(\varepsilon_i)=p\,f_v(\varepsilon_i)+(1-p)\,f(\varepsilon_i),\qquad
\ln L=\sum_i\ln f_p(\varepsilon_i),\qquad
m_i=\frac{\phi(\varepsilon_i\lambda/\sigma)}{1-\Phi(\varepsilon_i\lambda/\sigma)},
\]
and let $\theta=(\beta',\lambda,\sigma^2,p)'$, where $\beta$ is a $k\times 1$ vector. Define $\tilde\theta=(\tilde\beta',\tilde\lambda,\tilde\sigma^2,\tilde p)'$, where $\tilde\beta$ is the OLS estimate, $\tilde\lambda=0$, $\tilde\sigma^2=\frac{1}{n}\sum\tilde\varepsilon_i^2$, $\tilde\varepsilon_i=y_i-x_i'\tilde\beta$, and $\tilde p\in[0,1]$. The symbol $\vee$ indicates the maximum.

Result 1. $\tilde\theta$ is a stationary point of the log-likelihood function.

Proof. The derivative of $\ln L$ is
\[
S(\theta)=\begin{pmatrix}\partial\ln L/\partial\beta\\ \partial\ln L/\partial\lambda\\ \partial\ln L/\partial\sigma^2\\ \partial\ln L/\partial p\end{pmatrix}
=\begin{pmatrix}
\displaystyle\sum_{i=1}^n\frac{p f_v(\varepsilon_i)\frac{1+\lambda^2}{\sigma^2}\varepsilon_i x_i+(1-p)f(\varepsilon_i)\left(\frac{\varepsilon_i x_i}{\sigma^2}+\frac{m_i x_i\lambda}{\sigma}\right)}{f_p(\varepsilon_i)}\\[2ex]
\displaystyle\sum_{i=1}^n\frac{p f_v(\varepsilon_i)\left(\frac{\lambda}{1+\lambda^2}-\frac{\lambda}{\sigma^2}\varepsilon_i^2\right)-(1-p)f(\varepsilon_i)\frac{1}{\sigma}m_i\varepsilon_i}{f_p(\varepsilon_i)}\\[2ex]
\displaystyle\sum_{i=1}^n\frac{p f_v(\varepsilon_i)\left(-\frac{1}{2\sigma^2}+\frac{1+\lambda^2}{2\sigma^4}\varepsilon_i^2\right)+(1-p)f(\varepsilon_i)\left(-\frac{1}{2\sigma^2}+\frac{1}{2\sigma^4}\varepsilon_i^2+\frac{\lambda}{2\sigma^3}m_i\varepsilon_i\right)}{f_p(\varepsilon_i)}\\[2ex]
\displaystyle\sum_{i=1}^n\frac{f_v(\varepsilon_i)-f(\varepsilon_i)}{f_p(\varepsilon_i)}
\end{pmatrix}.
\]
When $\lambda=0$,
\[
S(\theta)\big|_{\lambda=0}=\begin{pmatrix}\frac{1}{\sigma^2}\sum_{i=1}^n\varepsilon_i x_i\\[1ex] -(1-p)\sqrt{\frac{2}{\pi}}\frac{1}{\sigma}\sum_{i=1}^n\varepsilon_i\\[1ex] -\frac{n}{2\sigma^2}+\frac{1}{2\sigma^4}\sum_{i=1}^n\varepsilon_i^2\\[1ex] 0\end{pmatrix}.
\]
It is straightforward that $S(\tilde\theta)=0$, since with $\varepsilon_i=\tilde\varepsilon_i$ we have $\sum_{i=1}^n\tilde\varepsilon_i=0$ and $\sum_{i=1}^n\tilde\varepsilon_i x_i=0$, and $\tilde\sigma^2=\frac{1}{n}\sum\tilde\varepsilon_i^2$ sets the third component to zero. Therefore $\tilde\theta$ is a stationary point.

Result 2. Evaluated at the stationary point $\tilde\theta$, the Hessian of the log-likelihood is negative semidefinite with two zero eigenvalues.

Proof. The Hessian evaluated at the stationary point $\tilde\theta$ is
\[
H(\tilde\theta)=\begin{pmatrix}
-\frac{1}{\tilde\sigma^2}\sum x_i x_i' & (1-\tilde p)\sqrt{\frac{2}{\pi}}\frac{1}{\tilde\sigma}\sum x_i & 0 & 0\\
(1-\tilde p)\sqrt{\frac{2}{\pi}}\frac{1}{\tilde\sigma}\sum x_i' & -(1-\tilde p)^2\frac{2n}{\pi} & 0 & 0\\
0 & 0 & -\frac{n}{2\tilde\sigma^4} & 0\\
0 & 0 & 0 & 0
\end{pmatrix}.
\]
When $\tilde p=1$,
\[
H(\tilde\theta)=\begin{pmatrix}
-\frac{1}{\tilde\sigma^2}\sum x_i x_i' & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & -\frac{n}{2\tilde\sigma^4} & 0\\
0 & 0 & 0 & 0
\end{pmatrix}.
\]
Because $-\frac{1}{\tilde\sigma^2}\sum x_i x_i'$ is a negative definite matrix, $H(\tilde\theta)$ is a negative semidefinite matrix with two zero eigenvalues.
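As an illustration (not part of the formal argument), Results 1 and 2 can be spot-checked numerically: at $\tilde\theta$ a finite-difference gradient of $\ln L$ should vanish and a finite-difference Hessian should have exactly two (numerically) zero eigenvalues, the rest negative. In the sketch below the regression design, the sample size, and the choice $\tilde p=0.3$ are arbitrary assumptions, and the helper `loglik` is our own coding of $\ln L=\sum_i\ln f_p(\varepsilon_i)$.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n, k = 200, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # first column: intercept
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)

b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
res = y - X @ b_ols
s2 = np.mean(res**2)                                   # sigma~^2 = n^{-1} sum e~_i^2

def loglik(th):
    # th = (beta, lambda, sigma^2, p); ln L = sum log(p*f_v + (1-p)*f)
    b, lam, sig2, p = th[:k], th[k], th[k + 1], th[k + 2]
    s = np.sqrt(sig2)
    e = y - X @ b
    f_v = np.sqrt(1 + lam**2) / s * norm.pdf(np.sqrt(1 + lam**2) * e / s)
    f_u = (2 / s) * norm.pdf(e / s) * norm.cdf(-lam * e / s)
    return np.log(p * f_v + (1 - p) * f_u).sum()

th0 = np.concatenate([b_ols, [0.0, s2, 0.3]])          # theta~ with p~ = 0.3
I = np.eye(k + 3)

h = 1e-5                                               # central-difference gradient
grad = np.array([(loglik(th0 + h * I[j]) - loglik(th0 - h * I[j])) / (2 * h)
                 for j in range(k + 3)])

hh = 1e-3                                              # central-difference Hessian
H = np.empty((k + 3, k + 3))
for i in range(k + 3):
    for j in range(k + 3):
        H[i, j] = (loglik(th0 + hh * (I[i] + I[j])) - loglik(th0 + hh * (I[i] - I[j]))
                   - loglik(th0 - hh * (I[i] - I[j])) + loglik(th0 - hh * (I[i] + I[j]))) / (4 * hh**2)

eig = np.sort(np.linalg.eigvalsh((H + H.T) / 2))       # eigenvalues, ascending
```

The two largest eigenvalues of `H` are numerically zero and the remaining $k+1$ are strictly negative, matching the rank-$(k+1)$ claim of Result 2.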
Now suppose that $\tilde p\neq 1$. Note that the first row of $H(\tilde\theta)$ (the row corresponding to the intercept), $\left(-\frac{1}{\tilde\sigma^2}\sum_{i=1}^n x_i',\;(1-\tilde p)\sqrt{\frac{2}{\pi}}\frac{n}{\tilde\sigma},\;0,\;0\right)$, is linearly dependent with the $(k+1)$th row of $H(\tilde\theta)$. Multiplying the first row by $(1-\tilde p)\sqrt{2/\pi}\,\tilde\sigma$ and adding it to the $(k+1)$th row results in a row vector of zeros. Hence,
\[
H(\tilde\theta)\sim\begin{pmatrix}
-\frac{1}{\tilde\sigma^2}\sum x_i x_i' & (1-\tilde p)\sqrt{\frac{2}{\pi}}\frac{1}{\tilde\sigma}\sum x_i & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & -\frac{n}{2\tilde\sigma^4} & 0\\
0 & 0 & 0 & 0
\end{pmatrix},
\]
where $\sim$ stands for an elementary row operation. Again, the first column and the $(k+1)$th column of the transformed matrix are linearly dependent. Similarly, multiplying the first column by $(1-\tilde p)\sqrt{2/\pi}\,\tilde\sigma$ and adding it to the $(k+1)$th column results in a column vector of zeros. In other words,
\[
H(\tilde\theta)\sim\begin{pmatrix}
-\frac{1}{\tilde\sigma^2}\sum x_i x_i' & (1-\tilde p)\sqrt{\frac{2}{\pi}}\frac{1}{\tilde\sigma}\sum x_i & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & -\frac{n}{2\tilde\sigma^4} & 0\\
0 & 0 & 0 & 0
\end{pmatrix}\sim\begin{pmatrix}
-\frac{1}{\tilde\sigma^2}\sum x_i x_i' & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & -\frac{n}{2\tilde\sigma^4} & 0\\
0 & 0 & 0 & 0
\end{pmatrix}.
\]
Elementary operations preserve the rank of a matrix. Hence the rank of $H(\tilde\theta)$ is $k+1$; i.e., $H(\tilde\theta)$ has two zero eigenvalues.

Now we will show that $H(\tilde\theta)$ is negative semidefinite. Let $a=(a_1',a_2,a_3,a_4)'$ be an arbitrary non-zero $(k+3)\times 1$ vector, where $a_1$ is a $k\times 1$ vector and $a_2,a_3,a_4$ are scalars. Then
\[
a'H(\tilde\theta)a=-\left(\frac{1}{\tilde\sigma}\frac{1}{\sqrt n}\,a_1'\sum_{i=1}^n x_i-a_2(1-\tilde p)\sqrt{\frac{2}{\pi}}\sqrt n\right)^2
-\frac{1}{\tilde\sigma^2}a_1'\left(\sum_{i=1}^n x_i x_i'-\frac{1}{n}\sum_{i=1}^n x_i\sum_{i=1}^n x_i'\right)a_1-\frac{n}{2\tilde\sigma^4}a_3^2\le 0,
\]
because
\[
\sum_{i=1}^n x_i x_i'-\frac{1}{n}\sum_{i=1}^n x_i\sum_{i=1}^n x_i'=\sum_{i=1}^n\left(x_i-\frac{1}{n}\sum_{j=1}^n x_j\right)\left(x_i-\frac{1}{n}\sum_{j=1}^n x_j\right)'
\]
is positive semidefinite. Therefore $H(\tilde\theta)$ is negative semidefinite.

Result 3. $\tilde\theta$ with $\tilde p\in[0,1)$ is a local maximizer of the log-likelihood function if and only if $\sum_{i=1}^n\tilde\varepsilon_i^3>0$.

Proof.
From Result 2, we know that the Hessian evaluated at $\tilde\theta$ is negative semidefinite. Therefore, if the log-likelihood decreases in the direction of the two eigenvectors associated with the zero eigenvalues, $\tilde\theta$ is a local maximizer of the log-likelihood. The two eigenvectors associated with the two zero eigenvalues are
\[
\begin{pmatrix}(1-\tilde p)\sqrt{2/\pi}\,\tilde\sigma\\ 0\\ 1\\ 0\\ 0\end{pmatrix}
\quad\text{and}\quad
\begin{pmatrix}0\\ 0\\ 0\\ 0\\ 1\end{pmatrix},
\]
where the first entry corresponds to the intercept, the second to the remaining elements of $\beta$, and the last three to $\lambda$, $\sigma^2$, and $p$. Let
\[
\Delta\theta=\mu\begin{pmatrix}(1-\tilde p)\sqrt{2/\pi}\,\tilde\sigma\\ 0\\ 1\\ 0\\ 0\end{pmatrix}
+\phi\begin{pmatrix}0\\ 0\\ 0\\ 0\\ 1\end{pmatrix}
=\begin{pmatrix}(1-\tilde p)\sqrt{2/\pi}\,\tilde\sigma\mu\\ 0\\ \mu\\ 0\\ \phi\end{pmatrix}.
\]
Because $\lambda\ge 0$, $\mu>0$. $\Delta\theta$ has only three non-zero arguments; thus the relevant parameters are $\beta_0$ (the intercept), $\lambda$, and $p$. By Taylor expansion, writing $c=(1-\tilde p)\sqrt{2/\pi}\,\tilde\sigma$,
\[
\begin{aligned}
L(\tilde\theta+\Delta\theta)-L(\tilde\theta)
&=\frac{1}{6}\Big[L_{\beta_0\beta_0\beta_0}(c\mu)^3+3L_{\beta_0\beta_0\lambda}(c\mu)^2\mu+3L_{\beta_0\lambda\lambda}(c\mu)\mu^2
+3L_{\beta_0\beta_0 p}(c\mu)^2\phi+3L_{\beta_0 pp}(c\mu)\phi^2\\
&\qquad+L_{\lambda\lambda\lambda}\mu^3+3L_{\lambda\lambda p}\mu^2\phi+3L_{\lambda pp}\mu\phi^2+L_{ppp}\phi^3
+6L_{\beta_0\lambda p}(c\mu)\mu\phi\Big]+o\big((\mu\vee\phi)^4\big)\\
&=(1-\tilde p)\,\frac{1}{6\pi}\sqrt{\frac{2}{\pi}}\,\frac{1}{\tilde\sigma^3}\Big[-4\tilde p^2+\tilde p(8-3\pi)+\pi-4\Big]\sum_{i=1}^n\tilde\varepsilon_i^3\,\mu^3+o\big((\mu\vee\phi)^4\big).
\end{aligned}
\]
The first-order term is zero because $\tilde\theta$ is a stationary point (Result 1). The second-order term is zero by the definition of the eigenvectors. Note that $-4\tilde p^2+\tilde p(8-3\pi)+\pi-4$ attains its maximum over $\tilde p\in[0,1)$, namely $\pi-4<0$, at $\tilde p=0$. Since $\mu>0$, $L(\tilde\theta+\Delta\theta)-L(\tilde\theta)<0$ if and only if $\sum\tilde\varepsilon_i^3>0$. Therefore $\tilde\theta$ with $\tilde p\in[0,1)$ is a local maximizer if and only if $\sum\tilde\varepsilon_i^3>0$. When $\tilde p=0$, the expression reduces to the one in Waldman (1982).

Result 4. $\tilde\theta$ with $\tilde p=1$ is a local maximizer of the likelihood function if $\sum_{i=1}^n\tilde\varepsilon_i^3>0$.

Proof. The two eigenvectors associated with the zero eigenvalues are
\[
\begin{pmatrix}0\\ 1\\ 0\\ 0\end{pmatrix}\quad\text{and}\quad\begin{pmatrix}0\\ 0\\ 0\\ 1\end{pmatrix},
\]
with entries ordered as $(\beta',\lambda,\sigma^2,p)'$. Let
\[
\Delta\theta=\mu\begin{pmatrix}0\\ 1\\ 0\\ 0\end{pmatrix}+\phi\begin{pmatrix}0\\ 0\\ 0\\ 1\end{pmatrix}=\begin{pmatrix}0\\ \mu\\ 0\\ \phi\end{pmatrix}.
\]
Because $\lambda\ge 0$ and $p\le 1$, $\mu>0$ and $\phi<0$. $\Delta\theta$ has only two non-zero arguments; thus the relevant parameters are $\lambda$ and $p$. By Taylor expansion,
\[
\begin{aligned}
L(\tilde\theta+\Delta\theta)-L(\tilde\theta)
&=\frac{1}{24}\Big[L_{\lambda\lambda\lambda\lambda}\mu^4+4L_{\lambda\lambda\lambda p}\mu^3\phi+6L_{\lambda\lambda pp}\mu^2\phi^2+4L_{\lambda ppp}\mu\phi^3+L_{pppp}\phi^4\Big]+o\big((\mu\vee\phi)^5\big)\\
&=-\frac{n}{4}\mu^4+\frac{1}{3\tilde\sigma^3}\sqrt{\frac{2}{\pi}}\sum_{i=1}^n\tilde\varepsilon_i^3\,\mu^3\phi-\frac{n}{\pi}\mu^2\phi^2+o\big((\mu\vee\phi)^5\big).
\end{aligned}
\]
The first-order term is zero because $\tilde\theta$ is a stationary point (Result 1). The second-order term is zero by the definition of the eigenvectors. The third-order term is zero because $L(\tilde\theta+\Delta\theta)-L(\tilde\theta)$ in Result 3 is zero when $\tilde p=1$. Since $\phi<0$ and $\mu>0$, we have $\frac{1}{3\tilde\sigma^3}\sqrt{2/\pi}\sum\tilde\varepsilon_i^3\,\mu^3\phi<0$ when $\sum\tilde\varepsilon_i^3>0$. Therefore, if $\sum\tilde\varepsilon_i^3>0$, then $L(\tilde\theta+\Delta\theta)-L(\tilde\theta)<0$ and $\tilde\theta$ with $\tilde p=1$ is a local maximizer.

Appendix B

PROOFS FOR CHAPTER 2

In this section, we prove that $\hat\Omega\overset{p}{\to}\Omega$ under Assumption R$''$ (footnote on p. 10), and we prove Theorem 2.2. We use the following notation. Given the block resample $w_t^*=(y_t^*,x_t^{*\prime})'$ defined in Section 2.3.1, we let $v_t^{0*}=x_t^*(y_t^*-x_t^{*\prime}\beta)\equiv x_t^*u_t^{0*}$ and $v_t^*=x_t^*(y_t^*-x_t^{*\prime}\hat\beta)\equiv x_t^*u_t^*$. Following the notation in Gonçalves and Vogelsang (2011), $P^*$ denotes the probability measure induced by the bootstrap resampling, conditional on a realization of the original time series. Let $Z_T^*$ be a bootstrap statistic. Then we write $Z_T^*=o_{p^*}(1)$ in probability, or $Z_T^*\overset{p^*}{\to}0$, if for any $\varepsilon>0$ and $\delta>0$, $\lim_{T\to\infty}P\big[P^*\big(|Z_T^*|>\delta\big)>\varepsilon\big]=0$. Similarly, we say that $Z_T^*=O_{p^*}(1)$ in probability if for all $\varepsilon>0$ there exists an $M_\varepsilon<\infty$ such that $\lim_{T\to\infty}P\big[P^*\big(|Z_T^*|>M_\varepsilon\big)>\varepsilon\big]=0$. Finally, we write $Z_T^*\Rightarrow_{p^*}Z$ in probability if, conditional on the sample, $Z_T^*$ weakly converges to $Z$ under $P^*$, for all samples contained in a set with probability converging to one; that is, we write $Z_T^*\Rightarrow_{p^*}Z$ in probability if and only if $E^*[f(Z_T^*)]\to E[f(Z)]$ in probability for any bounded and uniformly continuous function $f$.

Lemma B1.
Let $r\ge p\ge 1$. Suppose $\|w_t\|_r\le D<\infty$. Let $\{a_t\}$ be a random sequence which takes values either 0 or 1. If $\{(a_t,\epsilon_t)\}$ is an $\alpha$-mixing sequence with $\alpha_m$ of size $-a$ and $\{w_t\}$ is $L_p$-NED on $\{\epsilon_t\}$ with $\nu_m$ of size $-b$, then $\{a_t w_t-E(a_t w_t),\mathcal F_t\}$ is an $L_p$-mixingale of size $-\min\{b,\,a(r-p)/(pr)\}$ with uniformly bounded mixingale constants, where $\mathcal F_t$ is a nondecreasing sequence of $\sigma$-fields $\sigma(X_t,X_{t-1},\dots)$, $X_t=(a_t,\epsilon_t)$.

Proof. We start with the following notation. Let $X_t=(a_t,\epsilon_t)$, $\mathcal F_s^t=\sigma(X_s,X_{s+1},\dots,X_t)$, and $\mathcal G_s^t=\sigma(\epsilon_s,\epsilon_{s+1},\dots,\epsilon_t)$. First we prove that $\{a_t w_t-E(a_t w_t)\}$ is an $L_p$-mixingale. Note that
\[
\begin{aligned}
&\big\|E\big[a_t w_t-E(a_t w_t)\mid\mathcal F_{-\infty}^{t-m}\big]\big\|_p\\
&=\Big\|E\Big[a_t w_t-a_tE\big[w_t\mid\mathcal G_{t-k}^{t+k}\big]+a_tE\big[w_t\mid\mathcal G_{t-k}^{t+k}\big]-E\big(a_tE\big[w_t\mid\mathcal G_{t-k}^{t+k}\big]\big)+E\big(a_tE\big[w_t\mid\mathcal G_{t-k}^{t+k}\big]\big)-E(a_t w_t)\;\Big|\;\mathcal F_{-\infty}^{t-m}\Big]\Big\|_p\\
&\le\big\|E\big[a_t\big(w_t-E[w_t\mid\mathcal G_{t-k}^{t+k}]\big)\mid\mathcal F_{-\infty}^{t-m}\big]\big\|_p
+\big\|E\big[a_tE[w_t\mid\mathcal G_{t-k}^{t+k}]\mid\mathcal F_{-\infty}^{t-m}\big]-E\big(a_tE[w_t\mid\mathcal G_{t-k}^{t+k}]\big)\big\|_p\\
&\qquad+\big\|E\big(a_tE[w_t\mid\mathcal G_{t-k}^{t+k}]\big)-E(a_t w_t)\big\|_p
\qquad\text{(Minkowski's inequality)}\\
&\le\big\|a_t\big(w_t-E[w_t\mid\mathcal G_{t-k}^{t+k}]\big)\big\|_p
+\big\|E\big[a_tE[w_t\mid\mathcal G_{t-k}^{t+k}]\mid\mathcal F_{-\infty}^{t-m}\big]-E\big(a_tE[w_t\mid\mathcal G_{t-k}^{t+k}]\big)\big\|_p
+\big\|w_t-E[w_t\mid\mathcal G_{t-k}^{t+k}]\big\|_p\\
&\qquad\text{(conditional Jensen's inequality)}\\
&\le 2\big\|w_t-E[w_t\mid\mathcal G_{t-k}^{t+k}]\big\|_p
+\big\|E\big[a_tE[w_t\mid\mathcal G_{t-k}^{t+k}]\mid\mathcal F_{-\infty}^{t-m}\big]-E\big(a_tE[w_t\mid\mathcal G_{t-k}^{t+k}]\big)\big\|_p\\
&\le 2d_t\nu_k+6\alpha_m^{1/p-1/r}\big\|a_tE\big[w_t\mid\mathcal G_{t-k}^{t+k}\big]\big\|_r\\
&\qquad\text{(since $\{w_t\}$ is $L_p$-NED on $\{\epsilon_t\}$ with $\nu_m$ of size $-b$, and $\big\{a_tE[w_t\mid\mathcal G_{t-k}^{t+k}]\big\}$ is $\alpha$-mixing with $\alpha_m$ of size $-a$)}\\
&\le 2d_t\nu_k+6\alpha_m^{1/p-1/r}\|w_t\|_r
\le\max\{d_t,\|w_t\|_r\}\Big(2\nu_k+6\alpha_m^{1/p-1/r}\Big)\equiv c_t\psi_m.
\end{aligned}
\]
.Nextweshowthatmixingaleconstantsareuniformlybounded. AccordingtotheMinkowskiandconditionalmodulusinequalities, w t E h w t jG t + m t m i p k w t k p + E h w t jG t + k t k i p k w t k p + k w t k p = 2 k w t k p Since k w t k p k w t k r isuniformlyboundedbyassumption,wecanset d t equaltoa constantforall t .Furthermore,byimposing d t = 2 k w t k p ,wecanset v m 1withoutloss ofgenerality.Thus,mixingaleconstant, c t << max d t , k w t k r max n 2 k w t k p , k w t k r o , isuniformlyboundedundertheassumedmomentconditions. LemmaB2. Letx t andw t beL p -NEDon # t with n x m and n w m ofrespectivesizes f x and f w . Then f x t w t g isL p / 2 -NEDofsize min f f x , f w g . Proof. WefollowtheproofsimilartothatofDavidson(2002,Theorem17.9). F t s = 613 s # s , # s + 1 ,..., # t .Notethat x t w t E h x t w t j F t + m t m i p 2 = x t w t x t E h w t j F t + m t m i + x t E h w t j F t + m t m i E h x t j F t + m t m i E h w t j F t + m t m i + E h x t j F t + m t m i E h w t j F t + m t m i E h x t w t j F t + m t m i p 2 x t w t x t E h w t j F t + m t m i p 2 + x t E h w t j F t + m t m i E h x t j F t + m t m i E h w t j F t + m t m i p 2 + E h x t j F t + m t m i E h w t j F t + m t m i E h x t w t j F t + m t m i p 2 * Minkowski'sInequality = x t w t E h w t j F t + m t m i p 2 + x t E h x t j F t + m t m i E h w t j F t + m t m i p 2 + E h x t E h x t j F t + m t m i w t E h w t j F t + m t m i F t + m t m i p 2 k x t k p w t E h w t j F t + m t m i p + x t E h x t j F t + m t m i p k w t k p + x t E h x t j F t + m t m i p w t E h w t j F t + m t m i p * Hölder'sinequalityandConditionalJensen'sinequality k x t k p d w t n w m + d x t n x m k w t k p + d x t n x m d w t n w m max n k x t k p d w t , k w t k p d x t , d x t d w t o n w m + n x m + n x m n w m d t n m Inotherwords, d t = max n k x t k p d w t , k w t k p d x t , d x t d w t o and n m = n w m + n x m + n x m n w m = O m min f f x , f w g . LemmaB3. f ( w ) : T 7! 
$\mathbb{R}$, $\mathcal{T} \subseteq \mathbb{R}^k$, a function of $k$ real variables, and let $\rho(w_1, w_2) = \sum_{i=1}^{k} |w_{1i} - w_{2i}|$ be a function that measures the distance between points $w_1$ and $w_2$. Let $\{w_t\}$ be a $k$-dimensional random sequence, of which each element is $L_2$-NED of size $-b$ on $\{\varepsilon_t\}$. Suppose that $f(w_t)$ is $L_2$-bounded. Further assume that
$$|f(w_{1t}) - f(w_{2t})| \le B_t(w_{1t}, w_{2t})\, \rho(w_{1t}, w_{2t}) \quad a.s.,$$
where $\rho(\cdot)$ and $B_t(\cdot)$ satisfy the following conditions: $B_t(w_{1t}, w_{2t}): \mathcal{T} \times \mathcal{T} \mapsto \mathbb{R}^+$; for $1 \le q \le 2$, $\left\| \rho\left(w_t, E\left[w_t \mid \mathcal{G}_{t-m}^{t+m}\right]\right) \right\|_q < \infty$ and $\left\| B\left(w_t, E\left[w_t \mid \mathcal{G}_{t-m}^{t+m}\right]\right) \right\|_{q/(q-1)} < \infty$; and for $r > 2$,
$$\left\| B\left(w_t, E\left[w_t \mid \mathcal{G}_{t-m}^{t+m}\right]\right) \right\|_r \left\| \rho\left(w_t, E\left[w_t \mid \mathcal{G}_{t-m}^{t+m}\right]\right) \right\|_r < \infty.$$
Then $\{f(w_t)\}$ is $L_2$-NED on $\{\varepsilon_t\}$ of size $-b(r-2)/(2(r-1))$.

Proof. See Davidson (2002, Theorem 17.16).

Lemma B4. For some nondecreasing sequence of $\sigma$-fields $\{\mathcal{F}_t\}$ and for some $p > 1$, let $\{w_t, \mathcal{F}_t\}$ be an $L_p$-mixingale with mixingale coefficients $\psi_m$ and mixingale constants $c_t$. Then, letting $S_j = \sum_{t=1}^{j} w_t$ and $\Psi = \sum_{m=1}^{\infty} \psi_m$, it follows that
$$\left\| \max_{j \le T} |S_j| \right\|_p \le K\, \Psi \left( \sum_{t=1}^{T} c_t^{b} \right)^{1/b}, \quad b = \min\{p, 2\},$$
for some generic constant $K$.

Proof. See Hansen (1991), Hansen (1992).

Proof of footnote on page 10: First we show that $\{\underline v_t \underline v_{t+j}' - E(\underline v_t \underline v_{t+j}')\}$ is an $L_{(2+\delta)/2}$-mixingale of size $-1$ with uniformly bounded mixingale constants. Note that under Assumption $R''$-4, $\{v_t\}$ is $L_{2+\delta}$-NED on $\{\varepsilon_t\}$ of size $-1$, which implies that $\{v_{t+j}\}$ is $L_{2+\delta}$-NED on $\{\varepsilon_t\}$ of size $-1$ as well; see Davidson (2002, Theorem 17.10). Then $\{v_t v_{t+j}'\}$ is $L_{(2+\delta)/2}$-NED on $\{\varepsilon_t\}$ of size $-1$ by Lemma B2. Also note that under Assumption $R''$-5, $\{(a_t, \varepsilon_t)\}$ is $\alpha$-mixing of size $-(2+\delta)(r+\delta)/(r-2)$ and
$$\left\| v_t v_{t+j}' \right\|_{(2+\delta)/2} \le \|v_t\|_{2+\delta}\, \|v_{t+j}\|_{2+\delta} \le D^2 < \infty.$$
Using Lemma B1, this implies that $\{a_t v_t v_{t+j} - E(a_t v_t v_{t+j})\}$ is an $L_{(2+\delta)/2}$-mixingale of size $-1$ with uniformly bounded mixingale constants. In other words, $\{\underline v_t \underline v_{t+j} - E(\underline v_t \underline v_{t+j})\}$ is an $L_{(2+\delta)/2}$-mixingale of size $-1$.

Second, we show that $\widetilde\Omega \overset{p}{\to} E\widetilde\Omega$. Using Lemma B4, we can write
$$\left\| \frac{1}{T} \sum_{t=1}^{T-j} \left( \underline v_t \underline v_{t+j}' - E(\underline v_t \underline v_{t+j}') \right) \right\|_{(2+\delta)/2} \le \frac{1}{T} K \Psi \left( \sum_{t=1}^{T-j} c_t^{\min\left\{\frac{2+\delta}{2}, 2\right\}} \right)^{\max\left\{\frac{2}{2+\delta}, \frac{1}{2}\right\}} \le K_0\, T^{-1 + \max\left\{\frac{2}{2+\delta}, \frac{1}{2}\right\}}$$
uniformly in $T$ for some finite constant $K_0$. The last inequality follows from the fact that the $c_t$ are uniformly bounded and $\Psi < \infty$, which is due to $\psi_m$ being of size $-1$. Hence,
$$\frac{1}{M T^{-1+\max\left\{\frac{2}{2+\delta},\frac{1}{2}\right\}}} \left\| \widetilde\Omega - E\widetilde\Omega \right\|_{(2+\delta)/2} = \frac{1}{M T^{-1+\max\left\{\frac{2}{2+\delta},\frac{1}{2}\right\}}} \left\| \sum_{j=-T}^{T} k\!\left(\frac{j}{M}\right) \frac{1}{T} \sum_{t=1}^{T-|j|} \left( \underline v_t \underline v_{t+j}' - E(\underline v_t \underline v_{t+j}') \right) \right\|_{(2+\delta)/2}$$
$$\le \frac{1}{M T^{-1+\max\left\{\frac{2}{2+\delta},\frac{1}{2}\right\}}} \sum_{j=-T}^{T} \left| k\!\left(\frac{j}{M}\right) \right| \left\| \frac{1}{T} \sum_{t=1}^{T-|j|} \left( \underline v_t \underline v_{t+j}' - E(\underline v_t \underline v_{t+j}') \right) \right\|_{(2+\delta)/2} \quad (\because \text{Minkowski's inequality})$$
$$\le K_0 \int_{\mathbb{R}} |k(x)|\, dx < \infty,$$
uniformly in $T$, where
$$M T^{\max\left\{\frac{2}{2+\delta},\frac{1}{2}\right\}-1} = \left( M T^{-\frac{1}{2q+1}} \right) T^{\frac{1}{2q+1} - \min\left\{\frac{\delta}{2+\delta}, \frac{1}{2}\right\}} = O(1)\, T^{\max\left\{ \frac{2(1-q\delta)}{(2q+1)(2+\delta)},\, \frac{1-2q}{2(2q+1)} \right\}} = O(1)\, o(1) = o(1)$$
since $\max\{1/2, 1/\delta\} < q$. Therefore $\left\| \widetilde\Omega - E\widetilde\Omega \right\|_{(2+\delta)/2} \to 0$, and thus $\widetilde\Omega \overset{p}{\to} E\widetilde\Omega$ by Markov's inequality.

Third, we prove that $E\widetilde\Omega \to \Omega$, so that, combining with the above result, $\widetilde\Omega \overset{p}{\to} \Omega$. By definition we can write
$$\Omega - E\widetilde\Omega = (1 - k(0)) \frac{1}{T} \sum_{t=1}^{T} E\left( \underline v_t \underline v_t' \right) + \frac{1}{T} \sum_{j=1}^{M} \left( 1 - k\!\left(\frac{j}{M}\right) \right) \sum_{t=1}^{T-j} E\left( \underline v_t \underline v_{t+j}' + \underline v_{t+j} \underline v_t' \right) + \frac{1}{T} \sum_{j=M+1}^{T-1} \sum_{t=1}^{T-j} E\left( \underline v_t \underline v_{t+j}' + \underline v_{t+j} \underline v_t' \right).$$
Note that
$$\left\| \underline v_t \underline v_t' \right\|_{(2+\delta)/2} \le \left\| v_t v_t' \right\|_{(2+\delta)/2} \le \|v_t\|_{2+\delta} \|v_t\|_{2+\delta} \le D^2 < \infty,$$
which implies that $\frac{1}{T} \sum_{t=1}^{T} E\left( \underline v_t \underline v_t' \right) = O(1)$. Therefore $k(0) = 1$ implies that the first term vanishes as $T \to \infty$. Showing that the second term is $o(1)$ is the same as showing that the expression below is $o(1)$:
$$\left\| \sum_{j=1}^{M} \left( 1 - k\!\left(\frac{j}{M}\right) \right) \frac{1}{T} \sum_{t=1}^{T-j} E\left( \underline v_t \underline v_{t+j}' \right) \right\| \le \sum_{j=1}^{M} \left( 1 - k\!\left(\frac{j}{M}\right) \right) \frac{1}{T} \sum_{t=1}^{T-j} \left\| E\left( \underline v_t \underline v_{t+j}' \right) \right\|.$$
Using Lemma B1, $\{\underline v_t, \mathcal{F}_t\}$ is an $L_{2+\delta}$-mixingale of size $-1$ with uniformly bounded mixingale constants, where $\mathcal{F}_s^t = \sigma(X_t, X_{t-1}, \dots, X_s)$ with $X_t = (a_t, \varepsilon_t)$. Then we can write
$$\left\| E\left( \underline v_t \underline v_{t+j}' \right) \right\| = \left\| E\left( E\left[ \underline v_t \underline v_{t+j}' \,\middle|\, \mathcal{F}_{-\infty}^{t+j-[j/2]} \right] \right) \right\| = \left\| E\left( \underline v_t\, E\left[ \underline v_{t+j}' \,\middle|\, \mathcal{F}_{-\infty}^{t+j-[j/2]} \right] \right) \right\| \le \left\| \underline v_t \right\|_2 \left\| E\left[ \underline v_{t+j}' \,\middle|\, \mathcal{F}_{-\infty}^{t+j-[j/2]} \right] \right\|_2 \le D\, c_t\, \psi_{[j/2]} \le K \psi_{[j/2]}. \tag{eqB.1}$$
Hence,
$$\sum_{j=1}^{M} \left( 1 - k\!\left(\frac{j}{M}\right) \right) \frac{1}{T} \sum_{t=1}^{T-j} \left\| E\left( \underline v_t \underline v_{t+j}' \right) \right\| \le \sum_{j=1}^{M} \left( 1 - k\!\left(\frac{j}{M}\right) \right) \frac{1}{T} \sum_{t=1}^{T-j} K \psi_{[j/2]} = \frac{T-j}{T} K \sum_{j=1}^{M} \left( 1 - k\!\left(\frac{j}{M}\right) \right) \psi_{[j/2]}. \tag{eqB.2}$$
If we show that (eqB.2) converges to zero then we are done with the second term. We use the same approach as in the proof of Gallant and White (1995, Lemma 6.6). First define $\mu$ to be a counting measure on the positive integers. Then we can write
$$\sum_{j=1}^{M} \left( 1 - k\!\left(\frac{j}{M}\right) \right) \psi_{[j/2]} = \int_0^{\infty} \mathbf{1}\{j \le M\} \left( 1 - k\!\left(\frac{j}{M}\right) \right) \psi_{[j/2]}\, d\mu(j). \tag{eqB.3}$$
Note that for each $j \in \mathbb{N}$, $\lim_{T\to\infty} k(j/M) = 1$ implies
$$\lim_{T\to\infty} \mathbf{1}\{j \le M\} \left( 1 - k\!\left(\frac{j}{M}\right) \right) \psi_{[j/2]} = 0. \tag{eqB.4}$$
Also note that since $|1 - k(j/M)|$ is bounded,
$$\mathbf{1}\{j \le M\} \left( 1 - k\!\left(\frac{j}{M}\right) \right) \psi_{[j/2]} \le K \psi_{[j/2]}$$
for some finite constant $K$, and $K\psi_{[j/2]}$ is integrable because $\psi_m$ is of size $-1$. Therefore, by the dominated convergence theorem, (eqB.4) implies that (eqB.3) converges to zero as well. This in turn implies that (eqB.2) converges to zero as $T \to \infty$. Hence the second term vanishes as $T \to \infty$. Now consider the third term. It is sufficient to show that
$$\frac{1}{T} \sum_{j=M+1}^{T-1} \sum_{t=1}^{T-j} \left\| E\left( \underline v_t \underline v_{t+j}' \right) \right\| \to 0 \quad \text{as } T \to \infty.$$
Using (eqB.1),
$$\frac{1}{T} \sum_{j=M+1}^{T-1} \sum_{t=1}^{T-j} \left\| E\left( \underline v_t \underline v_{t+j}' \right) \right\| \le \frac{1}{T} \sum_{j=M+1}^{T-1} \sum_{t=1}^{T-j} K \psi_{[j/2]} = \frac{1}{T} \sum_{j=1}^{T-1} \sum_{t=1}^{T-j} K \psi_{[j/2]} - \frac{1}{T} \sum_{j=1}^{M} \sum_{t=1}^{T-j} K \psi_{[j/2]}.$$
The two terms above converge to the same limit as $T \to \infty$ by a similar argument as above. Hence the third term converges to zero as well. Therefore we have shown that $E\widetilde\Omega \to \Omega$, and combining with the above result, $\widetilde\Omega \overset{p}{\to} \Omega$. Lastly, note that the given assumptions are sufficient for Andrews (1991, Assumption B). Hence $\sqrt{T/M}\left( \widehat\Omega - \widetilde\Omega \right) = O_p(1)$; see Andrews (1991, Proof of Theorem 1). Therefore $\widehat\Omega - \widetilde\Omega = o_p(1)$ because $M^{1+2q}/T = O(1)$ with $q \in (\max\{1/2, 1/\delta\}, \infty)$. Therefore, $\widehat\Omega \overset{p}{\to} \Omega$.

Lemma B5. Suppose that $\{w_t - E(w_t)\}$ is a weakly stationary $L_2$-mixingale with $\|w_t\|_p \le D < \infty$ for some $p > 2$, such that its mixingale coefficients $\psi_m$ satisfy $\sum_{m=1}^{\infty} \psi_m < \infty$ and its mixingale constants are uniformly bounded. Let $\{w_t^* : t = 1, \dots, T\}$ denote an MBB resample of $\{w_t : t = 1, \dots, T\}$ with block size $l$ satisfying either of the two following conditions: (a) $l$ is fixed as $T \to \infty$, or (b) $l \to \infty$ as $T \to \infty$ with $l = o(T)$. Then, for any $\eta > 0$, as $T \to \infty$,
$$P^*\left( \sup_{r \in [0,1]} \left\| T^{-1} \sum_{t=1}^{[rT]} \left( w_t^* - E^*(w_t^*) \right) \right\| > \eta \right) = o_p(1).$$

Proof. See Gonçalves and Vogelsang (2011, Proof of Lemma A.4).

Lemma B6. Under Assumption R$'$,
(a) For any $l$ such that $1 \le l < T$ is fixed as $T \to \infty$,
$$\operatorname{plim}_{T\to\infty} \Omega_T^* = \Gamma_0 + \sum_{j=1}^{l} \left( 1 - \frac{j}{l} \right) \left( \Gamma_j + \Gamma_j' \right) \equiv \Omega_l, \quad \text{where } \Gamma_j = E\left( v_t v_{t-j}' \right).$$
(b) Let $l = l_T \to \infty$ as $T \to \infty$ such that $l^2/T \to 0$. Then
$$\operatorname{plim}_{T\to\infty} \Omega_T^* = \Gamma_0 + \sum_{j=1}^{\infty} \left( \Gamma_j + \Gamma_j' \right) \equiv \Omega.$$

Proof. See Gonçalves and Vogelsang (2011, Proof of Lemma A.2).

Lemma B7. Suppose Assumption R$''$ holds and let $\Omega_l$ and $\Omega$ as in Lemma B6 be positive definite matrices. It follows that
(a) For any $l$ such that $1 \le l < T$ is fixed as $T \to \infty$,
$$Z_T^*(r) \Rightarrow_{p^*} \Lambda_l W_k(r), \quad \text{in probability},$$
where $\Lambda_l$ is the square root matrix of $\Omega_l$.
(b) Let $l = l_T \to \infty$ as $T \to \infty$ such that $l^2/T \to 0$. Then
$$Z_T^*(r) \Rightarrow_{p^*} \Lambda W_k(r), \quad \text{in probability},$$
where $\Lambda$ is the square root matrix of $\Omega$.

Proof. See the proof of Gonçalves and Vogelsang (2011, Lemma A.3). The sufficient condition for the proof is that $\{v_t\}$ is an $L_{2+\delta}$-mixingale of size $-1$ with uniformly bounded mixingale constants, which is implied by Assumption R$''$ and Lemma B1.
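The moving-block-bootstrap (MBB) resampling scheme underlying Lemmas B5-B7 can be illustrated with a short simulation. This is a hedged sketch: the series, block length, and seed are illustrative and not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
T, l = 200, 10                           # sample size and block length (illustrative)
w = rng.standard_normal(T)

# Overlapping blocks B_{t,l} = {w_t, ..., w_{t+l-1}}, t = 1, ..., T-l+1
blocks = np.array([w[t:t + l] for t in range(T - l + 1)])

# Draw k0 = T/l block start indices i.i.d. uniform on {0, ..., T-l}
k0 = T // l
I = rng.integers(0, T - l + 1, size=k0)
w_star = np.concatenate([blocks[i] for i in I])   # bootstrap resample of length T

# E*(w*) at each within-block position is the average over all T-l+1 blocks
E_star = blocks.mean(axis=0)
```

The bootstrap expectation `E_star` is the finite-sample object that the proofs replace block draws with when centering partial sums.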
Proof of Theorem 2.2: Define the vector $w_t = (y_t, x_t')'$ that collects the dependent and explanatory variables. Let $l \in \mathbb{N}$ ($1 \le l \le T$) be a block length and let $B_{t,l} = \{w_t, w_{t+1}, \dots, w_{t+l-1}\}$ be the block of $l$ consecutive observations starting at $w_t$. Draw $k_0 = T/l$ blocks randomly with replacement from the set of overlapping blocks $\{B_{1,l}, \dots, B_{T-l+1,l}\}$ to obtain a bootstrap resample denoted as $w_t^* = (y_t^*, x_t^{*\prime})'$, $t = 1, \dots, T$. We want to show that
1. $T^{-1} \sum_{t=1}^{[rT]} x_t^* x_t^{*\prime} \Rightarrow_{p^*} rQ$ for some $Q$,
2. $T^{-1/2} \sum_{t=1}^{[rT]} v_t^* \Rightarrow_{p^*} \Lambda W_k(r)$ for some $\Lambda$,
under Assumption R$'$ with Assumption R$'$3-5 strengthened to Assumption R$''$3-5. Let $P^*$ denote the probability measure induced by the bootstrap resampling conditional on a realization of the original time series. Assumptions R$'$1-2 and Lemma B3 imply that $\{x_t x_t'\}$ is $L_2$-NED of size $-1$. Then from Lemma B1, $\{x_t x_t' - Q\}$ is an $L_2$-mixingale of size $-1$ with uniformly bounded mixingale constants. Also, Assumption R$'$1 implies that $\|x_t x_t'\|_r \le D$, $r > 2$. Therefore Lemma B5 applies and the first condition follows straightforwardly. Now we prove the second condition. Given our definitions of $v_t^{0*}$ and $v_t^*$, we can write
$$v_t^* = v_t^{0*} - x_t^* x_t^{*\prime} \left( \widehat\beta - \beta \right),$$
which implies that
$$T^{-1/2} \sum_{t=1}^{[rT]} v_t^* = T^{-1/2} \sum_{t=1}^{[rT]} \left( v_t^{0*} - E^*\left( v_t^{0*} \right) \right) + T^{-1/2} \sum_{t=1}^{[rT]} E^*\left( v_t^{0*} \right) - T^{-1/2} \sum_{t=1}^{[rT]} x_t^* x_t^{*\prime} \left( \widehat\beta - \beta \right) \equiv Z_T^*(r) + A_{1T}^*(r) - A_{2T}^*(r).$$
We show the second condition in the following two steps.

Step 1. We show that $Z_T^*(r) \Rightarrow_{p^*} \Lambda W_k(r)$.
Proof of Step 1. Straightforward from Lemmas B6-B7 and Assumption R$''$.

Step 2. We show that $\sup_{r\in[0,1]} \left\| A_{1T}^*(r) - A_{2T}^*(r) \right\| = o_{p^*}(1)$ in probability.
Proof of Step 2. Note that
$$A_{1T}^*(r) - A_{2T}^*(r) = T^{-1/2} \sum_{t=1}^{[rT]} E^*\left[ x_t^* \left( y_t^* - x_t^{*\prime}\widehat\beta + x_t^{*\prime}\widehat\beta - x_t^{*\prime}\beta \right) \right] - T^{-1/2} \sum_{t=1}^{[rT]} x_t^* x_t^{*\prime}\left( \widehat\beta - \beta \right)$$
$$= T^{-1/2} \sum_{t=1}^{[rT]} E^*\left[ x_t^* \left( y_t^* - x_t^{*\prime}\widehat\beta \right) \right] + T^{-1/2} \sum_{t=1}^{[rT]} E^*\left[ x_t^* x_t^{*\prime} \right]\left( \widehat\beta - \beta \right) - T^{-1/2} \sum_{t=1}^{[rT]} x_t^* x_t^{*\prime}\left( \widehat\beta - \beta \right)$$
$$= T^{-1/2} \sum_{t=1}^{[rT]} E^*\left( \widehat v_t^* \right) - T^{-1/2} \sum_{t=1}^{[rT]} \left( x_t^* x_t^{*\prime} - E^*\left[ x_t^* x_t^{*\prime} \right] \right)\left( \widehat\beta - \beta \right) \equiv B_{1T}^*(r) - B_{2T}^*(r).$$
It is sufficient to show that $\sup_{r\in[0,1]} \left\| B_{1T}^*(r) \right\| = o_{p^*}(1)$ and $\sup_{r\in[0,1]} \left\| B_{2T}^*(r) \right\| = o_{p^*}(1)$, in probability.

Step 2-1. We prove that $\sup_{r\in[0,1]} \left\| B_{1T}^*(r) \right\| = o_{p^*}(1)$. Write
$$B_{1T}^*(r) = T^{-1/2} \sum_{t=1}^{[rT]} E^*\left( \widehat v_t^* \right) = T^{-1/2} \sum_{m=1}^{M_r} \sum_{s=1}^{B} E^*\left( \widehat v_{I_m + s} \right) = T^{-1/2} \sum_{m=1}^{M_r} \sum_{s=1}^{l} E^*\left( \widehat v_{I_m + s} \right) - T^{-1/2} \sum_{s=B+1}^{l} E^*\left( \widehat v_{I_{M_r} + s} \right) \equiv b_{1T} - b_{2T},$$
where $M_r = [([rT]-1)/l] + 1$ and $B = \min\{l,\, [rT] - (M_r - 1)l\}$. Note that $M_r \in \{1, \dots, k_0\}$, $B \in \{1, \dots, l\}$, and $I_1, \dots, I_{k_0}$ are i.i.d. uniformly distributed on $\{0, 1, \dots, T-l\}$ (see Paparoditis and Politis (2003)). Then
$$\sup_{r\in[0,1]} \left\| b_{1T} \right\| = \sup_{r\in[0,1]} \left\| T^{-1/2} \sum_{m=1}^{M_r} \sum_{s=1}^{l} E^*\left( \widehat v_{I_m+s} \right) \right\| = \sup_{r\in[0,1]} \left\| T^{-1/2} \sum_{m=1}^{M_r} \sum_{s=1}^{l} \frac{1}{T-l+1} \sum_{j=0}^{T-l} \widehat v_{j+s} \right\| = \sup_{r\in[0,1]} \left\| T^{-1/2} M_r \sum_{s=1}^{l} \frac{1}{T-l+1} \sum_{j=0}^{T-l} \widehat v_{j+s} \right\|$$
$$\le T^{-1/2} k_0 \frac{1}{T-l+1} \left\| l \sum_{t=1}^{T} \widehat v_t - \left[ (l-1)\widehat v_1 + (l-2)\widehat v_2 + \cdots + \widehat v_{l-1} + (l-1)\widehat v_T + (l-2)\widehat v_{T-1} + \cdots + \widehat v_{T-l+2} \right] \right\|$$
$$= T^{-1/2} k_0 \frac{1}{T-l+1} \left\| (l-1)\widehat v_1 + (l-2)\widehat v_2 + \cdots + \widehat v_{l-1} + (l-1)\widehat v_T + (l-2)\widehat v_{T-1} + \cdots + \widehat v_{T-l+2} \right\| \quad (\because \text{OLS FOC})$$
$$= T^{-1/2} k_0\, O_p\!\left( \frac{l^2}{T} \right) \quad (\because \widehat v_t \text{ is uniformly bounded in probability; see below})$$
$$= O_p\!\left( \frac{l}{\sqrt{T}} \right) = o_p(1) \quad (\because l \text{ is fixed or } l^2/T \to 0).$$
We now show that $\widehat v_t$ is uniformly bounded in probability. Given our definitions,
$$\widehat v_t = \underline x_t \left( \underline y_t - \underline x_t' \widehat\beta \right) = \underline v_t - \underline x_t \underline x_t' \left( \widehat\beta - \beta \right).$$
First note that $\underline v_t$ and $\underline x_t \underline x_t'$ are uniformly $L_q$- and $L_{q/2}$-bounded, which implies that both are uniformly bounded in probability:
$$\left\| \underline v_t \right\|_q = \left\| a_t v_t \right\|_q \le \left\| v_t \right\|_q \le D < \infty,$$
$$\left\| \underline x_t \underline x_t' \right\|_{q/2} = \left\| a_t x_t x_t' \right\|_{q/2} \le \left\| x_t x_t' \right\|_{q/2} \le \left\| x_t \right\|_q \left\| x_t' \right\|_q \le D^2 < \infty.$$
Also, we know that $\widehat\beta - \beta = o_p(1)$; hence $\widehat\beta - \beta$ is uniformly bounded in probability. Therefore $\widehat v_t$ is uniformly bounded in probability.
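The key algebraic step in the bound for $b_{1T}$ above, that $\sum_{s=1}^{l}\sum_{j=0}^{T-l} \widehat v_{j+s}$ collapses to triangular edge terms once the OLS first-order condition $\sum_t \widehat v_t = 0$ is imposed, can be checked numerically. A minimal hedged sketch (the residual series and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
T, l = 60, 6                     # illustrative sample size and block length
vhat = rng.standard_normal(T)
vhat -= vhat.mean()              # impose the OLS first-order condition sum(vhat) = 0

# Left side: sum over block positions s = 1..l and block starts j = 0..T-l
lhs = sum(vhat[j + s] for s in range(l) for j in range(T - l + 1))

# Right side: l * sum(vhat) minus the triangular edge weights from the proof
edge = sum((l - 1 - i) * vhat[i] for i in range(l - 1)) \
     + sum((l - 1 - i) * vhat[T - 1 - i] for i in range(l - 1))
rhs = l * vhat.sum() - edge
assert abs(lhs - rhs) < 1e-10    # the identity is exact up to rounding
```

Because the edge involves only $O(l^2)$ bounded terms, the $O_p(l^2/T)$ rate in the proof follows.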
Finally, note that
$$\sup_{r\in[0,1]} \left\| b_{2T} \right\| = \sup_{r\in[0,1]} \left\| T^{-1/2} \sum_{s=B+1}^{l} E^*\left( \widehat v_{I_{M_r}+s} \right) \right\| = \sup_{r\in[0,1]} \left\| T^{-1/2} \sum_{s=B+1}^{l} E^*\left[ \underline v_{I_{M_r}+s} - \underline x_{I_{M_r}+s} \underline x_{I_{M_r}+s}' \left( \widehat\beta - \beta \right) \right] \right\|$$
$$\le \sup_{r\in[0,1]} \left\| T^{-1/2} \sum_{s=B+1}^{l} E^*\left( \underline v_{I_{M_r}+s} \right) \right\| + \sup_{r\in[0,1]} \left\| T^{-1} \sum_{s=B+1}^{l} E^*\left( \underline x_{I_{M_r}+s} \underline x_{I_{M_r}+s}' \right) \sqrt{T}\left( \widehat\beta - \beta \right) \right\|$$
$$= \sup_{r\in[0,1]} \left\| T^{-1/2} \sum_{s=B+1}^{l} \frac{1}{T-l+1} \sum_{j=0}^{T-l} \underline v_{j+s} \right\| + \sup_{r\in[0,1]} \left\| T^{-1} \sum_{s=B+1}^{l} \frac{1}{T-l+1} \sum_{j=0}^{T-l} \underline x_{j+s} \underline x_{j+s}' \sqrt{T}\left( \widehat\beta - \beta \right) \right\|$$
$$\le \frac{T^{-1/2}}{T-l+1} \sum_{j=0}^{T-l} \sup_{r\in[0,1]} \left\| \sum_{s=B+1}^{l} \underline v_{j+s} \right\| + \frac{T^{-1/2}}{T-l+1} \sum_{j=0}^{T-l} \sup_{r\in[0,1]} \left\| \sum_{s=B+1}^{l} \underline x_{j+s} \underline x_{j+s}' \left( \widehat\beta - \beta \right) \right\|.$$
In terms of the first term,
$$E\left( k_0^{1/2} \frac{T^{-1/2}}{T-l+1} \sum_{j=0}^{T-l} \sup_{r\in[0,1]} \left\| \sum_{s=B+1}^{l} \underline v_{j+s} \right\| \right) \le E\left( k_0^{1/2} \frac{T^{-1/2}}{T-l+1} \sum_{j=0}^{T-l} \max_{1\le i\le l} \left\| \sum_{s=j+i}^{j+l} \underline v_s \right\| \right)$$
$$\le k_0^{1/2} \frac{T^{-1/2}}{T-l+1} \sum_{j=0}^{T-l} \left\| \max_{1\le i\le l} \left\| \sum_{s=j+i}^{j+l} \underline v_s \right\| \right\|_{2+\delta} \le k_0^{1/2} \frac{T^{-1/2}}{T-l+1} \sum_{j=0}^{T-l} K_0\, l^{1/2} = O(1) \quad (\because k_0 l / T \to 1).$$
The first inequality is obvious because for $r \in [0,1]$, $B \in \{1, \dots, l\}$. Using Lemma B4 and the fact that $\{\underline v_t\}$ is an $L_{2+\delta}$-mixingale of size $-1$ with uniformly bounded mixingale constants due to Lemma B1, the third inequality is also straightforward. Therefore, by the Markov inequality,
$$\frac{T^{-1/2}}{T-l+1} \sum_{j=0}^{T-l} \sup_{r\in[0,1]} \left\| \sum_{s=B+1}^{l} \underline v_{j+s} \right\| = O_p\!\left( k_0^{-1/2} \right) = o_p(1).$$
Now we consider the second term:
$$\frac{T^{-1/2}}{T-l+1} \sum_{j=0}^{T-l} \sup_{r\in[0,1]} \left\| \sum_{s=B+1}^{l} \underline x_{j+s} \underline x_{j+s}' \left( \widehat\beta - \beta \right) \right\| = \frac{T^{-1/2}}{T-l+1} \sum_{j=0}^{T-l} \sup_{r\in[0,1]} \left\| \sum_{s=B+1}^{l} \left\{ \left( \underline x_{j+s} \underline x_{j+s}' - Q \right) + Q \right\} \left( \widehat\beta - \beta \right) \right\|$$
$$\le \left[ \frac{T^{-1/2}}{T-l+1} \sum_{j=0}^{T-l} \sup_{r\in[0,1]} \left\| \sum_{s=B+1}^{l} \left( \underline x_{j+s} \underline x_{j+s}' - Q \right) \right\| + T^{-1/2}\, l \left\| Q \right\| \right] \left\| \widehat\beta - \beta \right\|,$$
where the second term is $o_p(1)$ because $\widehat\beta - \beta = O_p(T^{-1/2})$, and the first term is $O_p(k_0^{-1/2}) = o_p(1)$ because of the following:
$$E\left( k_0^{1/2} \frac{T^{-1/2}}{T-l+1} \sum_{j=0}^{T-l} \sup_{r\in[0,1]} \left\| \sum_{s=B+1}^{l} \left( \underline x_{j+s} \underline x_{j+s}' - Q \right) \right\| \right) \le E\left( k_0^{1/2} \frac{T^{-1/2}}{T-l+1} \sum_{j=0}^{T-l} \max_{1\le i\le l} \left\| \sum_{s=j+i}^{j+l} \left( \underline x_s \underline x_s' - Q \right) \right\| \right)$$
$$\le k_0^{1/2} \frac{T^{-1/2}}{T-l+1} \sum_{j=0}^{T-l} \left\| \max_{1\le i\le l} \left\| \sum_{s=j+i}^{j+l} \left( \underline x_s \underline x_s' - Q \right) \right\| \right\|_{2} \le k_0^{1/2} \frac{T^{-1/2}}{T-l+1} \sum_{j=0}^{T-l} K_0\, l^{1/2} = O(1) \quad (\because k_0 l/T \to 1).$$
The first inequality is obvious because for $r \in [0,1]$, $B \in \{1, \dots, l\}$. Note that $\{\underline x_t \underline x_t'\}$ is $L_2$-NED under the given assumptions due to Lemma B3 and, combining with Lemma B1, $\{\underline x_t \underline x_t' - Q\}$ is an $L_2$-mixingale of size $-1$ with uniformly bounded mixingale constants. Then, using Lemma B4, the third inequality is straightforward. Therefore, by the Markov inequality,
$$\frac{T^{-1/2}}{T-l+1} \sum_{j=0}^{T-l} \sup_{r\in[0,1]} \left\| \sum_{s=B+1}^{l} \left( \underline x_{j+s} \underline x_{j+s}' - Q \right) \right\| = O_p\!\left( k_0^{-1/2} \right) = o_p(1).$$
Hence we have $\sup_{r\in[0,1]} \left\| b_{1T} \right\| = o_p(1)$ and $\sup_{r\in[0,1]} \left\| b_{2T} \right\| = o_p(1)$, which implies that $\sup_{r\in[0,1]} \left\| B_{1T}^* \right\| = o_{p^*}(1)$ in probability.

Step 2-2. We prove that $\sup_{r\in[0,1]} \left\| B_{2T}^*(r) \right\| = o_{p^*}(1)$. We can write
$$\sup_{r\in[0,1]} \left\| B_{2T}^*(r) \right\| = \sup_{r\in[0,1]} \left\| T^{-1/2} \sum_{t=1}^{[rT]} \left( x_t^* x_t^{*\prime} - E^*\left[ x_t^* x_t^{*\prime} \right] \right) \left( \widehat\beta - \beta \right) \right\| = \sup_{r\in[0,1]} \left\| T^{-1} \sum_{t=1}^{[rT]} \left( x_t^* x_t^{*\prime} - E^*\left[ x_t^* x_t^{*\prime} \right] \right) \sqrt{T}\left( \widehat\beta - \beta \right) \right\| = o_{p^*}(1).$$
We know that $\sqrt{T}\left( \widehat\beta - \beta \right) = O_p(1)$, and from Lemma B5,
$$\sup_{r\in[0,1]} \left\| T^{-1} \sum_{t=1}^{[rT]} \left( x_t^* x_t^{*\prime} - E^*\left[ x_t^* x_t^{*\prime} \right] \right) \right\| = o_{p^*}(1).$$
Hence we have the third equality.

Appendix C

PROOFS FOR AMPLITUDE MODULATED STATISTIC, NON-RANDOM MISSING DATA

We define $\lambda$ to be the total fraction of observed data points:
$$\lambda_{2C+1} - \lambda_{2C} + \lambda_{2C-1} - \cdots + \lambda_1 = \sum_{j=1}^{2C+1} (-1)^{j+1} \lambda_j \equiv \lambda.$$
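The definition of $\lambda$ as an alternating sum of the break fractions can be checked against a direct count of observed points. A hedged sketch with hypothetical cluster boundaries (all values illustrative):

```python
import numpy as np

T = 1000
# Hypothetical break dates T_0, T_1, ..., T_{2C+1} with C = 2 missing clusters:
# observed on (T_0, T_1], (T_2, T_3], (T_4, T_5]; missing on (T_1, T_2], (T_3, T_4]
Tb = [0, 300, 400, 700, 800, 1000]
lam = [b / T for b in Tb]                   # lambda_j = lim T_j / T

# a_t = 1 iff t lies in an observed segment (T_{2n} + 1 <= t <= T_{2n+1})
a = np.zeros(T, dtype=int)
for n in range(3):
    a[Tb[2 * n]:Tb[2 * n + 1]] = 1          # 0-based slice covers t = T_{2n}+1..T_{2n+1}

# lambda = sum_{j=1}^{2C+1} (-1)^{j+1} lambda_j equals the observed fraction
lam_alt = sum((-1) ** (j + 1) * lam[j] for j in range(1, 6))
assert abs(lam_alt - a.mean()) < 1e-12
```

The alternating signs simply telescope the lengths of the observed segments, which is the same cancellation used repeatedly in Appendices C and D.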
Also define
$$\overline{W}_k \equiv \sum_{j=1}^{2C+1} (-1)^{j+1} W_k\!\left( \lambda_j \right).$$
We first prove a lemma that shows that Assumptions NR$'$ imply that Assumptions NR hold.

Lemma C1. Assumption NR$'$ is sufficient for Assumption NR.

Proof: Under Assumption NR$'$6 the locations of the missing observations are fixed. Hence,
$$\lim_{T\to\infty} \frac{T_n}{T} = \lambda_n, \quad \text{for } n = 0, \dots, 2C+1,$$
where it holds trivially that $\lambda_0 = 0$ and $\lambda_{2C+1} = 1$. Assumptions NR$'$1, 2, and 5 imply that for all $r \in (0,1]$,
$$T^{-1} \sum_{t=1}^{[rT]} x_t x_t' \Rightarrow rQ.$$
Assumptions NR$'$3, 4, and 5 imply that for all $r \in (0,1]$,
$$T^{-1/2} \sum_{t=1}^{[rT]} v_t \Rightarrow \Lambda W_k(r).$$
Also note that Assumption NR$'$7 implies that there exists $\Lambda$ such that $\Lambda \Lambda' = \Omega$.

The next two lemmas establish the limits of scaled sums of the amplitude modulated processes.

Lemma C2. Under Assumptions NR$'$1, 2, 5, and 6, for all $r \in (0,1]$,
$$T^{-1} \sum_{t=1}^{[rT]} \underline x_t \underline x_t' \Rightarrow \sum_{n=0}^{C} \mathbf{1}\left\{ \lambda_{2n} < r \le \lambda_{2(n+1)} \right\} \sum_{j=1}^{2n+1} (-1)^{j+1} \left( r \wedge \lambda_j \right) Q,$$
where $\lambda_0 = 0$, $\lambda_{2C+1} = 1$, and $C$ is the total number of missing clusters.

Proof: Assumptions NR$'$1, 2, 5, and 6 are sufficient for Assumptions NR1 and 2 by Lemma C1. Hence we can write, for $r \in (\lambda_{2n}, \lambda_{2n+1}]$,
$$T^{-1} \sum_{t=1}^{[rT]} \underline x_t \underline x_t' = T^{-1} \sum_{t=1}^{[rT]} x_t x_t' - T^{-1} \sum_{t=1}^{[\lambda_{2n}T]} x_t x_t' + T^{-1} \sum_{t=1}^{[\lambda_{2n-1}T]} x_t x_t' - \cdots + T^{-1} \sum_{t=1}^{[\lambda_1 T]} x_t x_t' \Rightarrow \left( r - \lambda_{2n} + \lambda_{2n-1} - \cdots + \lambda_1 \right) Q,$$
whereas for $r \in (\lambda_{2n+1}, \lambda_{2n+2}]$ we have
$$T^{-1} \sum_{t=1}^{[rT]} \underline x_t \underline x_t' = T^{-1} \sum_{t=1}^{[\lambda_{2n+1}T]} x_t x_t' - T^{-1} \sum_{t=1}^{[\lambda_{2n}T]} x_t x_t' + T^{-1} \sum_{t=1}^{[\lambda_{2n-1}T]} x_t x_t' - \cdots + T^{-1} \sum_{t=1}^{[\lambda_1 T]} x_t x_t' \Rightarrow \left( \lambda_{2n+1} - \lambda_{2n} + \lambda_{2n-1} - \cdots + \lambda_1 \right) Q.$$
Combining these two results, we have, for $r \in (\lambda_{2n}, \lambda_{2n+2}]$,
$$T^{-1} \sum_{t=1}^{[rT]} \underline x_t \underline x_t' \Rightarrow \sum_{j=1}^{2n+1} (-1)^{j+1} \left( r \wedge \lambda_j \right) Q.$$
It immediately follows for $r \in (0,1]$ that
$$T^{-1} \sum_{t=1}^{[rT]} \underline x_t \underline x_t' \Rightarrow \sum_{n=0}^{C} \mathbf{1}\left\{ \lambda_{2n} < r \le \lambda_{2n+2} \right\} \sum_{j=1}^{2n+1} (-1)^{j+1} \left( r \wedge \lambda_j \right) Q.$$

Lemma C3. Under Assumptions NR$'$3, 4, 5, 6, and 7, for $r \in (0,1]$,
$$T^{-1/2} \sum_{t=1}^{[rT]} \underline v_t \Rightarrow \Lambda \sum_{n=0}^{C} \mathbf{1}\left\{ \lambda_{2n} < r \le \lambda_{2n+2} \right\} \sum_{j=1}^{2n+1} (-1)^{j+1} W_k\!\left( r \wedge \lambda_j \right).$$

Proof: Assumptions NR$'$3, 4, 5, and 6 are sufficient for Assumptions NR1 and 3 by Lemma C1. Using similar algebra as in the proof of Lemma C2, we have, for $r \in (\lambda_{2n}, \lambda_{2n+1}]$,
$$T^{-1/2} \sum_{t=1}^{[rT]} \underline v_t = T^{-1/2} \sum_{t=1}^{[rT]} a_t v_t = T^{-1/2} \sum_{t=1}^{[rT]} v_t - T^{-1/2} \sum_{t=1}^{[\lambda_{2n}T]} v_t + T^{-1/2} \sum_{t=1}^{[\lambda_{2n-1}T]} v_t - \cdots + T^{-1/2} \sum_{t=1}^{[\lambda_1 T]} v_t$$
$$\Rightarrow \Lambda \left[ W_k(r) - W_k(\lambda_{2n}) + W_k(\lambda_{2n-1}) - \cdots + W_k(\lambda_1) \right],$$
and for $r \in (\lambda_{2n+1}, \lambda_{2n+2}]$,
$$T^{-1/2} \sum_{t=1}^{[rT]} \underline v_t = T^{-1/2} \sum_{t=1}^{[rT]} a_t v_t \Rightarrow \Lambda \left[ W_k(\lambda_{2n+1}) - W_k(\lambda_{2n}) + W_k(\lambda_{2n-1}) - \cdots + W_k(\lambda_1) \right].$$
Therefore, for $r \in (\lambda_{2n}, \lambda_{2n+2}]$,
$$T^{-1/2} \sum_{t=1}^{[rT]} \underline v_t \Rightarrow \Lambda \sum_{j=1}^{2n+1} (-1)^{j+1} W_k\!\left( r \wedge \lambda_j \right),$$
and it immediately follows for $r \in (0,1]$ that
$$T^{-1/2} \sum_{t=1}^{[rT]} \underline v_t \Rightarrow \Lambda \sum_{n=0}^{C} \mathbf{1}\left\{ \lambda_{2n} < r \le \lambda_{2n+2} \right\} \sum_{j=1}^{2n+1} (-1)^{j+1} W_k\!\left( r \wedge \lambda_j \right).$$

Proof of Theorem 2.3(a). Using Lemmas C2 and C3 it follows that
$$T^{-1} \sum_{t=1}^{T} \underline x_t \underline x_t' \Rightarrow \sum_{j=1}^{2C+1} (-1)^{j+1} \left( 1 \wedge \lambda_j \right) Q = \lambda Q, \qquad T^{-1/2} \sum_{t=1}^{T} \underline v_t \Rightarrow \Lambda \sum_{j=1}^{2C+1} (-1)^{j+1} W_k\!\left( 1 \wedge \lambda_j \right) = \Lambda \overline{W}_k,$$
which imply that
$$\sqrt{T}\left( \widehat\beta - \beta \right) = \left( T^{-1} \sum_{t=1}^{T} \underline x_t \underline x_t' \right)^{-1} T^{-1/2} \sum_{t=1}^{T} \underline v_t \Rightarrow \lambda^{-1} Q^{-1} \Lambda \overline{W}_k.$$

Lemma C4. Let $T^{-1/2} \widehat S_{[rT]} = T^{-1/2} \sum_{t=1}^{[rT]} \widehat v_t$, and let $\{\lambda_i\}$ denote the set $\{\lambda_1, \lambda_2, \dots, \lambda_{2C}\}$. Under Assumption NR$'$, for $r \in (0,1]$, as $T \to \infty$,
$$T^{-1/2} \widehat S_{[rT]} \Rightarrow \Lambda\, B_k\!\left( r, \{\lambda_i\} \right),$$
where
$$B_k\!\left( r, \{\lambda_i\} \right) = \sum_{n=0}^{C} \mathbf{1}\left\{ \lambda_{2n} < r \le \lambda_{2(n+1)} \right\} \sum_{j=1}^{2n+1} (-1)^{j+1} \left[ W_k\!\left( r \wedge \lambda_j \right) - \left( r \wedge \lambda_j \right) \lambda^{-1} \overline{W}_k \right].$$

Proof: For $r \in (0,1]$ we can write
$$T^{-1/2} \widehat S_{[rT]} = T^{-1/2} \sum_{t=1}^{[rT]} \widehat v_t = T^{-1/2} \sum_{t=1}^{[rT]} \underline v_t - T^{-1} \sum_{t=1}^{[rT]} \underline x_t \underline x_t' \sqrt{T}\left( \widehat\beta - \beta \right)$$
$$\Rightarrow \Lambda \sum_{n=0}^{C} \mathbf{1}\left\{ \lambda_{2n} < r \le \lambda_{2n+2} \right\} \sum_{j=1}^{2n+1} (-1)^{j+1} W_k\!\left( r \wedge \lambda_j \right) - \sum_{n=0}^{C} \mathbf{1}\left\{ \lambda_{2n} < r \le \lambda_{2n+2} \right\} \sum_{j=1}^{2n+1} (-1)^{j+1} \left( r \wedge \lambda_j \right) Q \left( \lambda Q \right)^{-1} \Lambda \overline{W}_k$$
$$= \Lambda \sum_{n=0}^{C} \mathbf{1}\left\{ \lambda_{2n} < r \le \lambda_{2n+2} \right\} \sum_{j=1}^{2n+1} (-1)^{j+1} \left[ W_k\!\left( r \wedge \lambda_j \right) - \left( r \wedge \lambda_j \right) \lambda^{-1} \overline{W}_k \right] \equiv \Lambda\, B_k\!\left( r, \{\lambda_i\} \right),$$
where the weak convergence, $\Rightarrow$, is straightforward given Lemmas C2-C3 and Theorem 2.3(a).

Proof of Theorem 2.3(b): We can write
$$\widehat\Omega_{AM} = T^{-1} \sum_{t=1}^{T} \sum_{s=1}^{T} k\!\left( \frac{t-s}{bT} \right) \widehat v_t \widehat v_s' = \frac{1}{T} \sum_{t=1}^{T-1} \frac{1}{T} \sum_{s=1}^{T-1} T^{-1/2} \widehat S_t\, T^2 \left[ k\!\left( \frac{t-s}{bT} \right) - k\!\left( \frac{t-s-1}{bT} \right) - k\!\left( \frac{t-s+1}{bT} \right) + k\!\left( \frac{t-s}{bT} \right) \right] T^{-1/2} \widehat S_s',$$
where the second line follows by application of summation by parts to each sum. By Lemma C4 and Kiefer and Vogelsang (2005) it follows that
$$\widehat\Omega_{AM} \Rightarrow \Lambda\, P\!\left( b, B_k(r, \{\lambda_i\}) \right) \Lambda'.$$
Given the lemmas, we can now sketch the proof of Theorem 2.3(c).

Proof of Theorem 2.3(c): Using Theorem 2.3(a) and the delta method, it directly follows that
$$\sqrt{T}\, r\!\left( \widehat\beta \right) \Rightarrow R\!\left( \beta_0 \right) \lambda^{-1} Q^{-1} \Lambda \overline{W}_k,$$
where $R(\beta_0) = \partial r(\beta)/\partial \beta' \big|_{\beta = \beta_0}$. Note that the limit is $q$ linear combinations of $k$ independent Wiener processes. Because Wiener processes are Gaussian, linear combinations of Wiener processes are also Gaussian. Thus, we can rewrite the $q$ linear combinations of $k$ independent Wiener processes as $q$ linear combinations of $q$ independent Wiener processes. Define the $q \times q$ matrix $D$ such that
$$D D' = R\!\left( \beta_0 \right) \lambda^{-1} Q^{-1}\, \Omega\, Q^{-1} \lambda^{-1} R\!\left( \beta_0 \right)'.$$
An equivalent representation for $R(\beta_0) \lambda^{-1} Q^{-1} \Lambda \overline{W}_k$ is given by
$$R\!\left( \beta_0 \right) \lambda^{-1} Q^{-1} \Lambda \overline{W}_k = D \overline{W}_q.$$
Using Lemma C2 and Theorem 2.3(b), it follows that
$$W_T = \sqrt{T}\, r\!\left( \widehat\beta \right)' \left[ R\!\left( \widehat\beta \right) \left( T^{-1} \sum_{t=1}^{T} \underline x_t \underline x_t' \right)^{-1} \widehat\Omega_{AM} \left( T^{-1} \sum_{t=1}^{T} \underline x_t \underline x_t' \right)^{-1} R\!\left( \widehat\beta \right)' \right]^{-1} \sqrt{T}\, r\!\left( \widehat\beta \right)$$
$$\Rightarrow \left[ R(\beta_0) \lambda^{-1} Q^{-1} \Lambda \overline{W}_k \right]' \left[ R(\beta_0) \lambda^{-1} Q^{-1} \Lambda\, P\!\left( b, B_k(r, \{\lambda_i\}) \right) \Lambda' Q^{-1} \lambda^{-1} R(\beta_0)' \right]^{-1} \left[ R(\beta_0) \lambda^{-1} Q^{-1} \Lambda \overline{W}_k \right]$$
$$= \left( D \overline{W}_q \right)' \left[ D\, P\!\left( b, B_q(r, \{\lambda_i\}) \right) D' \right]^{-1} D \overline{W}_q = \overline{W}_q' \left[ P\!\left( b, B_q(r, \{\lambda_i\}) \right) \right]^{-1} \overline{W}_q,$$
and the proof is complete. Note that for the case of one restriction, $q = 1$, it follows for the $t$-statistic that
$$t_T \Rightarrow \frac{\overline{W}_1}{\sqrt{ P\!\left( b, B_1(r, \{\lambda_i\}) \right) }}.$$

Appendix D

PROOFS FOR EQUAL SPACED STATISTIC, NON-RANDOM MISSING DATA

We define some relevant functions similar to those in Kiefer and Vogelsang (2005). Define the functions $k_b(x) = k(x/b)$,
$$\Delta^2 K^a_{ts} = k\!\left( \frac{\sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i}{bT_{ES}} \right) - k\!\left( \frac{\sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i - 1}{bT_{ES}} \right) - k\!\left( \frac{\sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i + 1}{bT_{ES}} \right) + k\!\left( \frac{\sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i}{bT_{ES}} \right),$$
$$D_{T_{ES}}(r) = T_{ES}^2 \left[ k_b\!\left( \frac{[rT_{ES}]+1}{T_{ES}} \right) - k_b\!\left( \frac{[rT_{ES}]}{T_{ES}} \right) - k_b\!\left( \frac{[rT_{ES}]}{T_{ES}} \right) + k_b\!\left( \frac{[rT_{ES}]-1}{T_{ES}} \right) \right].$$
When $k(x)$ is twice continuously differentiable,
$$\lim_{T_{ES}\to\infty} D_{T_{ES}}(r) = k_b''(r) = \frac{1}{b^2} k''\!\left( \frac{r}{b} \right), \tag{eqD.1}$$
where the limit holds uniformly in $r$ by the boundedness of the second derivative and the continuity of $k''(x)$. Let $k_b'(b)$ denote the derivative of $k_b(x)$ from the left at $x = b$. Then, by definition,
$$b^{-1} k'_{-}(1) = k_b'(b) = \lim_{T_{ES}\to\infty} T_{ES} \left[ k_b(b) - k_b\!\left( b - \frac{1}{T_{ES}} \right) \right] = \lim_{T_{ES}\to\infty} T_{ES} \left[ k_b\!\left( \frac{[bT_{ES}]}{T_{ES}} \right) - k_b\!\left( \frac{[bT_{ES}]-1}{T_{ES}} \right) \right]. \tag{eqD.2}$$
Throughout this section we assume that $M_{ES} = bT_{ES}$ where $b \in (0,1]$ is fixed. For notational purposes we let a summation be zero whenever the starting value is larger than the final value; for example, for a sequence $\{a_k\}$ we have $\sum_{k=1}^{0} a_k = 0$.

Lemma D1. An equivalent expression for $\widehat\Omega_{ES}$ is given by
$$\widehat\Omega_{ES} = \frac{1}{T_{ES}} \sum_{t=1}^{T-1} \sum_{s=1}^{T-1} a_{t+1} a_{s+1}\, \widehat S_t\, \Delta^2 K^a_{ts}\, \widehat S_s'.$$
Proof: First rewrite $\widehat\Omega_{ES}$ using summation by parts (see Kiefer and Vogelsang (2005) for details):
$$\widehat\Omega_{ES} = \frac{1}{T_{ES}} \sum_{t=1}^{T} \sum_{s=1}^{T} k\!\left( \frac{\sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i}{bT_{ES}} \right) \widehat v_t \widehat v_s'$$
$$= \frac{1}{T_{ES}} \sum_{t=1}^{T-1} \sum_{s=1}^{T-1} \widehat S_t \left[ k\!\left( \frac{\sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i}{bT_{ES}} \right) - k\!\left( \frac{\sum_{i=1}^{t} a_i - \sum_{i=1}^{s+1} a_i}{bT_{ES}} \right) - k\!\left( \frac{\sum_{i=1}^{t+1} a_i - \sum_{i=1}^{s} a_i}{bT_{ES}} \right) + k\!\left( \frac{\sum_{i=1}^{t+1} a_i - \sum_{i=1}^{s+1} a_i}{bT_{ES}} \right) \right] \widehat S_s'.$$
Note that if $a_{t+1} = a_{s+1} = 1$, it follows that
$$k\!\left( \frac{\sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i}{bT_{ES}} \right) - k\!\left( \frac{\sum_{i=1}^{t} a_i - \sum_{i=1}^{s+1} a_i}{bT_{ES}} \right) - k\!\left( \frac{\sum_{i=1}^{t+1} a_i - \sum_{i=1}^{s} a_i}{bT_{ES}} \right) + k\!\left( \frac{\sum_{i=1}^{t+1} a_i - \sum_{i=1}^{s+1} a_i}{bT_{ES}} \right)$$
$$= k\!\left( \frac{\sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i}{bT_{ES}} \right) - k\!\left( \frac{\sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i - 1}{bT_{ES}} \right) - k\!\left( \frac{\sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i + 1}{bT_{ES}} \right) + k\!\left( \frac{\sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i}{bT_{ES}} \right) \equiv \Delta^2 K^a_{ts}.$$
However, when $a_{t+1} = 0$ and/or $a_{s+1} = 0$, it follows that
$$k\!\left( \frac{\sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i}{bT_{ES}} \right) - k\!\left( \frac{\sum_{i=1}^{t} a_i - \sum_{i=1}^{s+1} a_i}{bT_{ES}} \right) - k\!\left( \frac{\sum_{i=1}^{t+1} a_i - \sum_{i=1}^{s} a_i}{bT_{ES}} \right) + k\!\left( \frac{\sum_{i=1}^{t+1} a_i - \sum_{i=1}^{s+1} a_i}{bT_{ES}} \right) = 0 \ne \Delta^2 K^a_{ts}.$$
This holds because if $a_{t+1} = 0$, then the first term cancels out the third term and the second term cancels out the fourth term, while if $a_{s+1} = 0$ the first two terms cancel each other out and the last two terms cancel each other out. Whenever $a_{t+1} = 0$ and/or $a_{s+1} = 0$, we require the argument of the sum to be zero. This can be accomplished by scaling the argument of the sum by $a_{t+1} a_{s+1}$. Using this device it follows that
$$\widehat\Omega_{ES} = \frac{1}{T_{ES}} \sum_{t=1}^{T-1} \sum_{s=1}^{T-1} a_{t+1} a_{s+1}\, \widehat S_t\, \Delta^2 K^a_{ts}\, \widehat S_s',$$
completing the proof.
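Lemma D1 is an exact finite-sample identity (given the OLS first-order condition, which makes $\widehat S_T = 0$ so the boundary terms in the summation by parts vanish), so it can be verified directly. A hedged numerical sketch, with an illustrative missing pattern and the Bartlett kernel standing in for $k(\cdot)$:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 40
a = np.ones(T, dtype=int)
a[10:15] = 0; a[25:30] = 0               # two illustrative missing clusters
vhat = a * rng.standard_normal(T)        # amplitude-modulated residuals (zero when missing)
vhat[a == 1] -= vhat[a == 1].mean()      # impose the OLS FOC so that S_hat_T = 0

T_ES = int(a.sum())
M = 7.0                                  # bandwidth b * T_ES (illustrative)
k = lambda x: max(0.0, 1.0 - abs(x))     # Bartlett kernel as a stand-in for k(.)
cs = np.concatenate(([0], np.cumsum(a)))         # cs[t] = sum_{i<=t} a_i
S = np.concatenate(([0.0], np.cumsum(vhat)))     # S[t] = S_hat_t

# Direct form: kernel weights at "equal spaced" distances cs[t] - cs[s]
direct = sum(k((cs[t] - cs[s]) / M) * vhat[t - 1] * vhat[s - 1]
             for t in range(1, T + 1) for s in range(1, T + 1)) / T_ES

# Lemma D1 form: a_{t+1} a_{s+1} * S_hat_t * Delta^2 K^a_{ts} * S_hat_s
def d2K(t, s):
    d = cs[t] - cs[s]
    return 2 * k(d / M) - k((d - 1) / M) - k((d + 1) / M)

lemma = sum(a[t] * a[s] * S[t] * d2K(t, s) * S[s]
            for t in range(1, T) for s in range(1, T)) / T_ES
assert abs(direct - lemma) < 1e-10
```

The cancellation argument in the proof is kernel-free, so the check works for any choice of `k`.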
We next prove a collection of lemmas used to establish the limit of $\widehat\Omega_{ES}$. The first set of lemmas is algebraic and mechanical, whereas the second set works out the limits of the components of $\widehat\Omega_{ES}$.

Lemma D2. Under Assumption NR$'$6, it follows as $T \to \infty$ that
$$T_{ES}^{-1} \left( \sum_{t=1}^{[rT]} a_t - \sum_{t=1}^{[uT]} a_t \right) \to \lambda^{-1} \left( \sum_{n=0}^{C} \mathbf{1}\left\{ \lambda_{2n} < r \le \lambda_{2(n+1)} \right\} \sum_{j=1}^{2n+1} (-1)^{j+1} \left( r \wedge \lambda_j \right) - \sum_{l=0}^{C} \mathbf{1}\left\{ \lambda_{2l} < u \le \lambda_{2(l+1)} \right\} \sum_{j=1}^{2l+1} (-1)^{j+1} \left( u \wedge \lambda_j \right) \right).$$

Proof: It is sufficient to establish the limit of $T_{ES}^{-1} \sum_{t=1}^{[rT]} a_t$. First consider the behavior of $T^{-1} \sum_{t=1}^{[rT]} a_t$. There are two possibilities depending on the value of $r$: the first is when $r$ is in an interval such that data are observed at $t = [rT]$, and the second is when data are missing at $t = [rT]$. Note that data are observed at $t = [rT]$ whenever $r \in (\lambda_{2n}, \lambda_{2n+1}]$, $n = 0, \dots, C$. Therefore, when $r \in (\lambda_{2n}, \lambda_{2n+1}]$, we can write
$$T^{-1} \sum_{t=1}^{[rT]} a_t = T^{-1} \sum_{t=1}^{[rT]} 1 - T^{-1} \sum_{t=1}^{[\lambda_{2n}T]} 1 + T^{-1} \sum_{t=1}^{[\lambda_{2n-1}T]} 1 - \cdots + T^{-1} \sum_{t=1}^{[\lambda_1 T]} 1 \to r - \lambda_{2n} + \lambda_{2n-1} - \cdots + \lambda_1 = r + \sum_{j=1}^{2n} (-1)^{j+1} \lambda_j, \tag{eqD.3}$$
where the term $\sum_{j=1}^{2n} (-1)^{j+1} \lambda_j$ is removing the missing portions from $r$. In contrast, when $r \in (\lambda_{2n+1}, \lambda_{2n+2}]$, data are missing at $t = [rT]$. Hence, when $r \in (\lambda_{2n+1}, \lambda_{2n+2}]$, we can write
$$T^{-1} \sum_{t=1}^{[rT]} a_t = T^{-1} \sum_{t=1}^{[\lambda_{2n+1}T]} 1 - T^{-1} \sum_{t=1}^{[\lambda_{2n}T]} 1 + T^{-1} \sum_{t=1}^{[\lambda_{2n-1}T]} 1 - \cdots + T^{-1} \sum_{t=1}^{[\lambda_1 T]} 1 \to \lambda_{2n+1} - \lambda_{2n} + \lambda_{2n-1} - \cdots + \lambda_1 = \lambda_{2n+1} + \sum_{j=1}^{2n} (-1)^{j+1} \lambda_j. \tag{eqD.4}$$
The reason that we have a different expression compared to (eqD.3) is that $r$ is now located where the observations are missing, and we have to remove the portion of missing observations from $\lambda_{2n+1}$ rather than from $r$: from $\lambda_{2n+1}$ to $r$ there is no observed data, and thus $a_t = 0$ for $t$ in the range $[\lambda_{2n+1}T] < t \le [rT]$. Combining (eqD.3) and (eqD.4), the following holds for $r \in (\lambda_{2n}, \lambda_{2n+2}]$:
$$T^{-1} \sum_{t=1}^{[rT]} a_t = \mathbf{1}\left\{ \lambda_{2n} < r \le \lambda_{2n+1} \right\} T^{-1} \sum_{t=1}^{[rT]} a_t + \mathbf{1}\left\{ \lambda_{2n+1} < r \le \lambda_{2n+2} \right\} T^{-1} \sum_{t=1}^{[rT]} a_t$$
$$\to \mathbf{1}\left\{ \lambda_{2n} < r \le \lambda_{2n+1} \right\} \left( r + \sum_{j=1}^{2n} (-1)^{j+1} \lambda_j \right) + \mathbf{1}\left\{ \lambda_{2n+1} < r \le \lambda_{2n+2} \right\} \left( \lambda_{2n+1} + \sum_{j=1}^{2n} (-1)^{j+1} \lambda_j \right) = \mathbf{1}\left\{ \lambda_{2n} < r \le \lambda_{2n+2} \right\} \sum_{j=1}^{2n+1} (-1)^{j+1} \left( r \wedge \lambda_j \right).$$
It immediately follows for a general value of $r \in (0,1]$ that
$$T^{-1} \sum_{t=1}^{[rT]} a_t \to \sum_{n=0}^{C} \mathbf{1}\left\{ \lambda_{2n} < r \le \lambda_{2n+2} \right\} \sum_{j=1}^{2n+1} (-1)^{j+1} \left( r \wedge \lambda_j \right). \tag{eqD.5}$$
Applying the result given by (eqD.5) for the case of $r = 1$ gives
$$\frac{T_{ES}}{T} = T^{-1} \sum_{t=1}^{T} a_t \to \sum_{j=1}^{2C+1} (-1)^{j+1} \lambda_j \equiv \lambda. \tag{eqD.6}$$
Using (eqD.5) and (eqD.6) it follows that
$$T_{ES}^{-1} \sum_{t=1}^{[rT]} a_t = \left( \frac{T_{ES}}{T} \right)^{-1} T^{-1} \sum_{t=1}^{[rT]} a_t \to \frac{1}{\lambda} \sum_{n=0}^{C} \mathbf{1}\left\{ \lambda_{2n} < r \le \lambda_{2(n+1)} \right\} \sum_{j=1}^{2n+1} (-1)^{j+1} \left( r \wedge \lambda_j \right),$$
and the lemma is established.

Lemma D3. The following algebraic relationship holds:
$$a_{t+1} a_{s+1} = \sum_{n=0}^{C} \sum_{l=0}^{C} \mathbf{1}\left\{ T_{2n} \le t \le T_{2n+1} - 1 \right\} \mathbf{1}\left\{ T_{2l} \le s \le T_{2l+1} - 1 \right\}.$$

Proof: Recall that data are observed when there exists a value of $n$ such that $T_{2n} + 1 \le t \le T_{2n+1}$ (see (1)). Therefore, $a_{t+1} = 1$ implies that there is a value of $n$ such that
$$T_{2n} + 1 \le t + 1 \le T_{2n+1}, \quad \text{or equivalently} \quad T_{2n} \le t \le T_{2n+1} - 1.$$
If $a_{t+1} = 0$, then $t$ does not satisfy this inequality for any value of $n$. Therefore, we may write
$$a_{t+1} = \sum_{n=0}^{C} \mathbf{1}\left\{ T_{2n} \le t \le T_{2n+1} - 1 \right\},$$
and it directly follows that
$$a_{t+1} a_{s+1} = \sum_{n=0}^{C} \mathbf{1}\left\{ T_{2n} \le t \le T_{2n+1} - 1 \right\} \sum_{l=0}^{C} \mathbf{1}\left\{ T_{2l} \le s \le T_{2l+1} - 1 \right\} = \sum_{n=0}^{C} \sum_{l=0}^{C} \mathbf{1}\left\{ T_{2n} \le t \le T_{2n+1} - 1 \right\} \mathbf{1}\left\{ T_{2l} \le s \le T_{2l+1} - 1 \right\}.$$

Lemma D4.
The following algebraic relationship holds:
$$a_{t+1} a_{s+1} \mathbf{1}\left\{ \left| \sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i \right| < [bT_{ES}] \right\} = \sum_{n=0}^{C} \sum_{l=0}^{C} \mathbf{1}\left\{ T_{2n} \le t \le T_{2n+1} - 1 \right\} \mathbf{1}\left\{ T_{2l} \le s \le T_{2l+1} - 1 \right\} \mathbf{1}\left\{ |t - s| < [bT_{ES}] + \sum_{k=2(n\wedge l)+1}^{2(n\vee l)} (-1)^k T_k \right\}.$$

Proof: Note that the number of missing observations in the first missing cluster is $(T_2 - T_1)$, the number of missing observations in the second missing cluster is $(T_4 - T_3)$, and so forth. Hence the $n$th missing cluster has $(T_{2n} - T_{2n-1})$ missing observations. Therefore, the total number of missing observations in the first $n$ missing clusters is
$$\sum_{k=1}^{n} \left( T_{2k} - T_{2k-1} \right). \tag{eqD.7}$$
Suppose that $t$ is in the range $T_{2n} \le t \le T_{2n+1} - 1$. We want to count the number of observed data points up to time $t$. We further divide this interval for $t$ into two parts because, when $t$ is in the range $T_{2n} < t \le T_{2n+1} - 1$, data are observed at time $t$, while for $t = T_{2n}$ data are missing. First consider the case $T_{2n} < t \le T_{2n+1} - 1$. In this case there are $n$ missing clusters before time $t$, so the number of missing observations up to time $t$ is $\sum_{k=1}^{n} (T_{2k} - T_{2k-1})$ from (eqD.7). Subtracting this number of missing observations from $t$, we obtain the number of observed data points up to time $t$. Therefore it follows that
$$\sum_{i=1}^{t} a_i = t - \sum_{k=1}^{n} \left( T_{2k} - T_{2k-1} \right) = t - \sum_{k=1}^{2n} (-1)^k T_k. \tag{eqD.8}$$
Next, consider the case of $t = T_{2n}$. Because data are not observed at $t = T_{2n}$, instead of counting all the way up to time $t$, we only count up to time $T_{2n-1}$, which is the last time period where the data are available. There are $(n-1)$ missing clusters up to time $T_{2n-1}$. Then, using (eqD.7), the number of missing observations in those $(n-1)$ clusters is $\sum_{k=1}^{n-1} (T_{2k} - T_{2k-1})$. Hence the number of observed data points up to time $T_{2n}$ is
$$\sum_{i=1}^{T_{2n}} a_i = T_{2n-1} - \sum_{k=1}^{n-1} \left( T_{2k} - T_{2k-1} \right),$$
which can be re-expressed as
$$T_{2n-1} - \sum_{k=1}^{n-1} \left( T_{2k} - T_{2k-1} \right) = T_{2n} - \left( T_{2n} - T_{2n-1} \right) - \sum_{k=1}^{n-1} \left( T_{2k} - T_{2k-1} \right) = T_{2n} - \sum_{k=1}^{n} \left( T_{2k} - T_{2k-1} \right) = T_{2n} - \sum_{k=1}^{2n} (-1)^k T_k,$$
showing that the $t = T_{2n}$ case can also be expressed as (eqD.8). Therefore, when $t$ falls in the range $T_{2n} \le t \le T_{2n+1} - 1$, it follows that
$$\sum_{i=1}^{t} a_i = t - \sum_{k=1}^{2n} (-1)^k T_k. \tag{eqD.9}$$
Now consider values of $t$ and $s$ with $t \ge s$ such that $T_{2n} \le t \le T_{2n+1} - 1$ and $T_{2l} \le s \le T_{2l+1} - 1$. Note that because $t \ge s$ it follows that $n \ge l$. Using (eqD.9) gives
$$\sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i = \left( t - \sum_{k=1}^{2n} (-1)^k T_k \right) - \left( s - \sum_{k=1}^{2l} (-1)^k T_k \right) = (t - s) - \sum_{k=2l+1}^{2n} (-1)^k T_k. \tag{eqD.10}$$
Similarly, when $t \le s$,
$$\sum_{i=1}^{s} a_i - \sum_{i=1}^{t} a_i = (s - t) - \sum_{k=2n+1}^{2l} (-1)^k T_k.$$
Therefore, we can write
$$\left| \sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i \right| = |t - s| - \sum_{k=2(n\wedge l)+1}^{2(n\vee l)} (-1)^k T_k.$$
Using this expression, $\left| \sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i \right| < [bT_{ES}]$ is equivalent to
$$|t - s| < [bT_{ES}] + \sum_{k=2(n\wedge l)+1}^{2(n\vee l)} (-1)^k T_k.$$
From the proof of Lemma D3 we know that $a_{t+1} = 1$ and $a_{s+1} = 1$ if and only if there is a value of $n$ and a value of $l$ such that
$$T_{2n} \le t \le T_{2n+1} - 1, \qquad T_{2l} \le s \le T_{2l+1} - 1,$$
and when this is the case, $\left| \sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i \right| < [bT_{ES}]$ if and only if
$$|t - s| < [bT_{ES}] + \sum_{k=2(n\wedge l)+1}^{2(n\vee l)} (-1)^k T_k,$$
and it immediately follows for this case that
$$a_{t+1} a_{s+1} \mathbf{1}\left\{ \left| \sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i \right| < [bT_{ES}] \right\} = \mathbf{1}\left\{ T_{2n} \le t \le T_{2n+1} - 1 \right\} \mathbf{1}\left\{ T_{2l} \le s \le T_{2l+1} - 1 \right\} \mathbf{1}\left\{ |t - s| < [bT_{ES}] + \sum_{k=2(n\wedge l)+1}^{2(n\vee l)} (-1)^k T_k \right\}.$$
As was done in the proof of Lemma D3, we can write for general values of $t$ and $s$:
$$a_{t+1} a_{s+1} \mathbf{1}\left\{ \left| \sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i \right| < [bT_{ES}] \right\} = \sum_{n=0}^{C} \sum_{l=0}^{C} \mathbf{1}\left\{ T_{2n} \le t \le T_{2n+1} - 1 \right\} \mathbf{1}\left\{ T_{2l} \le s \le T_{2l+1} - 1 \right\} \mathbf{1}\left\{ |t - s| < [bT_{ES}] + \sum_{k=2(n\wedge l)+1}^{2(n\vee l)} (-1)^k T_k \right\}.$$
This completes the proof.

Lemma D5. Suppose that $t > s$. Then the following algebraic result holds:
$$a_{t+1} a_{s+1} \mathbf{1}\left\{ \sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i = [bT_{ES}] \right\} = \sum_{n=0}^{C} \sum_{l=0}^{n} \mathbf{1}\left\{ T_{2n} - [bT_{ES}] - \sum_{k=2l+1}^{2n} (-1)^k T_k \le s \le T_{2n+1} - 1 - [bT_{ES}] - \sum_{k=2l+1}^{2n} (-1)^k T_k \right\} \mathbf{1}\left\{ T_{2l} \le s \le T_{2l+1} - 1 \right\} \mathbf{1}\left\{ t = s + [bT_{ES}] + \sum_{k=2l+1}^{2n} (-1)^k T_k \right\}.$$

Proof: From the proof of Lemma D3 we know that $a_{t+1} = 1$ and $a_{s+1} = 1$ if and only if there is a value of $n$ and a value of $l$ such that
$$T_{2n} \le t \le T_{2n+1} - 1, \qquad T_{2l} \le s \le T_{2l+1} - 1.$$
From (eqD.10) in Lemma D4, we also know that when $\sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i = [bT_{ES}]$ it follows that
$$t = s + [bT_{ES}] + \sum_{k=2l+1}^{2n} (-1)^k T_k$$
because $t > s$. Plugging this formula for $t$ into the inequality $T_{2n} \le t \le T_{2n+1} - 1$ gives
$$T_{2n} \le s + [bT_{ES}] + \sum_{k=2l+1}^{2n} (-1)^k T_k \le T_{2n+1} - 1,$$
which can be rearranged as
$$T_{2n} - [bT_{ES}] - \sum_{k=2l+1}^{2n} (-1)^k T_k \le s \le T_{2n+1} - 1 - [bT_{ES}] - \sum_{k=2l+1}^{2n} (-1)^k T_k.$$
Hence, given $t > s$, the conditions $a_{t+1} = 1$, $a_{s+1} = 1$, and $\sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i = [bT_{ES}]$ hold if and only if the following three conditions are satisfied for some value of $n$ and some value of $l$:
$$T_{2n} - [bT_{ES}] - \sum_{k=2l+1}^{2n} (-1)^k T_k \le s \le T_{2n+1} - 1 - [bT_{ES}] - \sum_{k=2l+1}^{2n} (-1)^k T_k, \qquad T_{2l} \le s \le T_{2l+1} - 1, \qquad t = s + [bT_{ES}] + \sum_{k=2l+1}^{2n} (-1)^k T_k.$$
In terms of indicator functions we express this equivalence as
$$a_{t+1} a_{s+1} \mathbf{1}\left\{ \sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i = [bT_{ES}] \right\} = \mathbf{1}\left\{ T_{2n} - [bT_{ES}] - \sum_{k=2l+1}^{2n} (-1)^k T_k \le s \le T_{2n+1} - 1 - [bT_{ES}] - \sum_{k=2l+1}^{2n} (-1)^k T_k \right\} \mathbf{1}\left\{ T_{2l} \le s \le T_{2l+1} - 1 \right\} \mathbf{1}\left\{ t = s + [bT_{ES}] + \sum_{k=2l+1}^{2n} (-1)^k T_k \right\}.$$
Writing more generally, as done in the proof of Lemma D3, by combining the above expression for all possible values of $n$ and $l$ with $n \ge l$ gives the desired relationship.

Lemma D6. Suppose that $t > s$. Then the following algebraic result holds:
$$a_{t+1} a_{s+1} \mathbf{1}\left\{ \sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i = [bT_{ES}] + 1 \right\} = \sum_{n=0}^{C} \sum_{l=0}^{n} \mathbf{1}\left\{ T_{2n} - [bT_{ES}] - 1 - \sum_{k=2l+1}^{2n} (-1)^k T_k \le s \le T_{2n+1} - 2 - [bT_{ES}] - \sum_{k=2l+1}^{2n} (-1)^k T_k \right\} \mathbf{1}\left\{ T_{2l} \le s \le T_{2l+1} - 1 \right\} \mathbf{1}\left\{ t = s + [bT_{ES}] + 1 + \sum_{k=2l+1}^{2n} (-1)^k T_k \right\}.$$
The proof is essentially the same as the proof of Lemma D5.

Lemma D7.
The following algebraic result holds:
$$a_{t+1} a_{s+1} \mathbf{1}\left\{ \sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i = 0 \right\} = \sum_{n=0}^{C} \mathbf{1}\left\{ T_{2n} \le t \le T_{2n+1} - 1 \right\} \mathbf{1}\left\{ t = s \right\}.$$

Proof: Note that when $a_{t+1} = 1$ and $a_{s+1} = 1$, it follows that $\sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i = 0$ if and only if $t = s$. It is obvious that the difference in sums is zero when $t = s$. The difference in sums cannot be zero if $t$ and $s$ are different. Suppose that $t > s$. Then we have
$$\sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i = a_t + a_{t-1} + \cdots + a_{s+2} + a_{s+1} \ne 0,$$
because $a_{s+1} = 1$. We have the same conclusion when $t < s$ due to the fact that $a_{t+1} = 1$. Hence, $a_{t+1} = 1$, $a_{s+1} = 1$, and $\sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i = 0$ are satisfied if and only if $t = s$ and there is a value of $n$ such that $T_{2n} \le t \le T_{2n+1} - 1$. In terms of indicator functions we can write these conditions as
$$a_{t+1} a_{s+1} \mathbf{1}\left\{ \sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i = 0 \right\} = \mathbf{1}\left\{ T_{2n} \le t \le T_{2n+1} - 1 \right\} \mathbf{1}\left\{ t = s \right\}.$$
Writing more generally, as done in the proof of Lemma D3, we have the desired result:
$$a_{t+1} a_{s+1} \mathbf{1}\left\{ \sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i = 0 \right\} = \sum_{n=0}^{C} \mathbf{1}\left\{ T_{2n} \le t \le T_{2n+1} - 1 \right\} \mathbf{1}\left\{ t = s \right\}.$$

The next collection of lemmas establishes the limit of $\widehat\Omega_{ES}$.

Lemma D8. Let $M_{ES} = bT_{ES}$ where $b$ is a constant with $b \in (0,1]$. When $k(x)$ is twice continuously differentiable, under Assumptions NR$'$, as $T \to \infty$,
$$\widehat\Omega_{ES} \Rightarrow \Lambda\, P_{ES1}\!\left( b, B_k, \{\lambda\}_1^{2C} \right) \Lambda',$$
where
$$P_{ES1}\!\left( b, B_k, \{\lambda\}_1^{2C} \right) = \frac{1}{b^2 \lambda^3} \sum_{n=0}^{C} \sum_{l=0}^{C} \int_{\lambda_{2n}}^{\lambda_{2n+1}} \int_{\lambda_{2l}}^{\lambda_{2l+1}} k''\!\left( (\lambda b)^{-1} \left[ \sum_{j=1}^{2n+1} (-1)^{j+1} \left( r \wedge \lambda_j \right) - \sum_{j=1}^{2l+1} (-1)^{j+1} \left( u \wedge \lambda_j \right) \right] \right) B_k\!\left( r, \{\lambda_i\} \right) B_k\!\left( u, \{\lambda_i\} \right)' du\, dr.$$

Proof: Using the definitions at the beginning of this appendix, it is straightforward to show that
$$T_{ES}^2\, \Delta^2 K^a_{ts} = D_{T_{ES}}\!\left( T_{ES}^{-1} \left( \sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i \right) \right).$$
From Lemma D1 we know that
$$\widehat\Omega_{ES} = \frac{1}{T_{ES}} \sum_{t=1}^{T-1} \sum_{s=1}^{T-1} a_{t+1} a_{s+1}\, \widehat S_t\, \Delta^2 K^a_{ts}\, \widehat S_s'.$$
Re-expressing $\widehat\Omega_{ES}$ in terms of $D_{T_{ES}}(r)$ gives
$$\widehat\Omega_{ES} = \frac{1}{T_{ES}} \sum_{t=1}^{T-1} \sum_{s=1}^{T-1} a_{t+1} a_{s+1}\, \widehat S_t\, T_{ES}^{-2}\, D_{T_{ES}}\!\left( T_{ES}^{-1} \left( \sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i \right) \right) \widehat S_s'.$$
Plugging in the expression from Lemma D3 gives
$$\widehat\Omega_{ES} = \frac{1}{T_{ES}} \sum_{t=1}^{T-1} \sum_{s=1}^{T-1} \sum_{n=0}^{C} \sum_{l=0}^{C} \mathbf{1}\left\{ T_{2n} \le t \le T_{2n+1} - 1 \right\} \mathbf{1}\left\{ T_{2l} \le s \le T_{2l+1} - 1 \right\} \widehat S_t\, T_{ES}^{-2}\, D_{T_{ES}}\!\left( T_{ES}^{-1} \left( \sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i \right) \right) \widehat S_s'$$
$$= \left( \frac{T}{T_{ES}} \right)^3 \frac{1}{T} \sum_{t=1}^{T-1} \frac{1}{T} \sum_{s=1}^{T-1} \left[ \sum_{n=0}^{C} \sum_{l=0}^{C} \mathbf{1}\left\{ T_{2n} \le t \le T_{2n+1} - 1 \right\} \mathbf{1}\left\{ T_{2l} \le s \le T_{2l+1} - 1 \right\} T^{-1/2} \widehat S_t\, D_{T_{ES}}\!\left( T_{ES}^{-1} \left( \sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i \right) \right) T^{-1/2} \widehat S_s' \right].$$
Mapping $t$ to $[rT]$ and $s$ to $[uT]$ we can write
$$\widehat\Omega_{ES} = \left( \frac{T}{T_{ES}} \right)^3 \int_0^1 \int_0^1 \left[ \sum_{n=0}^{C} \sum_{l=0}^{C} \mathbf{1}\left\{ \lambda_{2n} \le r < \lambda_{2n+1} \right\} \mathbf{1}\left\{ \lambda_{2l} \le u < \lambda_{2l+1} \right\} T^{-1/2} \widehat S_{[rT]}\, D_{T_{ES}}\!\left( T_{ES}^{-1} \left( \sum_{i=1}^{[rT]} a_i - \sum_{i=1}^{[uT]} a_i \right) \right) T^{-1/2} \widehat S_{[uT]}' \right] du\, dr.$$
Using Lemma D2, Lemma C4, (eqD.1), (eqD.6), and the continuous mapping theorem, it follows that
$$\widehat\Omega_{ES} \Rightarrow \lambda^{-3} \int_0^1 \int_0^1 \left[ \sum_{n=0}^{C} \sum_{l=0}^{C} \mathbf{1}\left\{ \lambda_{2n} \le r < \lambda_{2n+1} \right\} \mathbf{1}\left\{ \lambda_{2l} \le u < \lambda_{2l+1} \right\} k_b''\!\left( \lambda^{-1} \left( \sum_{n=0}^{C} \mathbf{1}\left\{ \lambda_{2n} < r \le \lambda_{2(n+1)} \right\} \sum_{j=1}^{2n+1} (-1)^{j+1} \left( r \wedge \lambda_j \right) - \sum_{l=0}^{C} \mathbf{1}\left\{ \lambda_{2l} < u \le \lambda_{2(l+1)} \right\} \sum_{j=1}^{2l+1} (-1)^{j+1} \left( u \wedge \lambda_j \right) \right) \right) \Lambda\, B_k\!\left( r, \{\lambda_i\} \right) B_k\!\left( u, \{\lambda_i\} \right)' \Lambda' \right] du\, dr.$$
The limiting expression can be simplified by breaking up the integrals into the ranges indicated by the indicator functions and using the fact that, when $\lambda_{2n} \le r < \lambda_{2n+1}$,
$$\sum_{n=0}^{C} \mathbf{1}\left\{ \lambda_{2n} < r \le \lambda_{2(n+1)} \right\} \sum_{j=1}^{2n+1} (-1)^{j+1} \left( r \wedge \lambda_j \right) = \sum_{j=1}^{2n+1} (-1)^{j+1} \left( r \wedge \lambda_j \right).$$
Therefore we have
$$\widehat\Omega_{ES} \Rightarrow \Lambda \left\{ \frac{1}{b^2 \lambda^3} \sum_{n=0}^{C} \sum_{l=0}^{C} \int_{\lambda_{2n}}^{\lambda_{2n+1}} \int_{\lambda_{2l}}^{\lambda_{2l+1}} k''\!\left( (\lambda b)^{-1} \left[ \sum_{j=1}^{2n+1} (-1)^{j+1} \left( r \wedge \lambda_j \right) - \sum_{j=1}^{2l+1} (-1)^{j+1} \left( u \wedge \lambda_j \right) \right] \right) B_k\!\left( r, \{\lambda_i\} \right) B_k\!\left( u, \{\lambda_i\} \right)' du\, dr \right\} \Lambda' = \Lambda\, P_{ES1}\!\left( b, B_k, \{\lambda\}_1^{2C} \right) \Lambda',$$
completing the proof.

Lemma D9. Let $M_{ES} = bT_{ES}$ where $b$ is a constant with $b \in (0,1]$. Under Assumption NR$'$, when $k(x)$ is continuous, $k(x) = 0$ for $|x| \ge 1$, and $k(x)$ is twice continuously differentiable everywhere except for $|x| = 1$, as $T \to \infty$,
$$\widehat\Omega_{ES} \Rightarrow \Lambda\, P_{ES2}\!\left( b, B_k, \{\lambda\}_1^{2C} \right) \Lambda',$$
where
$$P_{ES2}\!\left( b, B_k, \{\lambda\}_1^{2C} \right) \equiv \frac{1}{b^2 \lambda^3} \sum_{n=0}^{C} \sum_{l=0}^{C} \int_{\lambda_{2n}}^{\lambda_{2n+1}} \int_{\lambda_{2l}}^{\lambda_{2l+1}} \mathbf{1}\left\{ |r - u| < b\lambda + \sum_{j=2(n\wedge l)+1}^{2(n\vee l)} (-1)^j \lambda_j \right\} k''\!\left( (\lambda b)^{-1} \left[ \sum_{j=1}^{2n+1} (-1)^{j+1} \left( r \wedge \lambda_j \right) - \sum_{j=1}^{2l+1} (-1)^{j+1} \left( u \wedge \lambda_j \right) \right] \right) B_k\!\left( r, \{\lambda_i\} \right) B_k\!\left( u, \{\lambda_i\} \right)' dr\, du$$
$$+ \frac{k'_{-}(1)}{b \lambda^2} \sum_{n=0}^{C} \sum_{l=0}^{n} \int_{\lambda_{2l}}^{\lambda_{2l+1}} \mathbf{1}\left\{ \lambda_{2n} - b\lambda - \sum_{j=2l+1}^{2n} (-1)^j \lambda_j < u \le \lambda_{2n+1} - b\lambda - \sum_{j=2l+1}^{2n} (-1)^j \lambda_j \right\} \left\{ B_k\!\left( u + b\lambda + \sum_{j=2l+1}^{2n} (-1)^j \lambda_j, \{\lambda_i\} \right) B_k\!\left( u, \{\lambda_i\} \right)' + B_k\!\left( u, \{\lambda_i\} \right) B_k\!\left( u + b\lambda + \sum_{j=2l+1}^{2n} (-1)^j \lambda_j, \{\lambda_i\} \right)' \right\} du.$$

Proof: Straightforward calculations give
$$\Delta^2 K^a_{ts} = \begin{cases} T_{ES}^{-2}\, D_{T_{ES}}\!\left( T_{ES}^{-1} \left( \sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i \right) \right) & \text{if } \left| \sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i \right| < [bT_{ES}], \\[1ex] 2k\!\left( \dfrac{[bT_{ES}]}{bT_{ES}} \right) - k\!\left( \dfrac{[bT_{ES}]-1}{bT_{ES}} \right) & \text{if } \left| \sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i \right| = [bT_{ES}], \\[1ex] -k\!\left( \dfrac{[bT_{ES}]}{bT_{ES}} \right) & \text{if } \left| \sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i \right| = [bT_{ES}] + 1, \\[1ex] 0 & \text{otherwise}. \end{cases}$$
We rewrite $\widehat\Omega_{ES}$ using Lemma D1 and dividing it into the three nonzero cases as determined by $\Delta^2 K^a_{ts}$:
$$\widehat\Omega_{ES} = \frac{1}{T_{ES}} \sum_{t=1}^{T-1} \sum_{s=1}^{T-1} a_{t+1} a_{s+1}\, \widehat S_t\, \Delta^2 K^a_{ts}\, \widehat S_s'$$
$$= \frac{1}{T_{ES}} \sum_{t=1}^{T-1} \sum_{s=1}^{T-1} a_{t+1} a_{s+1} \mathbf{1}\left\{ \left| \sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i \right| < [bT_{ES}] \right\} T_{ES}^{-2}\, D_{T_{ES}}\!\left( T_{ES}^{-1} \left( \sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i \right) \right) \widehat S_t \widehat S_s'$$
$$+ \frac{1}{T_{ES}} \sum_{t=1}^{T-1} \sum_{s=1}^{T-1} a_{t+1} a_{s+1} \mathbf{1}\left\{ \left| \sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i \right| = [bT_{ES}] \right\} \left[ 2k\!\left( \frac{[bT_{ES}]}{bT_{ES}} \right) - k\!\left( \frac{[bT_{ES}]-1}{bT_{ES}} \right) \right] \widehat S_t \widehat S_s'$$
$$- \frac{1}{T_{ES}} \sum_{t=1}^{T-1} \sum_{s=1}^{T-1} a_{t+1} a_{s+1} \mathbf{1}\left\{ \left| \sum_{i=1}^{t} a_i - \sum_{i=1}^{s} a_i \right| = [bT_{ES}] + 1 \right\} k\!\left( \frac{[bT_{ES}]}{bT_{ES}} \right) \widehat S_t \widehat S_s'.$$
‹ S t ‹ S 0 s + 1 T ES T 1 å t = 1 T 1 å s = 1 a t + 1 a s + 1 1 ( t å i = 1 a i s å i = 1 a i = bT ES ) k [ bT ES ] bT ES k [ bT ES ] 1 bT ES ‹ S t ‹ S 0 s + 1 T ES T 1 å t = 1 T 1 å s = 1 a t + 1 a s + 1 1 ( t å i = 1 a i s å i = 1 a i = bT ES ) k [ bT ES ] bT ES ‹ S t ‹ S 0 s 1 T ES T 1 å t = 1 T 1 å s = 1 a t + 1 a s + 1 1 ( t å i = 1 a i s å i = 1 a i = bT ES + 1 ) k [ bT ES ] bT ES ‹ S t ‹ S 0 s = z 1 + z 2 + z 3 + z 4. Firstconsider z 1 .PluggingintheexpressionfromLemmaD4gives, z 1 = 1 T ES T 1 å t = 1 T 1 å s = 1 C å n = 0 C å l = 0 h 1 n T 2 n t T 2 n + 1 1 o 1 n T 2 l s T 2 l + 1 1 o 1 8 < : j t s j < bT ES + 2 ( n _ l ) å k = 2 ( n ^ l )+ 1 ( 1 ) k T k 9 = ; 1 T 2 ES D T ES T 1 ES t å i = 1 a i s å i = 1 a i !! ‹ S t ‹ S 0 s 3 5 = T T ES 3 1 T T 1 å t = 1 1 T T 1 å s = 1 C å n = 0 C å l = 0 h 1 n T 2 n t T 2 n + 1 1 o 1 n T 2 l s T 2 l + 1 1 o 1 8 < : j t s j < bT ES + 2 ( n _ l ) å k = 2 ( n ^ l )+ 1 ( 1 ) k T k 9 = ; D T ES T 1 ES t å i = 1 a i s å i = 1 a i !! T 1 / 2 ‹ S t T 1 / 2 ‹ S 0 s # 651 wherethesecondequalityholdsbyrescaling.Next,consider z 2 .Weusetheexpression fromLemmaD5when t > s .When s > t ,theexpressionisthesamewith t and s interchanged.When t = s , z 2 = 0.Thereforewehave z 2 = 1 T ES T 1 å t = 1 T 1 å s = 1 C å n = 0 n å l = 0 1 8 < : T 2 n [ bT ES ] 2 n å k = 2 l + 1 ( 1 ) k T k s T 2 n + 1 1 [ bT ES ] 2 n å k = 2 l + 1 ( 1 ) k T k 9 = ; 1 n T 2 l s T 2 l + 1 1 o 1 8 < : t = s + bT ES + 2 n å k = 2 l + 1 ( 1 ) k T k 9 = ; k [ bT ES ] bT ES k [ bT ES ] 1 bT ES ‹ S t ‹ S 0 s + ‹ S s ‹ S 0 t . 
Wefurthersimplify z 2 bypluggingin t = s + bT ES + å 2 n k = 2 l + 1 ( 1 ) k T k directlyrather 652 thandenotingitasanindicatorfunction.Thedoublesumcollapsestoasinglesumgiving z 2 = 1 T ES T 1 å s = 1 C å n = 0 n å l = 0 1 8 < : T 2 n [ bT ES ] 2 n å k = 2 l + 1 ( 1 ) k T k s T 2 n + 1 1 [ bT ES ] 2 n å k = 2 l + 1 ( 1 ) k T k 9 = ; 1 n T 2 l s T 2 l + 1 1 o k [ bT ES ] bT ES k [ bT ES ] 1 bT ES ‹ S s + bT ES + å 2 n k = 2 l + 1 ( 1 ) k T k ‹ S 0 s + ‹ S s ‹ S 0 s + bT ES + å 2 n k = 2 l + 1 ( 1 ) k T k ! = T T ES 2 1 T T 1 å s = 1 C å n = 0 n å l = 0 1 8 < : T 2 n [ bT ES ] 2 n å k = 2 l + 1 ( 1 ) k T k s T 2 n + 1 1 [ bT ES ] 2 n å k = 2 l + 1 ( 1 ) k T k 9 = ; 1 n T 2 l s T 2 l + 1 1 o T ES k [ bT ES ] bT ES k [ bT ES ] 1 bT ES T 1/2 ‹ S s + bT ES + å 2 n k = 2 l + 1 ( 1 ) k T k T 1/2 ‹ S 0 s + T 1/2 ‹ S s T 1/2 ‹ S 0 s + bT ES + å 2 n k = 2 l + 1 ( 1 ) k T k ! . Finallyconsider z 3 and z 4 .Because [ bT ES ]+ 1 bT ES isbeyondthetruncationpoint,itfollows that k [ bT ES ]+ 1 bT ES = 0.Therefore,wehave k [ bT ES ] bT ES = k [ bT ES ] bT ES k [ bT ES ]+ 1 bT ES , andnoticethat ( bT ES ) k [ bT ES ] bT ES =( bT ES ) k [ bT ES ] bT ES k [ bT ES ]+ 1 bT ES ! k 0 + ( 1 )= 0. Weobtainzerobecause k 0 + ( 1 ) isthederivativefromtherightofthetruncationpoint. Usingsimilarargumentsasusedfor z 2 ,itfollowsthat z 3 = o p ( 1 ) and z 4 = o p ( 1 ) because 653 k 0 + ( 1 )= 0.Combiningtheresultsfor z 1 , z 2 , z 3 , z 4 allowsustowrite ‹ W ES = T T ES 3 T 1 T 1 å t = 1 T 1 T 1 å s = 1 C å n = 0 C å l = 0 1 n T 2 n t T 2 n + 1 1 o 1 n T 2 l s T 2 l + 1 1 o 1 8 < : j t s j < bT ES + 2 ( n _ l ) å k = 2 ( l ^ n )+ 1 ( 1 ) k T k 9 = ; D T ES T 1 ES t å i = 1 a i s å i = 1 a i !! 
T 1 2 ‹ S t T 1 2 ‹ S 0 s + T T ES 2 1 T T 1 å s = 1 C å n = 0 n å l = 0 1 8 < : T 2 n bT ES 2 n å k = 2 l + 1 ( 1 ) k T k s T 2 n + 1 1 bT ES 2 n å k = 2 l + 1 ( 1 ) k T k 9 = ; 1 n T 2 l s T 2 l + 1 1 o T ES k [ bT ES ] bT ES k [ bT ES ] 1 bT ES 0 @ T 1/2 ‹ S s + bT ES + å 2 n k = 2 l + 1 ( 1 ) k T k T 1/2 ‹ S 0 1 2 s + T 1/2 ‹ S s T 1/2 ‹ S 0 s + bT ES + å 2 n k = 2 l + 1 ( 1 ) k T k ! + o p ( 1 ) . 654 UsingsimilarargumentsasintheproofofLemmaD8itfollowsthat ‹ W ES = 1 l 3 Z 1 0 Z 1 0 C å l = 0 C å n = 0 1 n l 2 n < r < l 2 n + 1 o 1 n l 2 l < u < l 2 l + 1 o 1 8 < : j r u j < b l + 2 ( n _ l ) å k = 2 ( l ^ n )+ 1 ( 1 ) k l k 9 = ; D T ES T 1 ES t å i = 1 a i s å i = 1 a i !! T 1 2 ‹ S [ rT ] T 1 2 ‹ S 0 [ uT ] dudr + 1 l 2 Z 1 0 C å n = 0 n å l = 0 1 n l 2 l < u < l 2 l + 1 o 1 8 < : l 2 n b l 2 n å k = 2 l + 1 ( 1 ) k l k < u < l 2 n + 1 b l 2 n å k = 2 l + 1 ( 1 ) k l k 9 = ; T ES k b ( b ) k b b 1 T ES 0 @ T 1/2 ‹ S h u + b l + å 2 n k = 2 l + 1 ( 1 ) k l k T i T 1/2 ‹ S 0 [ uT ] + T 1/2 ‹ S [ uT ] T 1/2 ‹ S 0 h u + b l + å 2 n k = 2 l + 1 ( 1 ) k l k T i 1 A du + o p ( 1 ) . Usinglim T ES ! 
¥ T ES k b ( b ) k b ( b 1 / T ES ) = b 1 k 0 ( 1 ) ,LemmaD2,LemmaC4, (eqD.1),(eqD.6),thecontinuousmappingtheorem,andtheusedinthe 655 proofofLemmaD8,itfollowsthat ‹ W ES ) L " 1 b 2 l 3 C å l = 0 C å n = 0 Z l 2 n + 1 l 2 n Z l 2 l + 1 l 2 l 1 8 < : j r u j < 0 @ b 2 C + 1 å l = 0 l l ( 1 ) l + 1 + 2 ( n _ l ) å k = 2 ( l ^ n )+ 1 ( 1 ) k l k 1 A 9 = ; k 00 0 @ ( l b ) 1 0 @ 2 n + 1 å j = 1 ( 1 ) j + 1 ( r ^ l j ) 2 l + 1 å j = 1 ( 1 ) j + 1 ( u ^ l j ) 1 A 1 A B k ( r , f l i g ) B k ( u , f l i g ) 0 dudr + k 0 ( 1 ) b l 2 C å n = 0 n å l = 0 Z l 2 l + 1 l 2 l 1 8 < : l 2 n b l 2 n å k = 2 l + 1 ( 1 ) k l k < u < l 2 n + 1 b l 2 n å k = 2 l + 1 ( 1 ) k l k 9 = ; 8 < : B k 0 @ u + b l + 2 n å k = 2 l + 1 ( 1 ) k l k , f l i g 1 A B k ( u , f l i g ) 0 + B k ( u , f l i g ) B k 0 @ u + b l + 2 n å k = 2 l + 1 ( 1 ) k l k , f l i g 1 A 0 9 = ; du 3 5 L 0 L P ES 2 b , B k ( f l i g ) L 0 . LemmaD10. LetM ES = bT ES wherebisaconstantwithb 2 [ 0,1 ] .UnderAssumptions NR 0 ,whenk ( x ) istheBartlettkernel,asT ! ¥ , ‹ W ES ) L P ES 3 b , B k ( f l i g ) L 0 656 where, P ES 3 b , B k ( f l i g ) = 2 b 1 l 2 C å n = 0 Z l 2 n + 1 l 2 n B k r , f l i g B k r , f l i g 0 dr 1 b 1 l 2 C å n = 0 n å l = 0 Z l 2 l + 1 l 2 l 2 4 1 8 < : l 2 n b l 2 n å k = 2 l + 1 ( 1 ) k l k u l 2 n + 1 b l 2 n å k = 2 l + 1 ( 1 ) k l k 9 = ; 8 < : B k u , f l i g B k 0 @ u + b l + 2 n å k = 2 l + 1 l k ( 1 ) k , f l i g 1 A 0 + B k 0 @ u + b l + 2 n å k = 2 l + 1 l k ( 1 ) k , f l i g 1 A B k r , f l i g 0 9 = ; 3 5 du. Proof: Usingstraightforwardalgebrawehave D 2 K a ts = 8 > > > > > > > > > > > > > < > > > > > > > > > > > > > : 2 bT ES t å a i s å a i = 0 1 bT ES + 1 [ bT ES ] bT ES t å a i s å a i =[ bT ES ] 1 [ bT ES ] bT ES t å a i s å a i =[ bT ES ]+ 1 0otherwise . 
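The three nonzero cases just displayed can be checked numerically. The following sketch (not part of the dissertation; the bandwidth value $bT_{ES}=7.5$ is an arbitrary illustrative choice) computes the Bartlett second difference $2k(d/M)-k((d+1)/M)-k((d-1)/M)$ directly and compares it with the stated case values:

```python
import math

def bartlett(x):
    """Bartlett kernel: k(x) = 1 - |x| for |x| <= 1, and 0 otherwise."""
    return max(0.0, 1.0 - abs(x))

def second_diff_K(d, M):
    """Second difference 2k(d/M) - k((d+1)/M) - k((d-1)/M) at integer lag d >= 0."""
    return 2 * bartlett(d / M) - bartlett((d + 1) / M) - bartlett((d - 1) / M)

M = 7.5                 # plays the role of b*T_ES (illustrative, non-integer on purpose)
fM = math.floor(M)      # plays the role of [b*T_ES]

for d in range(0, 20):
    if d == 0:
        expected = 2.0 / M                # case d = 0
    elif d == fM:
        expected = -1.0 / M + 1.0 - fM / M  # case |d| = [b*T_ES]
    elif d == fM + 1:
        expected = -(1.0 - fM / M)        # case |d| = [b*T_ES] + 1
    else:
        expected = 0.0                    # all other lags vanish
    assert math.isclose(second_diff_K(d, M), expected, abs_tol=1e-12)
print("all Bartlett second-difference cases verified")
```

The check confirms, in particular, that the second difference is exactly zero at every interior lag, which is what makes the Bartlett kernel collapse the double sum in the proof below.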
We rewrite $\tilde{\Omega}_{ES}$ using Lemma D1, dividing it into the three nonzero cases as determined by $\Delta^2K_{ts}^{a}$ while expanding the second term into two parts. Writing $d_{ts}=\sum_{i=1}^{t}a_i-\sum_{i=1}^{s}a_i$, this gives
\[
\tilde{\Omega}_{ES}=\frac{1}{T_{ES}}\sum_{t=1}^{T-1}\sum_{s=1}^{T-1}a_{t+1}a_{s+1}\mathbf{1}\{d_{ts}=0\}\frac{2}{bT_{ES}}\tilde{S}_t\tilde{S}_s'
-\frac{1}{T_{ES}}\sum_{t=1}^{T-1}\sum_{s=1}^{T-1}a_{t+1}a_{s+1}\mathbf{1}\{|d_{ts}|=[bT_{ES}]\}\frac{1}{bT_{ES}}\tilde{S}_t\tilde{S}_s'
\]
\[
+\frac{1}{T_{ES}}\sum_{t=1}^{T-1}\sum_{s=1}^{T-1}a_{t+1}a_{s+1}\mathbf{1}\{|d_{ts}|=[bT_{ES}]\}\Big(1-\frac{[bT_{ES}]}{bT_{ES}}\Big)\tilde{S}_t\tilde{S}_s'
-\frac{1}{T_{ES}}\sum_{t=1}^{T-1}\sum_{s=1}^{T-1}a_{t+1}a_{s+1}\mathbf{1}\{|d_{ts}|=[bT_{ES}]+1\}\Big(1-\frac{[bT_{ES}]}{bT_{ES}}\Big)\tilde{S}_t\tilde{S}_s'.
\]
Using similar arguments as used in the proof of Lemma D9 for $z_3$ and $z_4$, it is easy to show that the third and fourth terms are $o_p(1)$. Therefore, we have
\[
\tilde{\Omega}_{ES}=\frac{1}{T_{ES}}\sum_{t=1}^{T-1}\sum_{s=1}^{T-1}a_{t+1}a_{s+1}\mathbf{1}\{d_{ts}=0\}\frac{2}{bT_{ES}}\tilde{S}_t\tilde{S}_s'
-\frac{1}{T_{ES}}\sum_{t=1}^{T-1}\sum_{s=1}^{T-1}a_{t+1}a_{s+1}\mathbf{1}\{|d_{ts}|=[bT_{ES}]\}\frac{1}{bT_{ES}}\tilde{S}_t\tilde{S}_s'+o_p(1).
\]
Using Lemmas D5 and D7, and writing $m_{nl}=[bT_{ES}]+\sum_{k=2l+1}^{2n}(-1)^{k}T_k$, we can write
\[
\tilde{\Omega}_{ES}=\frac{1}{T_{ES}}\sum_{t=1}^{T-1}\sum_{s=1}^{T-1}\sum_{n=0}^{C}\mathbf{1}\{T_{2n}\le t\le T_{2n+1}-1\}\mathbf{1}\{t=s\}\frac{2}{bT_{ES}}\tilde{S}_t\tilde{S}_s'
\]
\[
-\frac{1}{T_{ES}}\sum_{t=1}^{T-1}\sum_{s=1}^{T-1}\sum_{n=0}^{C}\sum_{l=0}^{n}\mathbf{1}\{T_{2n}-m_{nl}\le s\le T_{2n+1}-1-m_{nl}\}\mathbf{1}\{T_{2l}\le s\le T_{2l+1}-1\}\mathbf{1}\{t=s+m_{nl}\}\frac{1}{bT_{ES}}\big(\tilde{S}_t\tilde{S}_s'+\tilde{S}_s\tilde{S}_t'\big)+o_p(1).
\]
We can simplify $\tilde{\Omega}_{ES}$ by plugging $t=s$ and $t=s+m_{nl}$ into the first and second terms respectively instead of using the indicator functions, to give
\[
\tilde{\Omega}_{ES}=\frac{1}{T_{ES}}\sum_{t=1}^{T-1}\sum_{n=0}^{C}\mathbf{1}\{T_{2n}\le t\le T_{2n+1}-1\}\frac{2}{bT_{ES}}\tilde{S}_t\tilde{S}_t'
-\frac{1}{T_{ES}}\sum_{s=1}^{T-1}\sum_{n=0}^{C}\sum_{l=0}^{n}\mathbf{1}\{T_{2n}-m_{nl}\le s\le T_{2n+1}-1-m_{nl}\}\mathbf{1}\{T_{2l}\le s\le T_{2l+1}-1\}\frac{1}{bT_{ES}}\big(\tilde{S}_{s+m_{nl}}\tilde{S}_s'+\tilde{S}_s\tilde{S}_{s+m_{nl}}'\big)+o_p(1)
\]
\[
=\frac{2}{b}\Big(\frac{T}{T_{ES}}\Big)^{2}\frac{1}{T}\sum_{t=1}^{T-1}\sum_{n=0}^{C}\mathbf{1}\{T_{2n}\le t\le T_{2n+1}-1\}\,T^{-1/2}\tilde{S}_t\,T^{-1/2}\tilde{S}_t'
-\frac{1}{b}\Big(\frac{T}{T_{ES}}\Big)^{2}\frac{1}{T}\sum_{s=1}^{T-1}\sum_{n=0}^{C}\sum_{l=0}^{n}\mathbf{1}\{T_{2n}-m_{nl}\le s\le T_{2n+1}-1-m_{nl}\}\mathbf{1}\{T_{2l}\le s\le T_{2l+1}-1\}\big(T^{-1/2}\tilde{S}_{s+m_{nl}}\,T^{-1/2}\tilde{S}_s'+T^{-1/2}\tilde{S}_s\,T^{-1/2}\tilde{S}_{s+m_{nl}}'\big)+o_p(1)
\]
\[
=\frac{2}{b}\Big(\frac{T}{T_{ES}}\Big)^{2}\int_0^1\sum_{n=0}^{C}\mathbf{1}\{\lambda_{2n}\le r<\lambda_{2n+1}\}\,T^{-1/2}\tilde{S}_{[rT]}\,T^{-1/2}\tilde{S}_{[rT]}'\,dr
-\frac{1}{b}\Big(\frac{T}{T_{ES}}\Big)^{2}\int_0^1\sum_{n=0}^{C}\sum_{l=0}^{n}\mathbf{1}\{\lambda_{2n}-\mu_{nl}\le u\le\lambda_{2n+1}-\mu_{nl}\}\mathbf{1}\{\lambda_{2l}\le u<\lambda_{2l+1}\}\big(T^{-1/2}\tilde{S}_{[(u+\mu_{nl})T]}\,T^{-1/2}\tilde{S}_{[uT]}'+T^{-1/2}\tilde{S}_{[uT]}\,T^{-1/2}\tilde{S}_{[(u+\mu_{nl})T]}'\big)du+o_p(1),
\]
where $\mu_{nl}=b\lambda+\sum_{k=2l+1}^{2n}(-1)^{k}\lambda_k$. Further simplification can be obtained by using the indicator functions to define the ranges of the integrals:
\[
\tilde{\Omega}_{ES}=\frac{2}{b}\Big(\frac{T}{T_{ES}}\Big)^{2}\sum_{n=0}^{C}\int_{\lambda_{2n}}^{\lambda_{2n+1}}T^{-1/2}\tilde{S}_{[rT]}\,T^{-1/2}\tilde{S}_{[rT]}'\,dr
-\frac{1}{b}\Big(\frac{T}{T_{ES}}\Big)^{2}\sum_{n=0}^{C}\sum_{l=0}^{n}\int_{\lambda_{2l}}^{\lambda_{2l+1}}\Big[\mathbf{1}\{\lambda_{2n}-\mu_{nl}\le u\le\lambda_{2n+1}-\mu_{nl}\}\big(T^{-1/2}\tilde{S}_{[(u+\mu_{nl})T]}\,T^{-1/2}\tilde{S}_{[uT]}'+T^{-1/2}\tilde{S}_{[uT]}\,T^{-1/2}\tilde{S}_{[(u+\mu_{nl})T]}'\big)\Big]du+o_p(1).
\]
Then, by Lemma C4 and the continuous mapping theorem,
\[
\tilde{\Omega}_{ES}\Rightarrow\Lambda\Bigg\{\frac{2}{b}\frac{1}{\lambda^2}\sum_{n=0}^{C}\int_{\lambda_{2n}}^{\lambda_{2n+1}}B_k(r,\{\lambda_i\})B_k(r,\{\lambda_i\})'\,dr
-\frac{1}{b}\frac{1}{\lambda^2}\sum_{n=0}^{C}\sum_{l=0}^{n}\int_{\lambda_{2l}}^{\lambda_{2l+1}}\Big[\mathbf{1}\{\lambda_{2n}-\mu_{nl}\le u\le\lambda_{2n+1}-\mu_{nl}\}\big\{B_k(u,\{\lambda_i\})B_k(u+\mu_{nl},\{\lambda_i\})'+B_k(u+\mu_{nl},\{\lambda_i\})B_k(u,\{\lambda_i\})'\big\}\Big]du\Bigg\}\Lambda'=\Lambda\,P_3^{ES}\big(b,B_k(\{\lambda_i\})\big)\Lambda'.
\]

Proof of Theorem 2.5(a): Theorem 2.5(a) directly follows from Lemmas D8--D10.

Proof of Theorem 2.5(b): Recall that the null hypothesis is $H_0: r(\beta_0)=0$ with $q$ restrictions. The Wald statistic is defined as
\[
W_T^{ES}=r(\tilde\beta_{ES})'\Big[R(\tilde\beta_{ES})\tilde V_{ES}R(\tilde\beta_{ES})'\Big]^{-1}r(\tilde\beta_{ES}),
\]
where
\[
\tilde V_{ES}=T_{ES}\Bigg(\sum_{t=1}^{T_{ES}}x_t^{ES}x_t^{ES\prime}\Bigg)^{-1}\tilde{\Omega}_{ES}\Bigg(\sum_{t=1}^{T_{ES}}x_t^{ES}x_t^{ES\prime}\Bigg)^{-1}.
\]
Using $\tilde\beta_{ES}=\hat\beta$ and $\sum_{t=1}^{T_{ES}}x_t^{ES}x_t^{ES\prime}=\sum_{t=1}^{T}x_tx_t'$, we can write
\[
W_T^{ES}=\sqrt{T}\,r(\hat\beta)'\Bigg[\frac{T_{ES}}{T}R(\hat\beta)\Bigg(\frac{1}{T}\sum_{t=1}^{T}x_tx_t'\Bigg)^{-1}\tilde{\Omega}_{ES}\Bigg(\frac{1}{T}\sum_{t=1}^{T}x_tx_t'\Bigg)^{-1}R(\hat\beta)'\Bigg]^{-1}\sqrt{T}\,r(\hat\beta).
\]
From the proof of Theorem 2.3(a), we know that
\[
\sqrt{T}\,r(\hat\beta)\Rightarrow R(\beta_0)\lambda^{-1}Q^{-1}\Lambda W_k.
\]
There exists a $q\times q$ matrix $D$ such that
\[
DD'=R(\beta_0)\lambda^{-1}Q^{-1}\,\Omega\,Q^{-1}\lambda^{-1}R(\beta_0)',
\]
and it follows that $R(\beta_0)\lambda^{-1}Q^{-1}\Lambda W_k=DW_q$. Using this result and Lemmas D8--D10 (depending on the type of kernel) it follows that
\[
W_T^{ES}\Rightarrow\big[R(\beta_0)\lambda^{-1}Q^{-1}\Lambda W_k\big]'\Big[\lambda\,R(\beta_0)\lambda^{-1}Q^{-1}\Lambda\,P^{ES}\big(b,B_k(\{\lambda_i\})\big)\Lambda'Q^{-1}\lambda^{-1}R(\beta_0)'\Big]^{-1}\big[R(\beta_0)\lambda^{-1}Q^{-1}\Lambda W_k\big]
\]
\[
=\big(DW_q\big)'\Big[\lambda\,D\,P^{ES}\big(b,B_q(\{\lambda_i\})\big)D'\Big]^{-1}DW_q=W_q'\Big[\lambda\,P^{ES}\big(b,B_q(\{\lambda_i\})\big)\Big]^{-1}W_q.
\]
For the special case of $q=1$, we have the following limit for the $t$-statistic:
\[
t_T\Rightarrow\frac{W_1}{\sqrt{\lambda\,P^{ES}\big(b,B_1(\{\lambda_i\})\big)}}.
\]
Note that the particular form of $P^{ES}\big(b,B_q(\{\lambda_i\})\big)$ is given by Lemmas D8--D10, depending on the form of the kernel.
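To illustrate how the amplitude-modulated statistics studied in this appendix are formed in practice, the following minimal sketch (not the dissertation's code; a simple location model, the function names, and the bandwidth rule $M=bT$ are all illustrative assumptions) plugs in zeros for missing observations and computes a Bartlett-kernel HAC robust $t$-statistic for the mean:

```python
import numpy as np

def bartlett_lrv(v, M):
    """Bartlett long-run variance estimate for a mean-zero series v with bandwidth M."""
    T = len(v)
    omega = np.dot(v, v) / T
    for j in range(1, int(np.floor(M)) + 1):
        gamma_j = np.dot(v[j:], v[:-j]) / T        # sample autocovariance at lag j
        omega += 2.0 * (1.0 - j / M) * gamma_j     # Bartlett weight 1 - j/M
    return omega

def am_tstat(y, b=0.1):
    """HAC robust t-statistic for the mean of a series with missing data,
    computed from the amplitude-modulated series (zeros plugged in for missing).
    The rule M = b*T mirrors the fixed-b bandwidth convention; b=0.1 is arbitrary."""
    a = (~np.isnan(y)).astype(float)       # a_t: observation indicator
    ya = np.where(np.isnan(y), 0.0, y)     # amplitude-modulated series
    T, T_obs = len(y), a.sum()
    mu_hat = ya.sum() / T_obs              # OLS estimate from the observed sample
    v = ya - a * mu_hat                    # scores a_t*(y_t - mu_hat); they sum to zero
    omega = bartlett_lrv(v, b * T)
    se = np.sqrt(T * omega) / T_obs        # Var(mu_hat) is approx. T*Omega / T_obs^2
    return mu_hat / se

rng = np.random.default_rng(0)
y = rng.standard_normal(200) + 0.5
y[rng.random(200) < 0.2] = np.nan          # roughly 20% of observations missing at random
print(am_tstat(y))
```

Under the fixed-$b$ theory above, such a statistic is compared with the nonstandard critical values implied by the limits $P^{ES}(b,\cdot)$ rather than the normal distribution.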
Appendix E

PROOFS FOR FIXED-$G$, LARGE-$n_G$ CASE WHEN $G$ EVENLY DIVIDES $T$

Proof of Theorem 3.2(a) (Asymptotic Limit of OLS): Plugging $n_G=T/G$ into (3.2) in Section 3.2, we can write
\[
\tilde\beta-\beta=\Bigg(\sum_{g=1}^{G}\sum_{t=(g-1)n_G+1}^{gn_G}x_tx_t'\Bigg)^{-1}\sum_{g=1}^{G}\sum_{t=(g-1)n_G+1}^{gn_G}v_t
=\Bigg(\sum_{g=1}^{G}\sum_{t=\left(\frac{g-1}{G}\right)T+1}^{\left(\frac{g}{G}\right)T}x_tx_t'\Bigg)^{-1}\sum_{g=1}^{G}\sum_{t=\left(\frac{g-1}{G}\right)T+1}^{\left(\frac{g}{G}\right)T}v_t.
\]
It directly follows that
\[
\sqrt{T}\big(\tilde\beta-\beta\big)=\Bigg(\sum_{g=1}^{G}T^{-1}\sum_{t=\left(\frac{g-1}{G}\right)T+1}^{\left(\frac{g}{G}\right)T}x_tx_t'\Bigg)^{-1}\sum_{g=1}^{G}T^{-1/2}\sum_{t=\left(\frac{g-1}{G}\right)T+1}^{\left(\frac{g}{G}\right)T}v_t. \tag{eqE.1}
\]
Assumption B implies
\[
T^{-1}\sum_{t=1}^{\left(\frac{g}{G}\right)T}x_tx_t'\Rightarrow\frac{g}{G}Q,\qquad
T^{-1/2}\sum_{t=1}^{\left(\frac{g}{G}\right)T}v_t\Rightarrow\Lambda W_k\Big(\frac{g}{G}\Big).
\]
Therefore from (eqE.1) it follows that
\[
\sqrt{T}\big(\tilde\beta-\beta\big)\Rightarrow\Bigg(\sum_{g=1}^{G}\Big(\frac{g}{G}-\frac{g-1}{G}\Big)Q\Bigg)^{-1}\sum_{g=1}^{G}\Lambda\Big(W_k\Big(\frac{g}{G}\Big)-W_k\Big(\frac{g-1}{G}\Big)\Big)
=\Bigg(\sum_{g=1}^{G}\frac{Q}{G}\Bigg)^{-1}\Lambda W_k(1)=Q^{-1}\Lambda W_k(1).
\]

Let $\hat S_g=\sum_{j=1}^{g}\hat v_j$. We now establish the following lemma about $\hat S_g$.

Lemma E2. Let $\tilde W_k(r)=W_k(r)-rW_k(1)$. Under Assumption B, as $T\to\infty$,
\[
T^{-1/2}\hat S_g\Rightarrow\Lambda\tilde W_k\Big(\frac{g}{G}\Big).
\]

Proof: Plugging in $n_G=T/G$ gives
\[
T^{-1/2}\hat S_g=T^{-1/2}\sum_{j=1}^{g}\hat v_j=T^{-1/2}\sum_{j=1}^{g}\sum_{t=(j-1)n_G+1}^{jn_G}\tilde v_t=T^{-1/2}\sum_{j=1}^{g}\sum_{t=\left(\frac{j-1}{G}\right)T+1}^{\left(\frac{j}{G}\right)T}\tilde v_t. \tag{eqE.2}
\]
Note that $\tilde v_t=x_t(y_t-x_t'\tilde\beta)=v_t-x_tx_t'(\tilde\beta-\beta)$. Thus, we can write
\[
T^{-1/2}\hat S_g=\sum_{j=1}^{g}\Bigg(T^{-1/2}\sum_{t=\left(\frac{j-1}{G}\right)T+1}^{\left(\frac{j}{G}\right)T}v_t-T^{-1}\sum_{t=\left(\frac{j-1}{G}\right)T+1}^{\left(\frac{j}{G}\right)T}x_tx_t'\,\sqrt{T}\big(\tilde\beta-\beta\big)\Bigg).
\]
Assumption B implies that
\[
T^{-1/2}\sum_{t=\left(\frac{j-1}{G}\right)T+1}^{\left(\frac{j}{G}\right)T}v_t\Rightarrow\Lambda\Big(W_k\Big(\frac{j}{G}\Big)-W_k\Big(\frac{j-1}{G}\Big)\Big),\qquad
T^{-1}\sum_{t=\left(\frac{j-1}{G}\right)T+1}^{\left(\frac{j}{G}\right)T}x_tx_t'\Rightarrow\Big(\frac{j}{G}-\frac{j-1}{G}\Big)Q.
\]
From Theorem 3.2(a), we know that $\sqrt{T}(\tilde\beta-\beta)\Rightarrow Q^{-1}\Lambda W_k(1)$. Therefore from (eqE.2),
\[
T^{-1/2}\hat S_g\Rightarrow\sum_{j=1}^{g}\Bigg\{\Lambda\Big(W_k\Big(\frac{j}{G}\Big)-W_k\Big(\frac{j-1}{G}\Big)\Big)-\Big(\frac{j}{G}-\frac{j-1}{G}\Big)QQ^{-1}\Lambda W_k(1)\Bigg\}
=\Lambda\Big(W_k\Big(\frac{g}{G}\Big)-\frac{g}{G}W_k(1)\Big)\equiv\Lambda\tilde W_k\Big(\frac{g}{G}\Big).
\]
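The objects appearing in Lemma E2 are easy to form from data. The following sketch (not from the dissertation; a pure location model with illustrative sizes) builds the cluster score sums $\hat v_j$ and the partial sums $\hat S_g$, and confirms that the full-sample partial sum is zero by the OLS first-order condition:

```python
import numpy as np

rng = np.random.default_rng(1)
T, G = 120, 8                      # illustrative sizes; G evenly divides T
nG = T // G                        # n_G observations per cluster
y = rng.standard_normal(T) + 1.0
beta_hat = y.mean()                # OLS in the location model x_t = 1
v_hat = y - beta_hat               # estimated scores x_t*(y_t - x_t'beta_hat)

# cluster score sums v_j and their partial sums S_g = sum_{j<=g} v_j
vbar = v_hat.reshape(G, nG).sum(axis=1)
S = np.cumsum(vbar)

# S_G = 0 exactly (up to rounding) by the OLS first-order condition
assert abs(S[-1]) < 1e-10
print(S)
```

These scaled partial sums $T^{-1/2}\hat S_g$ are the ingredients of the CHAC variance estimator analyzed next.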
Proof of Theorem 3.2(b) (Asymptotic Distribution of CHAC): Let
\[
P(G,M,\tilde W_k)=\sum_{g=1}^{G-1}\sum_{h=1}^{G-1}\tilde W_k\Big(\frac{g}{G}\Big)\Bigg(2k\Big(\frac{|g-h|}{M}\Big)-k\Big(\frac{|g-h+1|}{M}\Big)-k\Big(\frac{|g-h-1|}{M}\Big)\Bigg)\tilde W_k\Big(\frac{h}{G}\Big)'.
\]
Using summation by parts we can rewrite $\frac{G}{T}\hat\Omega$ as
\[
\frac{G}{T}\hat\Omega=\sum_{g=1}^{G-1}\sum_{h=1}^{G-1}T^{-1/2}\hat S_g\Bigg(2k\Big(\frac{|g-h|}{M}\Big)-k\Big(\frac{|g-h+1|}{M}\Big)-k\Big(\frac{|g-h-1|}{M}\Big)\Bigg)T^{-1/2}\hat S_h'
\]
\[
\Rightarrow\Lambda\Bigg[\sum_{g=1}^{G-1}\sum_{h=1}^{G-1}\tilde W_k\Big(\frac{g}{G}\Big)\Bigg(2k\Big(\frac{|g-h|}{M}\Big)-k\Big(\frac{|g-h+1|}{M}\Big)-k\Big(\frac{|g-h-1|}{M}\Big)\Bigg)\tilde W_k\Big(\frac{h}{G}\Big)'\Bigg]\Lambda'\equiv\Lambda P(G,M,\tilde W_k)\Lambda'.
\]
Weak convergence follows from Lemma E2. The above expression is valid for any kernel, but in the case of the Bartlett kernel we can further simplify $P(G,M,\tilde W_k)$ as follows. Note that for the Bartlett kernel we have
\[
2k\Big(\frac{|g-h|}{M}\Big)-k\Big(\frac{|g-h+1|}{M}\Big)-k\Big(\frac{|g-h-1|}{M}\Big)=\begin{cases}\dfrac{2}{M} & g=h,\\[6pt] -\dfrac{1}{M} & |g-h|=M,\\[6pt] 0 & \text{otherwise}.\end{cases}
\]
Therefore
\[
P(G,M,\tilde W_k)=\frac{2}{M}\sum_{g=1}^{G-1}\tilde W_k\Big(\frac{g}{G}\Big)\tilde W_k\Big(\frac{g}{G}\Big)'
-\frac{1}{M}\sum_{g=1}^{G-M-1}\Bigg(\tilde W_k\Big(\frac{g}{G}\Big)\tilde W_k\Big(\frac{g+M}{G}\Big)'+\tilde W_k\Big(\frac{g+M}{G}\Big)\tilde W_k\Big(\frac{g}{G}\Big)'\Bigg).
\]

Proof of Theorem 3.2(c) (Asymptotic Distribution of $W_{CHAC}$ and $t_{CHAC}$): Using the definition of $W_{CHAC}$ and Theorem 3.2(a,b), it follows from standard calculations that
\[
W_{CHAC}=\big(R\tilde\beta-r\big)'\Big[R\hat V_{CHAC}R'\Big]^{-1}\big(R\tilde\beta-r\big)
=\sqrt{T}\big(R\tilde\beta-r\big)'\Big[R\,T\hat V_{CHAC}\,R'\Big]^{-1}\sqrt{T}\big(R\tilde\beta-r\big)
\]
\[
=\Big[R\sqrt{T}\big(\tilde\beta-\beta\big)\Big]'\Bigg[R\Bigg(\frac{1}{T}\sum_{t=1}^{T}x_tx_t'\Bigg)^{-1}\frac{G}{T}\hat\Omega\Bigg(\frac{1}{T}\sum_{t=1}^{T}x_tx_t'\Bigg)^{-1}R'\Bigg]^{-1}R\sqrt{T}\big(\tilde\beta-\beta\big)
\]
\[
\Rightarrow\big[RQ^{-1}\Lambda W_k(1)\big]'\Big[RQ^{-1}\Lambda\,P(G,M,\tilde W_k)\,\Lambda'Q^{-1}R'\Big]^{-1}RQ^{-1}\Lambda W_k(1). \tag{eqE.3}
\]
There exists a $q\times q$ matrix $D$ such that
\[
DD'=RQ^{-1}\Omega Q^{-1}R',
\]
and it follows that $RQ^{-1}\Lambda W_k=DW_q$. Then from (eqE.3),
\[
W_{CHAC}\Rightarrow W_q(1)'D'\big[D\,P(G,M,\tilde W_q)\,D'\big]^{-1}DW_q(1)=W_q(1)'P(G,M,\tilde W_q)^{-1}W_q(1).
\]
When $q=1$,
\[
t_{CHAC}\Rightarrow\frac{W_1(1)}{\sqrt{P(G,M,\tilde W_1)}}.
\]
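The Bartlett simplification in Theorem 3.2(b) can be verified numerically: for an integer bandwidth $M$, the double sum with kernel second differences collapses to the two-term expression. A small sketch (illustrative values of $G$ and $M$, scalar partial sums; not from the dissertation):

```python
import numpy as np

def bart(x):
    """Bartlett kernel k(x) = 1 - |x| for |x| <= 1, 0 otherwise."""
    return max(0.0, 1.0 - abs(x))

rng = np.random.default_rng(2)
G, M = 12, 3                       # illustrative: G clusters, integer bandwidth M
S = np.cumsum(rng.standard_normal(G))
S[-1] = 0.0                        # mimic S_G = 0 from the OLS first-order condition
Sg = S[:-1]                        # only g = 1, ..., G-1 enter the estimator

# double-sum form with kernel second differences (indices shifted to 0-based)
double = sum(Sg[g] * Sg[h] * (2 * bart((g - h) / M)
                              - bart((g - h + 1) / M)
                              - bart((g - h - 1) / M))
             for g in range(G - 1) for h in range(G - 1))

# simplified two-term Bartlett form from Theorem 3.2(b)
simple = (2.0 / M) * np.sum(Sg**2) - (1.0 / M) * 2.0 * np.sum(Sg[:G - 1 - M] * Sg[M:])

assert np.isclose(double, simple)
print(double, simple)
```

The equality holds exactly (up to rounding) because the Bartlett second difference is $2/M$ on the diagonal, $-1/M$ at lag $M$, and zero everywhere else.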
Appendix F

PROOFS FOR FIXED-$G$, LARGE-$n_G$ CASE WHEN THE NUMBER OF OBSERVATIONS IS NOT AN EXACT MULTIPLE OF $G$

In this appendix we obtain the fixed-$G$ limits for the case where the number of clusters does not evenly divide the sample. Suppose that there are $n_G$ observations in each of the first $G-1$ clusters and $n_l\le n_G$ observations in the last cluster. Hence it follows that $T=n_G(G-1)+n_l$. Assume that $\frac{n_l}{T}\to\lambda_l$ as $T\to\infty$.

Asymptotic Limit of OLS: First notice that $\tilde\beta$ can be rewritten as
\[
\tilde\beta=\Bigg(\sum_{g=1}^{G-1}\sum_{t=(g-1)n_G+1}^{gn_G}x_tx_t'+\sum_{t=T-n_l+1}^{T}x_tx_t'\Bigg)^{-1}\Bigg(\sum_{g=1}^{G-1}\sum_{t=(g-1)n_G+1}^{gn_G}x_ty_t+\sum_{t=T-n_l+1}^{T}x_ty_t\Bigg).
\]
Also, note that
\[
T=n_G(G-1)+n_l\ \Longleftrightarrow\ n_G=\frac{T-n_l}{G-1}=\frac{1}{G-1}\Big(1-\frac{n_l}{T}\Big)T. \tag{eqF.1}
\]
Then with (eqF.1) and Assumption B,
\[
\sqrt{T}\big(\tilde\beta-\beta\big)=\Bigg(\sum_{g=1}^{G-1}T^{-1}\sum_{t=(g-1)n_G+1}^{gn_G}x_tx_t'+T^{-1}\sum_{t=T-n_l+1}^{T}x_tx_t'\Bigg)^{-1}\Bigg(\sum_{g=1}^{G-1}T^{-1/2}\sum_{t=(g-1)n_G+1}^{gn_G}v_t+T^{-1/2}\sum_{t=T-n_l+1}^{T}v_t\Bigg)
\]
\[
=\Bigg(\sum_{g=1}^{G-1}T^{-1}\sum_{t=\frac{g-1}{G-1}\left(1-\frac{n_l}{T}\right)T+1}^{\frac{g}{G-1}\left(1-\frac{n_l}{T}\right)T}x_tx_t'+T^{-1}\sum_{t=\left(1-\frac{n_l}{T}\right)T+1}^{T}x_tx_t'\Bigg)^{-1}\Bigg(\sum_{g=1}^{G-1}T^{-1/2}\sum_{t=\frac{g-1}{G-1}\left(1-\frac{n_l}{T}\right)T+1}^{\frac{g}{G-1}\left(1-\frac{n_l}{T}\right)T}v_t+T^{-1/2}\sum_{t=\left(1-\frac{n_l}{T}\right)T+1}^{T}v_t\Bigg)
\]
\[
\Rightarrow\Bigg(\sum_{g=1}^{G-1}\Big(\frac{g}{G-1}-\frac{g-1}{G-1}\Big)(1-\lambda_l)Q+\big(1-(1-\lambda_l)\big)Q\Bigg)^{-1}\Lambda\Bigg[\sum_{g=1}^{G-1}\Big(W_k\Big(\frac{g(1-\lambda_l)}{G-1}\Big)-W_k\Big(\frac{(g-1)(1-\lambda_l)}{G-1}\Big)\Big)+W_k(1)-W_k(1-\lambda_l)\Bigg]=Q^{-1}\Lambda W_k(1).
\]

Lemma F2. Let $\tilde W_k(r)=W_k(r)-rW_k(1)$. Under Assumption B as $T\to\infty$, when $g\le G-1$,
\[
T^{-1/2}\hat S_g\Rightarrow\Lambda\tilde W_k\Big(\frac{g(1-\lambda_l)}{G-1}\Big).
\]
When $g=G$, $T^{-1/2}\hat S_g=0$.

Proof: When $g\le G-1$, it follows by simple algebra that
\[
T^{-1/2}\hat S_g=T^{-1/2}\sum_{j=1}^{g}\sum_{t=(j-1)n_G+1}^{jn_G}\tilde v_t
=\sum_{j=1}^{g}T^{-1/2}\sum_{t=(j-1)n_G+1}^{jn_G}v_t-\sum_{j=1}^{g}T^{-1}\sum_{t=(j-1)n_G+1}^{jn_G}x_tx_t'\,\sqrt{T}\big(\tilde\beta-\beta\big)
\]
\[
=\sum_{j=1}^{g}T^{-1/2}\sum_{t=\frac{j-1}{G-1}\left(1-\frac{n_l}{T}\right)T+1}^{\frac{j}{G-1}\left(1-\frac{n_l}{T}\right)T}v_t-\sum_{j=1}^{g}T^{-1}\sum_{t=\frac{j-1}{G-1}\left(1-\frac{n_l}{T}\right)T+1}^{\frac{j}{G-1}\left(1-\frac{n_l}{T}\right)T}x_tx_t'\,\sqrt{T}\big(\tilde\beta-\beta\big)\qquad\Big(\because\ n_G=\tfrac{T-n_l}{G-1}\ \text{by (eqF.1)}\Big)
\]
\[
\Rightarrow\sum_{j=1}^{g}\Lambda\Big(W_k\Big(\frac{j(1-\lambda_l)}{G-1}\Big)-W_k\Big(\frac{(j-1)(1-\lambda_l)}{G-1}\Big)\Big)-\sum_{j=1}^{g}\Big(\frac{j(1-\lambda_l)}{G-1}-\frac{(j-1)(1-\lambda_l)}{G-1}\Big)QQ^{-1}\Lambda W_k(1)
\]
\[
=\Lambda\Big(W_k\Big(\frac{g(1-\lambda_l)}{G-1}\Big)-\frac{g(1-\lambda_l)}{G-1}W_k(1)\Big)\equiv\Lambda\tilde W_k\Big(\frac{g(1-\lambda_l)}{G-1}\Big).
\]
When $g=G$,
\[
T^{-1/2}\hat S_G=0
\]
because $\hat S_G$ is exactly the first-order condition for the OLS estimator. Note that when $\lambda_l=0$, we obtain the same result as in Lemma E2, as expected.

Asymptotic Limit of $\frac{G}{T}\hat\Omega$: Recalling the algebra used in the proof of Theorem 3.2(b):
\[
\frac{G}{T}\hat\Omega=\sum_{g=1}^{G-1}\sum_{h=1}^{G-1}T^{-1/2}\hat S_g\Bigg(2k\Big(\frac{|g-h|}{M}\Big)-k\Big(\frac{|g-h+1|}{M}\Big)-k\Big(\frac{|g-h-1|}{M}\Big)\Bigg)T^{-1/2}\hat S_h'
\]
\[
\Rightarrow\Lambda\Bigg[\sum_{g=1}^{G-1}\sum_{h=1}^{G-1}\tilde W_k\Big(\frac{g(1-\lambda_l)}{G-1}\Big)\Bigg(2k\Big(\frac{|g-h|}{M}\Big)-k\Big(\frac{|g-h+1|}{M}\Big)-k\Big(\frac{|g-h-1|}{M}\Big)\Bigg)\tilde W_k\Big(\frac{h(1-\lambda_l)}{G-1}\Big)'\Bigg]\Lambda'\equiv\Lambda P_l\big(G,M,\tilde W_k,\lambda_l\big)\Lambda',
\]
which follows from Lemma F2. For the Bartlett kernel we have
\[
\frac{G}{T}\hat\Omega=\frac{2}{M}\sum_{g=1}^{G-1}T^{-1/2}\hat S_g\,T^{-1/2}\hat S_g'-\frac{1}{M}\sum_{g=1}^{G-M-1}\Big(T^{-1/2}\hat S_g\,T^{-1/2}\hat S_{g+M}'+T^{-1/2}\hat S_{g+M}\,T^{-1/2}\hat S_g'\Big)
\]
\[
\Rightarrow\Lambda\Bigg[\frac{2}{M}\sum_{g=1}^{G-1}\tilde W_k\Big(\frac{g(1-\lambda_l)}{G-1}\Big)\tilde W_k\Big(\frac{g(1-\lambda_l)}{G-1}\Big)'
-\frac{1}{M}\sum_{g=1}^{G-M-1}\Bigg(\tilde W_k\Big(\frac{g(1-\lambda_l)}{G-1}\Big)\tilde W_k\Big(\frac{(g+M)(1-\lambda_l)}{G-1}\Big)'+\tilde W_k\Big(\frac{(g+M)(1-\lambda_l)}{G-1}\Big)\tilde W_k\Big(\frac{g(1-\lambda_l)}{G-1}\Big)'\Bigg)\Bigg]\Lambda'\equiv\Lambda P_l\big(G,M,\tilde W_k,\lambda_l\big)\Lambda'.
\]
Note that when $\lambda_l=0$, the asymptotic approximation is the same as in Theorem 3.2(b).

Asymptotic Limit of $W_{CHAC}$: Using similar arguments as in the proof of Theorem 3.2(c), it follows from the previous results in this appendix that
\[
W_{CHAC}=\big(R\tilde\beta-r\big)'\Big[R\hat V_{CHAC}R'\Big]^{-1}\big(R\tilde\beta-r\big)
=\sqrt{T}\big(R\tilde\beta-r\big)'\Big[R\,T\hat V_{CHAC}\,R'\Big]^{-1}\sqrt{T}\big(R\tilde\beta-r\big)
\]
\[
=\Big[R\sqrt{T}\big(\tilde\beta-\beta\big)\Big]'\Bigg[R\Bigg(T^{-1}\sum_{t=1}^{T}x_tx_t'\Bigg)^{-1}\frac{G}{T}\hat\Omega\Bigg(T^{-1}\sum_{t=1}^{T}x_tx_t'\Bigg)^{-1}R'\Bigg]^{-1}R\sqrt{T}\big(\tilde\beta-\beta\big)
\]
\[
\Rightarrow\big[RQ^{-1}\Lambda W_k(1)\big]'\Big[RQ^{-1}\Lambda\,P_l\big(G,M,\tilde W_k,\lambda_l\big)\,\Lambda'Q^{-1}R'\Big]^{-1}RQ^{-1}\Lambda W_k(1)
=W_q(1)'P_l\big(G,M,\tilde W_q,\lambda_l\big)^{-1}W_q(1).
\]
When $q=1$,
\[
t_{CHAC}=\frac{R\tilde\beta-r}{\sqrt{R\hat V_{CHAC}R'}}\Rightarrow\frac{W_1(1)}{\sqrt{P_l\big(G,M,\tilde W_1,\lambda_l\big)}}.
\]

BIBLIOGRAPHY

Dennis Aigner, C. A. Knox Lovell, and Peter Schmidt. Formulation and estimation of stochastic frontier production function models. Journal of Econometrics, 6:21–37, 1977.

Hirotugu Akaike. A new look at the statistical model identification. I.E.E.E. Transactions on Automatic Control, 19:716–723, 1974.

Antonio Alvarez, Christine Amsler, Luis Orea, and Peter Schmidt. Interpreting and testing the scaling property in models where inefficiency depends on firm characteristics. Journal of Productivity Analysis, 25:201–212, 2006.

Donald W. K. Andrews. Testing when a parameter is on the boundary of the maintained hypothesis. Econometrica, 69:683–734, 2001.

Donald W. K. Andrews. Heteroskedasticity and autocorrelation consistent covariance matrix estimation. Econometrica, 59:817–858, 1991.

George E. Battese and Tim J. Coelli. Prediction of firm-level technical efficiencies with a generalized frontier production function and panel data. Journal of Econometrics, 38:387–399, 1988.

George E. Battese and Tim J. Coelli. A model for technical inefficiency effects in a stochastic frontier production function for panel data. Empirical Economics, 20:325–332, 1995.

C. Alan Bester, Timothy G. Conley, and Christian B. Hansen. Inference with dependent data using cluster covariance estimators. Journal of Econometrics, 165:137–151, 2011.

Peter Bloomfield. Spectral analysis with randomly missing observations. Journal of the Royal Statistical Society, 32:369–380, 1970.

Steven B. Caudill. Estimating a mixture of stochastic frontier regression models via the EM algorithm: A multiproduct cost function application. Empirical Economics, 28:581–598, 2003.

Steven B. Caudill and Jon M. Ford. Biases in frontier estimation due to heteroskedasticity. Economics Letters, 41:17–20, 1993.

Steven B. Caudill, Jon M. Ford, and Daniel M. Gropper. Frontier estimation and firm-specific inefficiency measures in the presence of heteroscedasticity. Journal of Business and Economic Statistics, 13:105–111, 1995.

Yong Chen and Kung-Yee Liang. On the asymptotic behaviour of the pseudolikelihood ratio test statistic with boundary problems. Biometrika, 97:603–620, 2010.

Timothy J. Coelli, D. S. Prasada Rao, Christopher J. O'Donnell, and George E. Battese. An Introduction to Efficiency and Productivity Analysis. Springer, New York, 2005.

Deepa Dhume Datta and Wenxin Du. Nonparametric HAC estimation for time series data with missing observations. Working Paper, 2012.

James Davidson. Stochastic Limit Theory. Advanced Texts in Econometrics. Oxford University Press, 2002.

John C. Driscoll and Aart C. Kraay. Consistent covariance matrix estimation with spatially dependent panel data. The Review of Economics and Statistics, 80:549–560, 1998.

William Dunsmuir and Peter Robinson. Asymptotic theory for time series containing missing and amplitude modulated observations. The Indian Journal of Statistics, 43:260–281, 1981.

A. Ronald Gallant and Halbert White. A Unified Theory of Estimation and Inference for Nonlinear Dynamic Models. Basil Blackwell, 1995.

Sílvia Gonçalves and Timothy J. Vogelsang. Block bootstrap HAC robust tests: the sophistication of the naive bootstrap. Econometric Theory, 27:745–791, 2011.

Christian Gouriéroux and Alain Monfort. Statistics and Econometric Models. Volume 2: Testing, Confidence Regions, Model Selection, and Asymptotic Theory. Cambridge University Press, New York, 1995.

Christian Gouriéroux, Alberto Holly, and Alain Monfort. Likelihood ratio test, Wald test, and Kuhn-Tucker test in linear models with inequality constraints on the regression parameters. Econometrica, 50:63–80, 1982.

Luca Grassetti. A novel mixture based stochastic frontier model with application to hospital efficiency. Unpublished manuscript, University of Udine, 2011.

William Greene. Reconsidering heterogeneity in panel data estimators of the stochastic frontier model. Journal of Econometrics, 126:269–303, 2005.

E. J. Hannan and B. G. Quinn. The determination of the order of an autoregression. Journal of the Royal Statistical Society, Series B (Methodological), 41:190–195, 1979.

Bruce E. Hansen. Strong laws for dependent heterogeneous processes. Econometric Theory, 7:213–221, 1991.

Bruce E. Hansen. Erratum. Econometric Theory, 8:421–422, 1992.

Cliff J. Huang and Jin-Tan Liu. Estimation of a non-neutral stochastic frontier production function. Journal of Productivity Analysis, 5:171–180, 1994.

Michael Jansson. Consistent covariance matrix estimation for linear processes. Econometric Theory, 18:1449–1459, 2002.

James Jondrow, C. A. Knox Lovell, Ivan S. Materov, and Peter Schmidt. On the estimation of technical inefficiency in the stochastic frontier production function model. Journal of Econometrics, 19:233–238, 1982.

Nicholas M. Kiefer and Timothy J. Vogelsang. A new asymptotic theory for heteroskedasticity-autocorrelation robust tests. Econometric Theory, 21:1130–1164, 2005.

Subal C. Kumbhakar, Soumendra Ghosh, and J. Thomas McGuckin. A generalized production frontier approach for estimating determinants of inefficiency in U.S. dairy farms. Journal of Business and Economic Statistics, 9:279–286, 1991.

Subal C. Kumbhakar, Christopher F. Parmeter, and Efthymios G. Tsionas. A zero inefficiency stochastic frontier model. Journal of Econometrics, 172:66–76, 2013.

Wim Meeusen and Julien van den Broeck. Efficiency estimation from Cobb-Douglas production functions with composed error. International Economic Review, 18:435–444, 1977.

Henry R. Neave. Spectral analysis of a stationary time series using initially scarce data. Biometrika, 57:111–122, 1970.

Whitney K. Newey and Kenneth D. West. A simple, positive semi-definite, heteroskedasticity and autocorrelation consistent covariance matrix. Econometrica, 55:703–708, 1987.

Luis Orea and Subal C. Kumbhakar. Efficiency measurement using a latent class stochastic frontier model. Empirical Economics, 29:169–183, 2004.

Efstathios Paparoditis and Dimitris N. Politis. Residual-based block bootstrap for unit root testing. Econometrica, 71:813–855, 2003.

Emanuel Parzen. On spectral analysis with missing observations and amplitude modulation. The Indian Journal of Statistics, 25, 1963.

Peter C. B. Phillips and Steven N. Durlauf. Multiple time series regression with integrated processes. Review of Economic Studies, 53(4):473–495, 1986.

David Reifschneider and Rodney Stevenson. Systematic departures from the frontier: a framework for the analysis of firm inefficiency. International Economic Review, 32:715–723, 1991.

Alan J. Rogers. Modified Lagrange multiplier tests for problems with one-sided alternatives. Journal of Econometrics, 31:341–361, 1986.

Perry A. Scheinok. Spectral analysis with randomly missed observations: The binomial case. The Annals of Mathematical Statistics, 36(3):971–977, 1965.

Gideon Schwarz. Estimating the dimension of a model. The Annals of Statistics, 6:461–464, 1978.

Steven G. Self and Kung-Yee Liang. Asymptotic properties of maximum likelihood estimators and likelihood ratio tests under nonstandard conditions. Journal of the American Statistical Association, 82:605–610, 1987.

Yixiao Sun, Peter C. B. Phillips, and Sainan Jin. Optimal bandwidth selection in heteroskedasticity-autocorrelation robust testing. Econometrica, 76:175–194, 2008.

Timothy J. Vogelsang. Heteroskedasticity, autocorrelation, and spatial correlation robust inference in linear panel models with fixed-effects. Journal of Econometrics, 166:303–319, 2012.

Donald M. Waldman. A stationary point for the stochastic frontier likelihood. Journal of Econometrics, 18:275–279, 1982.

Hung-Jen Wang. Heteroscedasticity and non-monotonic efficiency effects of a stochastic frontier model. Journal of Productivity Analysis, 18:241–253, 2002.

Halbert White. Asymptotic Theory for Econometricians. Revised Edition. Academic Press, 2001.