KPSS AND LEYBOURNE-McCABE AUTOCORRELATION CORRECTIONS IN STATIONARITY TESTS

By

Yongsu Cho

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Economics

2002

This is to certify that the dissertation entitled "KPSS and Leybourne-McCabe Autocorrelation Corrections in Stationarity Tests," presented by Yongsu Cho, has been accepted towards fulfillment of the requirements for the Ph.D. degree in Economics. (Major professor; date 8/22/02.)

ABSTRACT

KPSS AND LEYBOURNE-McCABE AUTOCORRELATION CORRECTIONS IN STATIONARITY TESTS

By Yongsu Cho

We consider tests of the null hypothesis that a time series is stationary that were proposed by Kwiatkowski et al. (1992) and Leybourne and McCabe (1994, 1999). We identify a problem with the Leybourne and McCabe (1999) test and suggest two modifications of the test to solve it. We provide consistent model selection rules to pick the number of lags used in the tests. We then conduct simulations to compare the size and power characteristics of the tests under different data generating processes and different treatments of the number of lags. Generally speaking, the results are favorable to the use of formal model selection rules, and they are unfavorable to the (unmodified) Leybourne and McCabe (1999) test.

Dedicated to my parents, and to my wife, Eunji

ACKNOWLEDGEMENTS

For the completion of my dissertation, I am indebted to many people. First of all, I would like to thank my dissertation supervisor, Professor Peter Schmidt, for his excellent and invaluable advice throughout the course of writing this dissertation. I am especially grateful for his patience and close guidance. I also thank Professor Robert de Jong and Professor Jeffrey Wooldridge for their helpful comments.

I gratefully acknowledge the financial support of the LG Economic Research Institute for my study in the United States. I also would like to thank many friends for helping me in different ways. I am grateful to Dr. Chirok Han and Jongbyung Jun for their critical advice on computer simulations.

Finally, I would like to thank my parents and parents-in-law for their invaluable support. My deepest gratitude goes to my wife, Eunji, and my beloved two daughters, Heeyeon and Heeseo, for their encouragement and patience. Without their love and support, I could not have completed this dissertation.

TABLE OF CONTENTS

LIST OF TABLES
CHAPTER 1: INTRODUCTION
  1. Preliminaries
  2. The "local level model" and score tests
  3. LM99 test and its modifications
  4. Short-run dynamics
  5. Organization of the thesis

CHAPTER 2: PERFORMANCE OF THE KPSS AND LEYBOURNE-McCABE TESTS WITH A FIXED NUMBER OF LAGS
  1. Introduction
  2. Simulations
  3. Conclusions
  Appendix I

CHAPTER 3: PERFORMANCE OF THE KPSS AND LEYBOURNE-McCABE TESTS WHEN THE NUMBER OF LAGS INCREASES WITH THE SAMPLE SIZE
  1. Introduction
  2. Theoretical issues
  3. Simulations
  4. Conclusions

CHAPTER 4: PERFORMANCE OF THE KPSS AND LEYBOURNE-McCABE TESTS WITH MODEL SELECTION RULES
  1. Introduction
  2. A consistent model selection rule for the Leybourne-McCabe tests
  3. A consistent model selection rule for the KPSS test
  4. Simulations
  5. Conclusions
  Appendix II

CHAPTER 5: CONCLUDING REMARKS

Bibliography

LIST OF TABLES

CHAPTER 2
Table 2.1: Size of KPSS Test with Fixed Number of Lags (DGP: iid errors, no time trend)
Table 2.2: Power of KPSS Test with Fixed Number of Lags (DGP: iid errors, no time trend)
Table 2.3: Actual Critical Values of KPSS Test with Fixed Number of Lags (DGP: iid errors, no time trend)
Table 2.4: Size-Adjusted Power of KPSS Test with Fixed Number of Lags (DGP: iid errors, no time trend)
Table 2.5: Size and Power of Leybourne-McCabe Tests with Fixed Number of Lags (DGP: iid errors, no time trend)
Table 2.6: Actual Critical Values of Leybourne-McCabe Tests with Fixed Number of Lags (DGP: iid errors, no time trend)
Table 2.7: Size-Adjusted Power of Leybourne-McCabe Tests with Fixed Number of Lags (DGP: iid errors, no time trend)
Table 2.8: Size and Power of KPSS and Leybourne-McCabe Tests with AR(1) Errors (DGP: y_t = ρy_{t-1} + ε_t, ρ = 1/3)
Table 2.9: Actual Critical Values of KPSS and Leybourne-McCabe Tests with AR(1) Errors (DGP: y_t = ρy_{t-1} + ε_t, ρ = 1/3)
Table 2.10: Size-Adjusted Power of KPSS and Leybourne-McCabe Tests with AR(1) Errors (DGP: y_t = ρy_{t-1} + ε_t, ρ = 1/3)
Table 2.11: Size and Power of KPSS and Leybourne-McCabe Tests with MA(1) Errors (DGP: y_t = ε_t + θε_{t-1}, θ = 0.5)
Table 2.12: Actual Critical Values of KPSS and Leybourne-McCabe Tests with MA(1) Errors (DGP: y_t = ε_t + θε_{t-1}, θ = 0.5)
Table 2.13: Size-Adjusted Power of KPSS and Leybourne-McCabe Tests with MA(1) Errors (DGP: y_t = ε_t + θε_{t-1}, θ = 0.5)
Table 2.14: Size and Power of KPSS and Leybourne-McCabe Tests with ARMA(1,1) Errors (DGP: y_t = ρy_{t-1} + ε_t + θε_{t-1}, ρ = 1/3, θ = 1/2)
Table 2.15: Actual Critical Values of KPSS and Leybourne-McCabe Tests with ARMA(1,1) Errors (DGP: y_t = ρy_{t-1} + ε_t + θε_{t-1}, ρ = 1/3, θ = 1/2)
Table 2.16: Size-Adjusted Power of KPSS and Leybourne-McCabe Tests with ARMA(1,1) Errors (DGP: y_t = ρy_{t-1} + ε_t + θε_{t-1}, ρ = 1/3, θ = 1/2)

CHAPTER 3
Table 3.1: Size and Power of KPSS and Leybourne-McCabe Tests when Number of Lags Increases with Sample Size (DGP: iid errors, no time trend)
Table 3.2: Actual Critical Values of KPSS and Leybourne-McCabe Tests when Number of Lags Increases with Sample Size (DGP: iid errors, no time trend)
Table 3.3: Size-Adjusted Power of KPSS and Leybourne-McCabe Tests when Number of Lags Increases with Sample Size (DGP: iid errors, no time trend)
Table 3.4: Size and Power of KPSS and Leybourne-McCabe Tests with AR(1) Errors (DGP: y_t = ρy_{t-1} + ε_t, ρ = 1/3)
Table 3.5: Actual Critical Values of KPSS and Leybourne-McCabe Tests with AR(1) Errors (DGP: y_t = ρy_{t-1} + ε_t, ρ = 1/3)
Table 3.6: Size-Adjusted Power of KPSS and Leybourne-McCabe Tests with AR(1) Errors (DGP: y_t = ρy_{t-1} + ε_t, ρ = 1/3)
Table 3.7: Size and Power of KPSS and Leybourne-McCabe Tests with MA(1) Errors (DGP: y_t = ε_t + θε_{t-1}, θ = 0.5)
Table 3.8: Actual Critical Values of KPSS and Leybourne-McCabe Tests with MA(1) Errors (DGP: y_t = ε_t + θε_{t-1}, θ = 0.5)
Table 3.9: Size-Adjusted Power of KPSS and Leybourne-McCabe Tests with MA(1) Errors (DGP: y_t = ε_t + θε_{t-1}, θ = 0.5)
Table 3.10: Size and Power of KPSS and Leybourne-McCabe Tests with ARMA(1,1) Errors (DGP: y_t = ρy_{t-1} + ε_t + θε_{t-1}, ρ = 1/3, θ = 1/2)
Table 3.11: Actual Critical Values of KPSS and Leybourne-McCabe Tests with ARMA(1,1) Errors (DGP: y_t = ρy_{t-1} + ε_t + θε_{t-1}, ρ = 1/3, θ = 1/2)
Table 3.12: Size-Adjusted Power of KPSS and Leybourne-McCabe Tests with ARMA(1,1) Errors (DGP: y_t = ρy_{t-1} + ε_t + θε_{t-1}, ρ = 1/3, θ = 1/2)

CHAPTER 4
Table 4.1: Frequency of Lag Selection: KPSS Test Under the Null (DGP: iid errors, no time trend) (l_max = 3, 10% significance level for pretest)
Table 4.2: Frequency of Lag Selection: KPSS Test Under the Alternative (DGP: iid errors, no time trend) (l_max = 3, 10% significance level for pretest)
Table 4.3: Size of KPSS Test with Model Selection Rule (DGP: iid errors, no time trend)
Table 4.4: Power of KPSS Test with Model Selection Rule (DGP: iid errors, no time trend)
Table 4.5: Actual Critical Values of KPSS Test with Model Selection Rule (DGP: iid errors, no time trend)
Table 4.6: Size-Adjusted Power of KPSS Test with Model Selection Rule (DGP: iid errors, no time trend)
Table 4.7: Frequency of Lag Selection: KPSS Test Under the Null (DGP: iid errors, no time trend) (l_max = 3, c.v. = (T/100)^{1/4})
Table 4.8: Frequency of Lag Selection: KPSS Test Under the Alternative (DGP: iid errors, no time trend) (l_max = 3, c.v. = (T/100)^{1/4})
Table 4.9: Size of KPSS Test with Model Selection Rule (DGP: iid errors, no time trend)
Table 4.10: Power of KPSS Test with Model Selection Rule (DGP: iid errors, no time trend)
Table 4.11: Actual Critical Values of KPSS Test with Model Selection Rule (DGP: iid errors, no time trend)
Table 4.12: Size-Adjusted Power of KPSS Test with Model Selection Rule (DGP: iid errors, no time trend)
Table 4.13: Size and Power of Leybourne-McCabe Tests with Model Selection Rule (p_max = 3) (DGP: iid errors, no time trend)
Table 4.14: Actual Critical Values of Leybourne-McCabe Tests with Model Selection Rule (p_max = 3) (DGP: iid errors, no time trend)
Table 4.15: Size-Adjusted Power of Leybourne-McCabe Tests with Model Selection Rule (p_max = 3) (DGP: iid errors, no time trend)
Table 4.16: Size and Power of Leybourne-McCabe Tests with Model Selection Rule (p_max = 3) (DGP: iid errors, no time trend)
Table 4.17: Actual Critical Values of Leybourne-McCabe Tests with Model Selection Rule (p_max = 3) (DGP: iid errors, no time trend)
Table 4.18: Size-Adjusted Power of Leybourne-McCabe Tests with Model Selection Rule (p_max = 3) (DGP: iid errors, no time trend)
Table 4.19: Size and Power of KPSS and Leybourne-McCabe Tests with AR(1) Errors (DGP: y_t = ρy_{t-1} + ε_t, ρ = 1/3)
Table 4.20: Actual Critical Values of KPSS and Leybourne-McCabe Tests with AR(1) Errors (DGP: y_t = ρy_{t-1} + ε_t, ρ = 1/3)
Table 4.21: Size-Adjusted Power of KPSS and Leybourne-McCabe Tests with AR(1) Errors (DGP: y_t = ρy_{t-1} + ε_t, ρ = 1/3)
Table 4.22: Size and Power of KPSS and Leybourne-McCabe Tests with MA(1) Errors (DGP: y_t = ε_t + θε_{t-1}, θ = 0.5)
Table 4.23: Actual Critical Values of KPSS and Leybourne-McCabe Tests with MA(1) Errors (DGP: y_t = ε_t + θε_{t-1}, θ = 0.5)
Table 4.24: Size-Adjusted Power of KPSS and Leybourne-McCabe Tests with MA(1) Errors (DGP: y_t = ε_t + θε_{t-1}, θ = 0.5)
Table 4.25: Size and Power of KPSS and Leybourne-McCabe Tests with ARMA(1,1) Errors (DGP: y_t = ρy_{t-1} + ε_t + θε_{t-1}, ρ = 1/3, θ = 1/2)
Table 4.26: Actual Critical Values of KPSS and Leybourne-McCabe Tests with ARMA(1,1) Errors (DGP: y_t = ρy_{t-1} + ε_t + θε_{t-1}, ρ = 1/3, θ = 1/2)
Table 4.27: Size-Adjusted Power of KPSS and Leybourne-McCabe Tests with ARMA(1,1) Errors (DGP: y_t = ρy_{t-1} + ε_t + θε_{t-1}, ρ = 1/3, θ = 1/2)

Chapter 1
Introduction

1. Preliminaries

From a statistical point of view, the correct treatment of the stationary or nonstationary nature of time series data is quite crucial for valid statistical inference, owing to the spurious regression phenomenon. However, standard unit root tests are not necessarily very powerful against relevant alternatives. A unit root is typically the null hypothesis being tested, and the null hypothesis is accepted unless there is strong enough evidence against it. Since the influential work of Nelson and Plosser (1982), which found that most U.S. macroeconomic time series contain a unit root, it has been a well-established empirical fact that standard unit root testing methods such as Dickey-Fuller tests, ADF tests and Phillips-Perron tests do not clearly determine whether an observed time series contains a unit root or not. DeJong et al. (1989), Diebold and Rudebusch (1990), DeJong and Whiteman (1991) and Phillips (1991) provide empirical evidence supporting this argument.

These studies suggest that, in trying to decide whether macroeconomic data are stationary or integrated, it would be useful to perform tests of the null hypothesis of stationarity as well as tests of the null hypothesis of a unit root. Tanaka (1990), Kwiatkowski, Phillips, Schmidt and Shin (1992), hereafter KPSS, Saikkonen and Luukkonen (1993), and Leybourne and McCabe (1994), hereafter LM94, have proposed score-based tests of the null hypothesis of stationarity against the alternative hypothesis of a unit root. Leybourne and McCabe (1999), hereafter LM99, have also proposed a test of the null hypothesis of stationarity.
This thesis will propose some extensions of these tests and will analyze their properties, mainly through a large number of simulations.

2. The "local level model" and score tests

The KPSS and Leybourne-McCabe stationarity tests were derived from a parameterization which provides a plausible representation of both stationary and nonstationary variables. The "local level model" is a components representation in which the time series under study is written as the sum of a deterministic trend, a random walk, and a stationary error. See, e.g., Harvey (1989, pp. 31-32), who also refers to this as the "random walk plus noise" model. If y_t is the observed series, we write it as follows:

y_t = βt + μ_t + u_t.  (1)

Here μ_t is a random walk: μ_t = μ_{t-1} + v_t, where the v_t are iid (0, σ_v²) and the initial value μ_0 is treated as fixed and serves as an intercept. Also u_t is iid (0, σ_u²); later, the iid assumption will be relaxed. The term βt allows for a deterministic linear trend.

Define λ = σ_v²/σ_u². Then the null hypothesis of stationarity corresponds to λ = 0 (hence σ_v² = 0, so no random walk component exists). The unit root alternatives are indexed by λ > 0. Thus λ = 0 corresponds to stationarity around a constant level (if β = 0) or around a trend (if β ≠ 0). Cases with λ > 0 have a unit root. As λ → ∞, we approach the case of a pure random walk.

1) σ_u² is known

Under the further assumptions that the stationary error u_t is normal white noise, the random walk innovation v_t is normal, and the variance σ_u² is known, the one-sided LM test statistic for the stationarity hypothesis is the same as the locally best invariant (LBI) test statistic. Nyblom (1986), Nabeya and Tanaka (1988), KPSS (1992), and Leybourne and McCabe (1994) all consider a model equivalent to the model above. Let û_t be the residual from an OLS regression of y_t on the intercept and time trend. Then we define the partial sum process of the residuals: S_t = Σ_{i=1}^t û_i, t = 1, 2, ..., T. Then the LM and LBI statistic is

LM = Σ_{t=1}^T S_t² / σ_u².  (2)

The LBI derivation is given by Nyblom and by Nabeya and Tanaka, while the LM derivation is given by KPSS. We will follow the notation of KPSS, with a normalization by T⁻²:

η_τ = T⁻² Σ_{t=1}^T S_t² / σ_u²,  (3)

where the subscript "τ" indicates that we have allowed for a linear deterministic trend.

In the case that we wish to test the hypothesis of level stationarity (i.e., we impose β = 0) instead of trend stationarity, we define û_t as the residual from a regression of y on an intercept only (û_t = y_t − ȳ), and the rest of the test is unaltered. Now we write

η_μ = T⁻² Σ_{t=1}^T S_t² / σ_u²,  (4)

where the subscript "μ" indicates that we have extracted a mean but not a trend from y.

The asymptotics for the two tests are similar. First we will discuss the test for level stationarity (η_μ). Let W(r) be a Wiener process (Brownian motion), and let V(r) be the Brownian bridge:

V(r) = W(r) − rW(1), 0 ≤ r ≤ 1.  (5)

Under the null hypothesis, y_t = μ_0 + u_t, where μ_0 is fixed and u_t is iid. Then û_t = y_t − ȳ = u_t − ū, and cumulations of the û_t converge to a Brownian bridge:

T^{-1/2} S_{[rT]} ⇒ σ_u V(r), 0 ≤ r ≤ 1,  (6)

where [rT] denotes the integer part of rT and "⇒" denotes weak convergence. Then it follows that

η_μ ⇒ ∫_0^1 V(r)² dr.  (7)

Critical values based on this distribution have been widely tabulated; e.g., KPSS (1992, p. 166).
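Since everything that follows rests on simulating the model (1), a minimal sketch may be useful. The following is illustrative Python (numpy assumed); the function name and defaults are ours, and this is not the code used for the simulations reported later:

```python
import numpy as np

def simulate_local_level(T, lam, beta=0.0, sigma_u=1.0, rng=None):
    """Simulate y_t = beta*t + mu_t + u_t from the local level model (1).

    lam is lambda = sigma_v^2 / sigma_u^2; lam = 0 gives the stationary
    null, and lam > 0 gives a unit root alternative."""
    rng = np.random.default_rng() if rng is None else rng
    sigma_v = np.sqrt(lam) * sigma_u
    mu = np.cumsum(sigma_v * rng.standard_normal(T))   # random walk, mu_0 = 0
    u = sigma_u * rng.standard_normal(T)               # iid stationary error
    t = np.arange(1, T + 1)
    return beta * t + mu + u
```

Only the ratio λ matters for the behavior of the tests below, which is why the sketch normalizes σ_u and scales the random walk innovations by √λ.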
Now consider the alternative that λ > 0. Let W̄(r) be the demeaned Wiener process:

W̄(r) = W(r) − ∫_0^1 W(b) db.  (8)

Then KPSS (1992, p. 168) show that (for σ_v ≠ 0)

T^{-3/2} S_{[rT]} ⇒ σ_v ∫_0^r W̄(s) ds  (9)

and correspondingly

T⁻² η_μ = T⁻⁴ Σ_{t=1}^T S_t² / σ_u² ⇒ (σ_v²/σ_u²) ∫_0^1 ( ∫_0^r W̄(s) ds )² dr.  (10)

The analysis for the test of trend stationarity is very similar. Under the null, we simply replace the Brownian bridge V(r) by the "second-level Brownian bridge" V₂(r) = W(r) + (2r − 3r²)W(1) + (6r² − 6r) ∫_0^1 W(s) ds, as given by KPSS (1992, equation (16)). Under the alternative, we replace the demeaned Wiener process W̄(r) by the "demeaned and detrended Wiener process" W*(r) = W(r) + (6r − 4) ∫_0^1 W(s) ds + (6 − 12r) ∫_0^1 s W(s) ds, as given by KPSS (1992, equation (26)).

The essential point of this discussion is that η_μ (or η_τ) is O_p(1) under the null, but O_p(T²) under the unit root alternative. Thus these tests are consistent. It should also be noted that the normality assumption for u_t and v_t was made to allow the derivation of the LM or LBI test. However, the consistency of the tests and the validity of the asymptotic distribution results given above do not depend on these normality assumptions. The tests may have certain optimal properties under normality, but they are valid without the normality assumption.

2) σ_u² is unknown

Now we continue to assume that the stationary error u_t is normal white noise and that the random walk innovation v_t is normal, but we relax the assumption that the variance σ_u² is known. Let σ̂_u² be an estimate of σ_u² that is consistent under the null. Then in the level-stationary case we define the statistic

η̂_μ = T⁻² Σ_{t=1}^T S_t² / σ̂_u².  (11)

This differs from η_μ in (4) only because σ̂_u² replaces σ_u². Similarly, in the trend-stationary case, we define η̂_τ by replacing σ_u² in (3) by σ̂_u², an estimate of σ_u² that is consistent under the null. Replacing σ_u² by a consistent estimate σ̂_u² does not alter the distribution theory under the null. For the case we are currently considering (iid u_t, σ_u² unknown), both KPSS and LM94 would suggest the following estimate of σ_u²:

σ̂_u² = T⁻¹ Σ_{t=1}^T û_t².  (12)

This is indeed a consistent estimate of σ_u² under the null. However, under the unit root alternative, σ̂_u² is O_p(T). Specifically, for the level-stationary case we have:

T⁻¹ σ̂_u² = T⁻² Σ_{t=1}^T û_t² ⇒ σ_v² ∫_0^1 W̄(s)² ds,  (13)

where W̄(s) is the demeaned Wiener process of equation (8). As a result η̂_μ is O_p(T) under the alternative (instead of O_p(T²), as η_μ was, with known σ_u²). Specifically,

T⁻¹ η̂_μ = T⁻⁴ Σ_{t=1}^T S_t² / (T⁻¹ σ̂_u²) ⇒ ∫_0^1 ( ∫_0^r W̄(s) ds )² dr / ∫_0^1 W̄(s)² ds.  (14)

The analysis for the η̂_τ test is essentially the same. We just replace W̄(r) by W*(r), the demeaned and detrended Wiener process. The essential point is still that using σ̂_u² in place of σ_u² does not alter the asymptotic distribution theory under the null, but it does change the distribution theory under the alternative. The test is O_p(T²) under the alternative with σ_u² known, but only O_p(T) under the alternative when σ̂_u² is used in place of σ_u².
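A matching sketch of the statistic η̂_μ of (11), with σ̂_u² computed as in (12), follows (again illustrative Python; the trend-stationary statistic η̂_τ would replace the simple demeaning with OLS detrending):

```python
import numpy as np

def kpss_eta_mu(y):
    """eta-hat_mu of (11): T^{-2} * sum_t S_t^2 / sigma-hat_u^2,
    with u-hat_t = y_t - ybar and sigma-hat_u^2 as in (12)."""
    T = len(y)
    uhat = y - y.mean()        # residuals from regression on an intercept
    S = np.cumsum(uhat)        # partial sums S_t
    sigma2_hat = np.mean(uhat ** 2)
    return np.sum(S ** 2) / (T ** 2 * sigma2_hat)
```

Under the null the statistic converges to ∫_0^1 V(r)² dr, whose tabulated 5% critical value is 0.463 (KPSS, 1992, p. 166).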
3. LM99 test and its modifications

Leybourne and McCabe (1999) proposed a new version of the KPSS/LM94 stationarity test. The idea is to find an estimate σ̃_u² that is consistent for σ_u² under the null of stationarity, and that is O_p(1) under the unit root alternative. Then η̃_μ (or η̃_τ) using this estimate will be O_p(T²), not O_p(T), under the unit root alternative.

It is well known that the model (1) is second-order equivalent in moments to the ARIMA(0,1,1) process:

(1 − L) y_t = β + (1 − θL) ζ_t, 0 < θ ≤ 1.  (15)

Here ζ_t is white noise with mean zero and variance σ_ζ². The correspondence between the parameterizations (15) and (1) is as follows:

σ_ζ² = σ_u² / θ,  (16A)
θ = (λ + 2 − (λ² + 4λ)^{1/2}) / 2,  (16B)

where as before λ = σ_v²/σ_u². Here the null hypothesis of stationarity is σ_v² = 0 (or λ = 0) in (1), and corresponds to θ = 1 in (15); it implies that y_t is stationary. The alternative σ_v² > 0 corresponds to 0 ≤ θ < 1 and implies that y_t has a unit root. It is important for later development to stress that model (1) implies 0 ≤ θ ≤ 1 in (15); negative θ are not consistent with (1). Also, the pure random walk corresponds to λ = ∞ in (1), or θ = 0 in (15).

LM99 use the relationship (16A) to obtain their estimate of σ_u². Let

σ̃_u² = σ̂_ζ² θ̂,  (17)

where σ̂_ζ² and θ̂ are the quasi-ML estimates from the ARIMA(0,1,1) process (15). "Quasi-ML" refers to the fact that the form of the likelihood assumes normality, but the consistency of the estimates does not depend on this assumption being correct. LM94 note that σ̂_ζ² and θ̂ are consistent under the null and O_p(1) under the unit root alternative. Therefore, σ̃_u² is also consistent under the null and O_p(1) under the alternative. If we use σ̃_u² as the denominator of the test statistic instead of σ̂_u² as in (11), we have the LM99 stationarity test

η̃_μ = T⁻² Σ_{t=1}^T S_t² / σ̃_u².  (18)

Obviously, η̃_μ is O_p(1) under the null hypothesis, and O_p(T²) under the alternative. This suggests that it may be more powerful than the KPSS/LM94 test η̂_μ, which is only O_p(T) under the alternative.

We will now proceed to suggest two modifications of the LM99 test. These are based on the following observation. The LM99 test, like KPSS and LM94, is an upper tail test. However, the LM99 estimate σ̃_u² can be negative. Even though θ < 0 is not consistent with the local level model (1), θ̂ < 0 is possible, and σ̃_u² = σ̂_ζ² θ̂ is negative if θ̂ is negative. In this case we will have η̃_μ < 0 and the test will not reject. This will be a very rare occurrence under the null (θ = 1), but it may not be rare under the alternative. Note especially that in the pure random walk case (θ = 0) we will have θ̂ < 0 with a probability that approaches 0.5 as T → ∞, and the power of the LM99 test against pure random walk alternatives will be close to 0.5, not 1.0, for large T. Our simulations will confirm this, and will show that correspondingly the LM99 test has poor power against unit root alternatives that are close to random walks (i.e., for large values of λ, or correspondingly, small values of θ). LM99 specifically assume θ > 0, thus avoiding this problem in terms of asymptotics, but still it is odd and not desirable to have a test whose power is low against a random walk. This ought to be the easiest alternative to detect.

To avoid this problem, we propose two modifications of the LM99 test statistic. The first, which we will call LMM1, uses the variance estimator σ̄_u² = σ̂_ζ². This is a consistent estimator of σ_u² under the stationary null, since θ = 1 under the null. Under the alternative, it is not a consistent estimator of σ_u², but it is O_p(1). Therefore, LMM1 is O_p(1) under the null and O_p(T²) under the alternative. This modification of the LM99 test may cost some power, because σ̂_ζ² > θ̂ σ̂_ζ² when 0 < θ̂ < 1, and we expect 0 < θ̂ < 1 when θ is not close to zero. However, we may gain power when θ is close to zero, since σ̂_ζ² cannot be negative.

We also propose another modification of the LM99 test statistic, which we will call LMM2. This is based on the estimator σ̆_u² = |θ̂| σ̂_ζ², which is also consistent under the null and O_p(1) under the alternative. For θ close to one, we expect θ̂ > 0 with high probability, and so σ̆_u² should equal σ̃_u² with high probability. Thus we do not expect substantial size distortions, and the power of LM99 and LMM2 should be similar when θ is close to one (i.e., when power is low). However, for θ close to zero (large λ), we may expect LMM2 to be more powerful than LM99, and LMM2 (unlike LM99) is consistent against the pure random walk alternative.
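To make the three competing denominators concrete, the sketch below fits the ARIMA(0,1,1) model (15) to the differenced data by conditional sum of squares, a simple stand-in for the exact quasi-ML estimation described above. This is illustrative Python (numpy and scipy assumed; the function names are ours):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_ma1(x):
    """CSS fit of x_t = beta + (1 - theta L) zeta_t; returns (theta, sigma2_zeta).
    The search range deliberately admits negative theta, which is exactly
    what can happen under alternatives close to a pure random walk."""
    xd = x - x.mean()                       # concentrate out the drift beta
    def css(theta):
        zeta, prev = np.empty_like(xd), 0.0
        for t in range(len(xd)):
            prev = xd[t] + theta * prev     # invert (1 - theta L)
            zeta[t] = prev
        return np.sum(zeta ** 2)
    theta = minimize_scalar(css, bounds=(-0.999, 0.999), method="bounded").x
    return theta, css(theta) / len(xd)

def variance_estimates(y):
    """The three denominators for the statistic in (18) and its variants."""
    theta, sig2_zeta = fit_ma1(np.diff(y))
    return {"LM99": sig2_zeta * theta,        # (17); negative if theta-hat < 0
            "LMM1": sig2_zeta,                # consistent under the null (theta = 1)
            "LMM2": sig2_zeta * abs(theta)}   # nonnegative by construction
```

Only the treatment of the sign of θ̂ differs across the three estimators, which is why LM99 and LMM2 coincide whenever θ̂ > 0.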
4. Short-run dynamics

The time series data to which a stationarity test is applied are typically highly dependent over time, and so the iid error assumption under the null is unrealistic. Empirically, it is important to allow the stationary errors u_t to be correlated. The essential assumption for the u_t is that they satisfy a functional CLT, so that their cumulations follow a Wiener process. That is, we assert

T^{-1/2} Σ_{t=1}^{[rT]} u_t ⇒ σ W(r)  (19)

for 0 < σ < ∞. Here σ² = lim_{T→∞} T⁻¹ E( Σ_{t=1}^T u_t )² is the "long-run variance", and the assertion that it is finite is an assertion of "short memory" of the process u_t. Assumptions on u_t that guarantee (19) include the regularity conditions of Phillips and Perron (1988), which involve mixing plus the existence of certain moments, or the Phillips and Solo (1992) linear process assumptions.

1) KPSS test

If (19) holds, then the numerator of the KPSS statistic satisfies

T⁻² Σ_{t=1}^T S_t² ⇒ σ² ∫_0^1 V(r)² dr.  (20)

Therefore KPSS use an estimate of σ² in the denominator of the statistic, to cancel the σ² in the numerator. A consistent estimator of the long-run variance σ² is constructed from the residuals û_t:

s²(l) = T⁻¹ Σ_{t=1}^T û_t² + 2 T⁻¹ Σ_{s=1}^{l} w(s, l) Σ_{t=s+1}^{T} û_t û_{t-s},  (21)

where w(s, l) is an optional weighting function that corresponds to the choice of a spectral window. KPSS use the "Bartlett window" w(s, l) = 1 − s/(l+1), as in Newey and West (1987), to guarantee the nonnegativity of s²(l). For the consistency of s²(l), it is necessary that the lag truncation number l → ∞ but l/T → 0 as T → ∞. The rate l = o(T^{1/2}) will usually be satisfactory under both the null and the alternative.

Let η̂_μ(l) be the KPSS statistic that uses l lags in the estimation of the long-run variance. (In the case of testing for trend stationarity, η̂_τ(l) is defined similarly.) Under the null it has the same asymptotic distribution as in the cases previously considered:

η̂_μ(l) = T⁻² Σ_{t=1}^T S_t² / s²(l) ⇒ ∫_0^1 V(r)² dr.  (22)

Under the alternative, the numerator is O_p(T²) as before. However, KPSS (1992, p. 168) show that s²(l) is O_p(lT) under the unit root alternative. Therefore, under the unit root alternative, η̂_μ(l) is only O_p(T/l). Recall that this compares to O_p(T²) when the u_t are white noise and σ_u² is known, and to O_p(T) when the u_t are white noise but σ_u² is not known. So we expect the allowance for autocorrelation of the u_t to cause a loss of power.

A possibility that is not noted in the existing literature is that we can make the KPSS test O_p(T) under the alternative, under the assumption that the u_t are MA(l), where l is known, or where we have an upper bound for l that is fixed (does not depend on T). Then the maximum order of non-zero autocorrelation is l, and we can estimate σ² consistently using the "unweighted" variance estimator

s²(l) = T⁻¹ Σ_{t=1}^T û_t² + 2 Σ_{s=1}^{l} T⁻¹ Σ_{t=s+1}^{T} û_t û_{t-s},  (23)

where l is a fixed number. The unweighted variance estimator s²(l) is consistent under the null hypothesis (s²(l) → σ²), and O_p(T) under the alternative, with l fixed.
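Both variance estimators are short to implement. A sketch covering the Bartlett-weighted estimator (21) and the unweighted estimator (23) follows (illustrative Python with numpy; the function name is ours):

```python
import numpy as np

def lrv(uhat, l, bartlett=True):
    """s^2(l) of (21) with Bartlett weights w(s,l) = 1 - s/(l+1),
    or the unweighted estimator (23) when bartlett=False."""
    T = len(uhat)
    s2 = np.sum(uhat ** 2) / T
    for s in range(1, l + 1):
        w = 1.0 - s / (l + 1.0) if bartlett else 1.0
        s2 += 2.0 * w * np.sum(uhat[s:] * uhat[:-s]) / T
    return s2
```

The statistic η̂_μ(l) of (22) is then T⁻² Σ S_t² / s²(l). Note that without the Bartlett weights the estimate is not guaranteed to be nonnegative in finite samples.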
Note, however, that under the MA(l) assumption, with l fixed, the asymptotic distribution of the KPSS statistic under the alternative does depend on l. The constant κ = (1 + 2l) would appear in the denominator of the expression for the distribution of T⁻¹ η̂_μ(l). In that sense the KPSS statistic is still O_p(T/l); but with l fixed, this does not contradict the fact that it is O_p(T).

We can summarize this discussion simply, in a way that relates to the remainder of the thesis. We may let l → ∞ as T → ∞, as in the original KPSS article. In this case the test is valid for very general forms of autocorrelation of the u_t, and the statistic is O_p(T/l) under the alternative. This version of the test will be analyzed further in Chapter 3. Alternatively, we may assume that u_t is MA(l) with l known, in which case the statistic is O_p(T) under the alternative. This version of the test will be analyzed in Chapter 2. Finally, we may assume that u_t is MA(l) for finite but unknown l, and use some model selection procedure to choose l. If the model selection procedure is consistent, the statistic is again O_p(T) under the alternative. This version of the test will be considered in Chapter 4.

2) LM94 test

Leybourne and McCabe (1994) based their test (which we call LM94) on the assumption that u_t is an autoregressive process with known order p. Their model is

Φ(L) y_t = βt + μ_t + ε_t,  (24)

where μ_t is a random walk as in (1), Φ(L) = 1 − φ₁L − ... − φ_p L^p is a pth order autoregressive polynomial in the lag operator L with roots outside the unit circle, and ε_t is white noise. Thus the stationary error in the solution for y_t would be u_t = Φ(L)⁻¹ ε_t, an AR(p) process. As with the model (1), the model (24) is second-order equivalent to an ARIMA(p,1,1) process:

Φ(L)(1 − L) y_t = β + (1 − θL) ζ_t.  (25)

The LM94 procedure estimates (25) by quasi-ML and uses the estimates φ̂₁, ..., φ̂_p to filter the data:

y*_t = y_t − φ̂₁ y_{t-1} − ... − φ̂_p y_{t-p}.  (26)

The LM94 statistic is then calculated from y*_t in exactly the same way that η̂_μ (or η̂_τ) is calculated from y_t in the white noise case. With p known, the statistic has the same asymptotic distribution under the null as before, and it is O_p(T) under the alternative; this case is considered in Chapter 2. When p is unknown, we may let p → ∞ as T → ∞. The asymptotic properties of LM94 with this method of choosing p are unknown. Finally, we may use a model selection procedure to choose the relevant AR order p. This will be discussed in Chapter 4.

The reason the parameters are estimated from the differenced series deserves comment. For simplicity, assume p = 1 and α = β = 0. In that case, if we regress y_t on its lagged value y_{t-1} to get residuals that could be a basis for the test, then, under the unit root alternative, the estimated parameter φ̂₁ → 1 and the residuals are approximately stationary, since y_t and y_{t-1} are I(1). If the stationarity test is based on these stationary residuals, then the statistic will behave as if the null of stationarity were true. This will cause the test to lose power under the unit root alternative. To avoid this problem, we estimate the parameters by ML estimation on differenced y_t. In this case, the parameter estimates are consistent under both the null and the alternative, and the residual is nonstationary under the alternative. See Leybourne and McCabe (1994) for details.
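The filtering step (26) is mechanical once the AR estimates are in hand. A sketch follows (illustrative Python; obtaining φ̂₁, ..., φ̂_p by quasi-ML estimation of the ARIMA(p,1,1) model (25) is left to a standard ARIMA routine and is not shown here):

```python
import numpy as np

def lm94_filter(y, phi):
    """y*_t = y_t - phi_1 y_{t-1} - ... - phi_p y_{t-p}, as in (26).
    Returns the filtered series for t = p+1, ..., T."""
    y = np.asarray(y, dtype=float)
    p = len(phi)
    ystar = y[p:].copy()
    for i in range(1, p + 1):
        ystar -= phi[i - 1] * y[p - i: len(y) - i]
    return ystar
```

The LM94 statistic is then η̂_μ (or η̂_τ), computed from y* exactly as in the white noise case.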
3) LM99, LMM1 and LMM2

The LM99 test (and its modifications LMM1 and LMM2) handles short-run dynamics in the same way as the LM94 test. Treating p as known, we estimate the ARIMA(p,1,1) model (25) and filter the data as in (26). Having done so, we calculate the LM99 statistic in the same way as was done with white noise errors. When p is known, the LM99, LMM1 and LMM2 test statistics are O_p(T²) under the alternative, and their distribution does not depend on the value of p. We consider this case in Chapter 2. When p is unknown, we can let p → ∞ as T → ∞. The asymptotic properties of the tests in this case are unknown; we analyze this case further in Chapter 3. Finally, when p is finite but unknown, we can also use a model selection procedure to choose it. Leybourne and McCabe (1999) suggest a consistent model selection rule for LM99, and they show that the model selection rule does not affect the distribution of the test statistic. We consider model selection rules further in Chapter 4.

4) Overfitting and near cancellation

The LM94 and LM99 tests are based on estimation of the ARIMA(p,1,1) model given in equation (25) above. Here we rewrite this as Φ(L)(1 − L)y_t = β + Θ(L)ζ_t, where Θ(L) = 1 − θL. That is, if we factor Φ(L) = (1 − a₁L)(1 − a₂L)···(1 − a_pL), the model involves the product of p AR factors and the single MA factor Θ(L). When θ is close to one of the a_i, the MA factor nearly cancels the corresponding AR factor, the parameters are poorly identified, and the estimates of the AR coefficients and of θ become imprecise. This "near cancellation" problem commonly arises for large values of λ when p is overspecified, since then θ is close to zero and the overfitted AR polynomial has a root near zero. Its consequences for the power of the Leybourne-McCabe tests will show up repeatedly in our simulations.

5. Organization of the thesis

The remainder of the thesis is organized as follows. Chapter 2 studies the KPSS and Leybourne-McCabe tests when the number of lags (l for the KPSS test, p for the Leybourne-McCabe tests) is fixed. Chapter 3 studies the tests when the number of lags increases with the sample size. Chapter 4 proposes consistent model selection rules for choosing the number of lags and studies the tests when these rules are used. Chapter 5 concludes.

Chapter 2
Performance of the KPSS and Leybourne-McCabe Tests with a Fixed Number of Lags

1. Introduction

In this chapter we consider the KPSS and Leybourne-McCabe tests that use a fixed number of lags, and we investigate their size and power characteristics via simulations. Our data generating processes include white noise, AR(1) errors, MA(1) errors, and ARMA(1,1) errors.

2. Simulations

1) The KPSS test with iid errors

We begin with the KPSS test of level stationarity when the errors u_t are iid and there is no time trend. We consider sample sizes T = 50, 100, 200 and 500 and numbers of lags l = 0, 1, 2 and 3; all tests are conducted at the 5% significance level, and the results in Tables 2.1-2.4 are based on 20,000 replications. Under the null hypothesis λ = 0, and Table 2.1 gives the size of the test: it is close to the nominal 5% level, with some overrejection for small T and larger l (e.g., 0.077 for T = 50 and l = 3).
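Each size or power entry in the tables of this chapter is a rejection frequency across replications. A stripped-down version of one cell of the experiment is sketched here (illustrative Python with numpy; the replication count, seeding and the normalization σ_u² = 1 are ours, not those of the reported results):

```python
import numpy as np

def rejection_rate(T, lam, reps=2000, crit=0.463, seed=0):
    """Monte Carlo rejection frequency of the 5% KPSS level test (l = 0)
    under the local level model (1) with beta = 0 and sigma_u = 1;
    lam = 0 estimates size, lam > 0 estimates power."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        y = np.cumsum(np.sqrt(lam) * rng.standard_normal(T))  # random walk mu_t
        y += rng.standard_normal(T)                           # iid error u_t
        uhat = y - y.mean()
        S = np.cumsum(uhat)
        eta = np.sum(S ** 2) / (T ** 2 * np.mean(uhat ** 2))  # eta-hat_mu, (11)
        hits += eta > crit
    return hits / reps
```

For example, rejection_rate(100, 0.0) corresponds to a size entry of Table 2.1 and rejection_rate(100, 0.1) to a power entry of Table 2.2, up to simulation noise from the smaller replication count.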
For larger values of p, the tests reject too often under the null, and unsurprisingly these size distortions are larger when T is smaller and p is larger. Most notably, the size distortions (overrejection) are worse for all of the LM tests than they were for the KPSS test (with I for the KPSS test equal to p for the Leyboume—McCabe tests), and they are worse for the LM99 and LMM2 tests than for the LM94 and LMMl tests. For example, for T=100, compare size of 0.056 for KPSS (with I=3), to 0.077 and 0.074 for LM94 and LMMl, respectively (with p=3), and to 0.100 for both LM99 and LMM2 (with p=3). Any power gains of the LM99 test or its 22 modifications would have to be weighed against its size problem when T is moderate and p is overspecified. We note in passing that LM99 and LMM2 are essentially identical in our simulations, under the null hypothesis. The probability of obtaining a negative estimate of 0, when the true 0 equals one, is negligible. Except for the LM99 test, the power of the tests increases with T and generally with 2.. For the LM99 test, we have the disturbing feature that power increases with 2. for small 2., but then decreases with further increases in 2. This is a reflection of the fact that, for the pure random walk case of 2e=co, the probability of 19 < O approaches 0.5 as T—+oo. Correspondingly the power of LM99 is close to 0.5, not 1.0, for large T and 21., and this is what our simulations show. In our view, this is a serious defect of the LM99 test, but it is easily solved by using the LMMZ test instead (that is, by taking the absolute value of (9 ). For LM94, LMMl and LMMZ, power essentially always increases as we increase T for a given 2. (and p), as we would expect, given the consistency of the tests. But, for large 2., power does not necessarily increase with 2. for a given T (and p). This is a reflection on the “near cancellation” problem discussed in Chapter 1. It does not occur with the KPSS test. Because we had some substantial size distortions, and these varied across tests and the value of p, we will avoid a detailed comparison of power of the tests and the way that it depends in p, and turn to a discussion of size-adjusted power. Table 2.6 presents the “actual critical values”, as Table 2.3 did for KPSS. Then Table 2.7 gives size-adjusted power (power using the actual critical values from Table 2.6) for the various LM tests. 23 We note first that the LM99 test has poor power (approximately 0.5) when T and 2. are large. This is the same phenomenon that was commented on above, and it is probably the most striking result in either Table 2.5 or Table 2.7. The other obvious and striking result in Table 2.7 is that, for given values of T, 2 and p, all of the various Leyboume-McCabe tests have very similar size adjusted power. (The exception, as note, is the LM99 test, for large T and 2.) There is simply not much difference between these tests. It is generally true, perhaps, that LMM2 has greater size- adjusted power than LM94, but the differences are small. This is perhaps surprising, in light of the fact that the LM94 statistic is only Op(T), while the others are Op(T2). Size-adjusted power usually falls as p increases, for a given T and 2. This is as expected since we are estimating needlessly many parameters. However, this decrease in size-adjusted power is not terribly large. For example, for T=100, 2=0.01, for LM94 we have size-adjusted power of 0.606, 0.579, 0.549 and 0.530 for p=0, 1, 2, 3, respectively. 
For LMM2, the size-adjusted powers follow a similar pattern: 0.621, 0.590, 0.556, 0.524. It is revealing to compare these to the size-adjusted power of KPSS, from Table 2.4, where for T = 100 and λ = 0.01 we find 0.590, 0.535, 0.504 and 0.451, respectively, for l = 0, 1, 2, 3. Obviously overspecifying p in the Leybourne-McCabe tests does not cause as much of a power loss as overspecifying l in the KPSS test. This is as suspected from the asymptotic theory.

More generally, the KPSS test with l = 0 is identical to LM94 with p = 0, and it has size-adjusted power that is very similar to that of the other Leybourne-McCabe tests with p = 0. However, KPSS with a positive number of lags l is generally less powerful than the LM tests with p = l.

Empirically, one is unlikely to know the "correct" number of lags. Then the main advantage of the KPSS test over the Leybourne-McCabe tests is that overspecifying the number of lags causes less size distortion for KPSS than for Leybourne-McCabe. Conversely, the main advantage of the Leybourne-McCabe tests over the KPSS test is that they are more powerful when the number of lags is overspecified.

3) The KPSS and Leybourne-McCabe tests with AR(1) errors

Here we perform simulations with AR(1) errors of the form u_t = ρu_{t-1} + ε_t, where ε_t is normal white noise. We set the coefficient value ρ to 1/3 so that the long-run variance of the AR(1) error series equals that of the MA(1) error series (which we will consider in the following section) with coefficient θ = 0.5. For details see Appendix I. We consider the KPSS test with l = 1 and the Leybourne-McCabe tests with p = 1. The Leybourne-McCabe tests are based on a correctly specified model, while the KPSS test is not (since an AR(1) corresponds to an MA(∞)), and we want to see how much difference this makes.

Table 2.8 gives the size and power of the various tests, for values of T and λ similar to those considered previously. Table 2.9 gives the actual critical values, while Table 2.10 gives size-adjusted power.

The KPSS test shows moderate size distortions (e.g., size of 0.078 for T = 500). This should be expected, since its long-run variance calculation does not take into account the correlations of order greater than one. (Presumably the size distortion would be larger for larger values of ρ in the AR(1) DGP.) Its power compares favorably to the power of the Leybourne-McCabe tests, but this is only due to the size distortion: from Table 2.10, the size-adjusted power of KPSS is lower than that of the Leybourne-McCabe tests. This is also as expected.

We can also note that the LM99 test has low power compared to all of the other tests when λ = 1 as well as when λ = 100. This is a reflection of a near cancellation between the AR root of 1/3 and the MA root of approximately 0.382 implied by λ = 1, which causes the estimates of ρ and θ to be imprecise. Apparently this imprecision is sufficient to cause a substantial number of negative estimates of θ. LMM2, which takes the absolute value, does not suffer from this problem.

When we compare the various Leybourne-McCabe tests, we first notice that none of them shows any substantial size distortions. Power and size-adjusted power are therefore more or less equivalent comparisons. As in the previous section, the various Leybourne-McCabe tests are all more or less equally powerful (except that, as before, LM99 is very poor when λ is large). LMM2 is a little more powerful than LM94, but the difference is small.
The size and power characteristics of the Leybourne-McCabe tests with p = 1 are very similar whether the DGP is AR(1) with ρ = 1/3 (Tables 2.8-2.10) or white noise (Tables 2.5-2.7). Of course, white noise is AR(1) with ρ = 0, so this is evidence supporting the conjecture that, if p is correctly specified, the precise values of the AR parameters are not too important.

4) The KPSS and Leybourne-McCabe tests with MA(1) errors

Now we perform simulations with MA(1) errors of the form u_t = ε_t + θε_{t-1}, where ε_t is normal white noise. We pick θ = 0.5 to make the long-run variance equal to that of the AR(1) process of the previous section. As in the previous section, we consider the KPSS test with l = 1 and the LM tests with p = 1. Now, however, the KPSS test is based on a correctly specified model while the Leybourne-McCabe tests are not.

Our results are given in Tables 2.11-2.13, which have the same format as Tables 2.8-2.10 of the previous section. Now the KPSS test has the correct size, whereas the Leybourne-McCabe tests suffer from size distortions: they underreject under the null hypothesis. This causes their power to be low. In terms of size-adjusted power, the Leybourne-McCabe tests are roughly similar to each other (again, except for LM99 when λ is large), and they generally, but not always, have slightly lower size-adjusted power than the KPSS test.

The size-adjusted power of the KPSS test with l = 1 is lower when the errors are MA(1) with θ = 0.5 than when the errors are white noise (i.e., MA(1) with θ = 0). This is most noticeable when power is relatively low.

5) The KPSS and Leybourne-McCabe tests with ARMA(1,1) errors

Here we perform simulations using ARMA(1,1) errors of the form u_t = ρu_{t-1} + ε_t + θε_{t-1}, where ρ = 1/3 and θ = 1/2. We choose these specific values of the AR and MA parameters to equate the contributions of the AR and MA terms to the long-run variance of the ARMA(1,1) error series. (See Appendix I for details.) In doing so we are trying to ensure a fair comparison of the KPSS test with l = 1 and the Leybourne-McCabe tests with p = 1, neither of which is based on a correctly specified model. Our results are given in Tables 2.14-2.16, which have the same format as the previous tables for the AR and MA cases.

All of the tests show considerable size distortions, even for sample sizes as large as T = 500. The KPSS test overrejects while the Leybourne-McCabe tests underreject the null. The power of the KPSS test is apparently greater than that of the Leybourne-McCabe tests for small values of λ, but generally less for large values of λ (except that, as before, LM99 has low power for large λ).

Given the size distortions of the tests, and especially since these are in different directions for different tests, size-adjusted power is a fairer comparison. The size-adjusted power of the KPSS test and the Leybourne-McCabe tests is similar when λ < 1, but the power of the Leybourne-McCabe tests is greater than that of the KPSS test when λ ≥ 1. The exception is still the LM99 test, which is not powerful when λ is large.

An interesting detail is the very low power of the LM99 test when λ = 1; e.g., its size-adjusted power is 0.002 for λ = 1, T = 500. This is again a reflection of a near cancellation problem. The AR root of 1/3 nearly cancels the MA root of approximately 0.382 implied by λ = 1, leaving the MA root of −0.5 from the ARMA process. Therefore the estimate of θ is nearly always negative, and LM99 has virtually no power. LMM2, which takes the absolute value of the estimate of θ, does not suffer from this problem.
3. Conclusions

In this chapter we considered the KPSS and Leybourne-McCabe tests that use a fixed number of lags. We investigated the size and power characteristics of the tests via simulations. In these simulations our data generating processes included white noise, AR(1) errors, MA(1) errors, and ARMA(1,1) errors. We gave our conclusions as we discussed the simulations, but we will repeat some of them here.

1. The LM99 test is not recommended. It has poor power for large values of λ (alternatives close to a pure random walk). In fact, for λ = ∞, power approaches one half, not one, as T → ∞. The LMM2 test, which simply uses the absolute value of the LM99 statistic, solves this power problem, and it does so without causing any noticeable size distortions, since the probability of a negative test statistic under the null is negligible.

2. The LM94 test and the modified tests LMM1 and LMM2 sometimes show a loss in power due to the "near cancellation" problem, identified in Chapter 1, that occurs when the value of θ is close to one of the AR roots. This commonly occurs for large values of λ when p is overspecified, so that θ is close to zero and one of the AR roots equals zero. In these circumstances the LM tests generally are dominated by the KPSS test, which does not suffer from this problem.

3. There is not much difference in power between the LM94 test, on the one hand, and the LM99 test or its modifications (LMM1, LMM2), on the other hand. This is perhaps surprising, because the LM94 statistic is O_p(T) under the alternative while the others are O_p(T²).

4. The white noise case was argued to be a fair setting for the comparison of the KPSS and Leybourne-McCabe tests, since it satisfies both the MA(l) and AR(p) assumptions. With white noise errors, the KPSS test with l = 0 is the same as LM94 with p = 0, and it performs similarly to the other LM tests with p = 0. If the errors are white noise, but we overspecify l (for the KPSS test) or p (for the Leybourne-McCabe tests), there is a trade-off between size and power considerations. Overspecifying l in the KPSS test causes smaller size distortions than overspecifying p in the Leybourne-McCabe tests, but it also results in a greater loss of power for the KPSS test than for the Leybourne-McCabe tests.

5. Our simulations with AR(1), MA(1) and ARMA(1,1) errors show that there are size distortions and losses of size-adjusted power if one underspecifies l or p. So the KPSS test with fixed l does not do well if the DGP is AR(p), and the Leybourne-McCabe tests with fixed p do not do well if the DGP is MA(l).

Appendix I

We wish to perform simulations with AR(1) errors and also with MA(1) errors. We want to pick values for the AR parameter ρ and the MA parameter θ that yield equal values for the long-run variance of the process. Here we define the long-run variance σ² as:

σ² = γ₀ + 2γ₁ + 2γ₂ + 2γ₃ + ... = γ₀ + 2 Σ_{j=1}^∞ γ_j,  (A1)

where γ_j is the jth autocovariance.

1. MA(1) case: y_t = u_t = ε_t + θε_{t-1}, where ε_t is white noise. Then γ₀ = (1 + θ²)σ_ε², where σ_ε² is the variance of ε_t; γ₁ = θσ_ε²; and γ_j = 0 for j > 1. Therefore, in the case of an MA(1) process, the long-run variance equals

σ² = γ₀ + 2γ₁ = (1 + θ² + 2θ)σ_ε² = (1 + θ)² σ_ε².  (A2)

2. AR(1) case: y_t = ρy_{t-1} + ε_t, where ε_t is white noise. Then

γ₀ = σ_ε²/(1 − ρ²), γ₁ = ργ₀ = ρσ_ε²/(1 − ρ²), γ₂ = ργ₁ = ρ²σ_ε²/(1 − ρ²), ..., γ_j = ρ^j σ_ε²/(1 − ρ²), ...

Therefore, the long-run variance can be calculated as

σ² = γ₀ + 2γ₁ + 2γ₂ + 2γ₃ + ...
   = [1 + 2ρ + 2ρ² + ...] σ_ε²/(1 − ρ²)
   = [(1 + ρ + ρ² + ...) + (ρ + ρ² + ...)] σ_ε²/(1 − ρ²)
   = [1/(1 − ρ) + ρ/(1 − ρ)] σ_ε²/(1 − ρ²)
   = (1 + ρ) σ_ε² / [(1 − ρ²)(1 − ρ)]
   = σ_ε²/(1 − ρ)².  (A3)

For our MA(1) process with parameter θ to have the same long-run variance as our AR(1) process with parameter ρ, we require

(1 + θ)² = 1/(1 − ρ)².  (A4)

For θ = 1/2, this is satisfied for ρ = 1/3.
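The equality (A4) is easy to check numerically for the pair used in the simulations (illustrative Python):

```python
import numpy as np

theta, rho = 0.5, 1.0 / 3.0
lrv_ma1 = (1.0 + theta) ** 2          # (A2), with sigma_eps^2 = 1
lrv_ar1 = 1.0 / (1.0 - rho) ** 2      # (A3), with sigma_eps^2 = 1
print(lrv_ma1, lrv_ar1)               # both equal 2.25

# the same value from truncating sigma^2 = gamma_0 + 2*sum_j gamma_j, as in (A1):
gammas = np.array([rho ** j / (1.0 - rho ** 2) for j in range(200)])
print(gammas[0] + 2.0 * gammas[1:].sum())   # approximately 2.25
```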
Table 2.1: Size of KPSS Test with Fixed Number of Lags
(DGP: iid errors, no time trend; 5% significance level)

  T     l=0    l=1    l=2    l=3
  50    0.050  0.050  0.061  0.077
  100   0.049  0.051  0.052  0.056
  200   0.051  0.051  0.051  0.050
  500   0.050  0.047  0.050  0.050

Simulation results based on 20,000 replications.

Table 2.2: Power of KPSS Test with Fixed Number of Lags
(DGP: iid errors, no time trend; 5% significance level)

  T     λ       l=0    l=1    l=2    l=3
  50    0.0001  0.051  0.057  0.067  0.075
        0.001   0.075  0.081  0.084  0.091
        0.01    0.287  0.261  0.234  0.203
        0.1     0.721  0.615  0.524  0.432
        1       0.924  0.741  0.607  0.502
        100     0.958  0.762  0.625  0.509
        10000   0.959  0.758  0.625  0.504
  100   0.0001  0.063  0.062  0.066  0.066
        0.001   0.168  0.161  0.153  0.151
        0.01    0.587  0.543  0.511  0.473
        0.1     0.927  0.845  0.757  0.681
        1       0.989  0.909  0.809  0.723
        100     0.994  0.921  0.812  0.723
        10000   0.998  0.918  0.813  0.721
  200   0.0001  0.097  0.095  0.097  0.099
        0.001   0.399  0.280  0.373  0.370
        0.01    0.846  0.814  0.782  0.746
        0.1     0.990  0.963  0.919  0.872
        1       0.999  0.980  0.942  0.891
        100     1.000  0.983  0.944  0.896
        10000   1.000  0.980  0.941  0.898
  500   0.0001  0.307  0.305  0.305  0.298
        0.001   0.788  0.774  0.764  0.751
        0.01    0.997  0.979  0.971  0.955
        0.1     1.000  0.998  0.993  0.984
        1       1.000  0.979  0.995  0.985
        100     1.000  0.999  0.996  0.987
        10000   1.000  0.999  0.995  0.985

Simulation results based on 20,000 replications.

Table 2.3: Actual Critical Values of KPSS Test with Fixed Number of Lags
(DGP: iid errors, no time trend; 5% significance level)

  T     l=0     l=1     l=2     l=3
  50    0.4770  0.4937  0.5048  0.5365
  100   0.4599  0.4733  0.4661  0.4877
  200   0.4509  0.4753  0.4744  0.4586
  500   0.4622  0.4585  0.4587  0.4646

Simulation results based on 10,000 replications.

Table 2.4: Size-Adjusted Power of KPSS Test with Fixed Number of Lags
(DGP: iid errors, no time trend; 5% significance level)

  T     λ       l=0    l=1    l=2    l=3
  50    0.0001  0.047  0.047  0.050  0.050
        0.001   0.074  0.067  0.065  0.059
        0.01    0.288  0.235  0.203  0.131
        0.1     0.722  0.591  0.482  0.333
        1       0.922  0.720  0.581  0.406
        100     0.957  0.745  0.597  0.414
        10000   0.960  0.736  0.590  0.418
  100   0.0001  0.063  0.056  0.062  0.056
        0.001   0.166  0.150  0.154  0.135
        0.01    0.590  0.535  0.504  0.451
        0.1     0.927  0.834  0.753  0.665
        1       0.990  0.904  0.804  0.699
        100     0.995  0.916  0.808  0.712
        10000   0.994  0.915  0.817  0.709
  200   0.0001  0.102  0.091  0.088  0.098
        0.001   0.403  0.382  0.364  0.365
        0.01    0.853  0.811  0.770  0.742
        0.1     0.991  0.958  0.919  0.878
        1       0.999  0.980  0.936  0.894
        100     1.000  0.982  0.944  0.896
        10000   1.000  0.982  0.940  0.896
  500   0.0001  0.310  0.311  0.305  0.296
        0.001   0.790  0.776  0.765  0.748
        0.01    0.987  0.981  0.971  0.953
        0.1     1.000  0.999  0.993  0.984
        1       1.000  0.999  0.996  0.986
        100     1.000  0.999  0.995  0.985
        10000   1.000  0.999  0.996  0.986

Simulation results based on 20,000 replications.
Table 2.5: Size and Power of Leybourne-McCabe Tests with Fixed Number of Lags
(DGP: iid errors, no time trend; 5% significance level)

p = 0
  T     λ      LM94   LM99   LMM1   LMM2
  100   0      0.048  0.054  0.040  0.054
        0.001  0.170  0.179  0.156  0.179
        0.01   0.601  0.626  0.590  0.626
        1      0.988  0.999  0.996  1.000
        100    0.994  0.537  0.999  1.000
  200   0      0.049  0.050  0.044  0.050
        0.001  0.398  0.404  0.386  0.404
        0.01   0.854  0.870  0.853  0.870
        1      1.000  1.000  1.000  1.000
        100    0.998  0.552  1.000  1.000
  500   0      0.050  0.051  0.045  0.051
        0.001  0.782  0.788  0.779  0.778
        0.01   0.987  0.991  0.988  0.991
        1      1.000  1.000  1.000  1.000
        100    1.000  0.582  1.000  1.000

p = 1
  T     λ      LM94   LM99   LMM1   LMM2
  100   0      0.055  0.063  0.050  0.063
        0.001  0.165  0.180  0.154  0.180
        0.01   0.594  0.620  0.586  0.620
        1      0.974  0.902  0.981  0.985
        100    0.908  0.417  0.913  0.917
  200   0      0.054  0.058  0.049  0.058
        0.001  0.391  0.400  0.379  0.400
        0.01   0.847  0.861  0.848  0.861
        1      0.998  0.970  0.999  0.999
        100    0.947  0.448  0.947  0.948
  500   0      0.054  0.055  0.050  0.055
        0.001  0.784  0.791  0.780  0.791
        0.01   0.986  0.989  0.987  0.989
        1      1.000  0.998  1.000  1.000
        100    1.000  0.486  0.979  0.979

Table 2.5 (continued)

p = 2
  T     λ      LM94   LM99   LMM1   LMM2
  100   0      0.070  0.085  0.064  0.086
        0.001  0.187  0.205  0.180  0.205
        0.01   0.588  0.618  0.584  0.618
        1      0.931  0.705  0.937  0.941
        100    0.915  0.433  0.922  0.926
  200   0      0.060  0.065  0.055  0.065
        0.001  0.400  0.412  0.390  0.412
        0.01   0.839  0.855  0.839  0.855
        1      0.982  0.805  0.983  0.984
        100    0.952  0.456  0.952  0.953
  500   0      0.050  0.051  0.047  0.051
        0.001  0.780  0.787  0.778  0.787
        0.01   0.986  0.989  0.987  0.989
        1      1.000  0.924  1.000  1.000
        100    0.983  0.485  0.983  0.983

p = 3
  T     λ      LM94   LM99   LMM1   LMM2
  100   0      0.077  0.100  0.074  0.100
        0.001  0.194  0.217  0.188  0.218
        0.01   0.576  0.608  0.576  0.609
        1      0.914  0.685  0.920  0.926
        100    0.919  0.432  0.927  0.930
  200   0      0.062  0.072  0.058  0.072
        0.001  0.405  0.420  0.399  0.420
        0.01   0.832  0.847  0.834  0.847
        1      0.975  0.800  0.976  0.976
        100    0.985  0.469  0.959  0.959
  500   0      0.057  0.058  0.054  0.058
        0.001  0.781  0.789  0.780  0.789
        0.01   0.985  0.988  0.987  0.988
        1      0.999  0.950  0.999  0.999
        100    0.983  0.498  0.983  0.983

Simulation results based on 10,000 replications.

Table 2.6: Actual Critical Values of Leybourne-McCabe Tests with Fixed Number of Lags
(DGP: iid errors, no time trend; 5% significance level)

  T     LM94    LM99    LMM1    LMM2
p = 0
  100   0.4575  0.4709  0.4323  0.4709
  200   0.4578  0.4629  0.4371  0.4629
  500   0.4618  0.4657  0.4476  0.4657
p = 1
  100   0.4870  0.5160  0.4621  0.5160
  200   0.4805  0.4922  0.4584  0.4922
  500   0.4756  0.4804  0.4613  0.4804
p = 2
  100   0.5285  0.5863  0.5158  0.5873
  200   0.4946  0.5141  0.4767  0.5141
  500   0.4631  0.4691  0.4491  0.4685
p = 3
  100   0.5501  0.6502  0.5461  0.6515
  200   0.5040  0.5332  0.4897  0.5332
  500   0.4885  0.4955  0.4774  0.4955

Simulation results based on 10,000 replications.
Table 2.7
Size-Adjusted Power of Leybourne-McCabe Tests with Fixed Number of Lags
(DGP: iid errors, no time trend)
5% significance level

p = 0
  T     λ       LM94    LM99    LMM1    LMM2
  100   0.001   0.172   0.175   0.171   0.175
        0.01    0.606   0.621   0.610   0.621
        1       0.988   1.000   0.997   1.000
        100     0.994   0.537   1.000   1.000
  200   0.001   0.401   0.405   0.404   0.404
        0.01    0.855   0.870   0.863   0.870
        1       1.000   1.000   1.000   1.000
        100     0.998   0.552   1.000   1.000
  500   0.001   0.783   0.787   0.785   0.787
        0.01    0.987   0.991   0.989   0.991
        1       1.000   1.000   1.000   1.000
        100     1.000   0.582   1.000   1.000

p = 1
  T     λ       LM94    LM99    LMM1    LMM2
  100   0.001   0.153   0.153   0.154   0.153
        0.01    0.579   0.590   0.586   0.590
        1       0.972   0.901   0.981   0.984
        100     0.905   0.413   0.913   0.914
  200   0.001   0.380   0.381   0.383   0.381
        0.01    0.841   0.854   0.850   0.854
        1       0.997   0.970   0.999   0.999
        100     0.945   0.447   0.948   0.947
  500   0.001   0.777   0.782   0.781   0.782
        0.01    0.985   0.988   0.987   0.988
        1       1.000   0.998   1.000   1.000
        100     0.979   0.486   0.979   0.979

Table 2.7 (Continued)
Size-Adjusted Power of Leybourne-McCabe Tests with Fixed Number of Lags
(DGP: iid errors, no time trend)
5% significance level

p = 2
  T     λ       LM94    LM99    LMM1    LMM2
  100   0.001   0.156   0.152   0.153   0.152
        0.01    0.549   0.556   0.554   0.556
        1       0.923   0.699   0.933   0.935
        100     0.908   0.425   0.918   0.919
  200   0.001   0.377   0.378   0.391   0.378
        0.01    0.827   0.838   0.833   0.838
        1       0.981   0.804   0.983   0.983
        100     0.950   0.454   0.952   0.951
  500   0.001   0.780   0.784   0.783   0.784
        0.01    0.986   0.989   0.988   0.989
        1       1.000   0.924   1.000   1.000
        100     0.983   0.485   0.983   0.983

p = 3
  T     λ       LM94    LM99    LMM1    LMM2
  100   0.001   0.153   0.143   0.152   0.143
        0.01    0.530   0.524   0.533   0.524
        1       0.901   0.671   0.912   0.912
        100     0.911   0.423   0.922   0.921
  200   0.001   0.380   0.379   0.383   0.379
        0.01    0.817   0.826   0.823   0.826
        1       0.973   0.798   0.975   0.975
        100     0.956   0.467   0.958   0.957
  500   0.001   0.770   0.775   0.773   0.775
        0.01    0.983   0.987   0.986   0.987
        1       0.999   0.950   0.999   0.999
        100     0.982   0.497   0.983   0.982

Simulation results based on 10,000 replications.

Table 2.8
Size and Power of KPSS and Leybourne-McCabe Tests with AR(1) Errors
(DGP: y_t = ρy_{t-1} + ε_t, ρ = 1/3)
5% significance level

              KPSS    LM94    LM99    LMM1    LMM2
  T     λ     l=1     p=1     p=1     p=1     p=1
  100   0       0.071   0.053   0.056   0.055   0.056
        0.001   0.199   0.171   0.182   0.172   0.182
        0.01    0.588   0.595   0.618   0.601   0.618
        1       0.918   0.936   0.627   0.942   0.945
        100     0.924   0.989   0.468   0.996   0.997
  200   0       0.078   0.051   0.052   0.052   0.052
        0.001   0.436   0.409   0.418   0.409   0.418
        0.01    0.839   0.845   0.860   0.849   0.860
        1       0.983   0.982   0.758   0.982   0.982
        100     0.984   1.000   0.497   1.000   1.000
  500   0       0.078   0.047   0.047   0.047   0.047
        0.001   0.811   0.787   0.792   0.788   0.792
        0.01    0.983   0.985   0.988   0.986   0.988
        1       0.999   0.999   0.956   0.999   0.999
        100     1.000   1.000   0.510   1.000   1.000

Simulation results based on 20,000 replications for the KPSS test, 10,000 replications for the Leybourne-McCabe tests.

Table 2.9
Actual Critical Values of KPSS and Leybourne-McCabe Tests with AR(1) Errors
(DGP: y_t = ρy_{t-1} + ε_t, ρ = 1/3)
5% significance level

        KPSS     LM94     LM99     LMM1     LMM2
  T     l=1      p=1      p=1      p=1      p=1
  100   0.5241   0.4710   0.4806   0.4780   0.4811
  200   0.5501   0.4672   0.4711   0.4728   0.4711
  500   0.5448   0.4519   0.4533   0.4514   0.4533

Simulation results based on 20,000 replications for the KPSS test, 10,000 replications for the Leybourne-McCabe tests.
Table 2.10
Size-Adjusted Power of KPSS and Leybourne-McCabe Tests with AR(1) Errors
(DGP: y_t = ρy_{t-1} + ε_t, ρ = 1/3)
5% significance level

              KPSS    LM94    LM99    LMM1    LMM2
  T     λ     l=1     p=1     p=1     p=1     p=1
  100   0.001   0.169   0.166   0.173   0.164   0.173
        0.01    0.545   0.591   0.611   0.592   0.610
        1       0.899   0.935   0.626   0.941   0.944
        100     0.903   0.989   0.467   0.996   0.997
  200   0.001   0.388   0.405   0.412   0.402   0.412
        0.01    0.806   0.844   0.858   0.845   0.857
        1       0.973   0.982   0.758   0.982   0.982
        100     0.974   1.000   0.497   1.000   1.000
  500   0.001   0.775   0.792   0.796   0.793   0.796
        0.01    0.977   0.986   0.988   0.987   0.988
        1       0.999   0.999   0.956   0.999   0.999
        100     0.999   1.000   0.510   1.000   1.000

Simulation results based on 20,000 replications for the KPSS test, 10,000 replications for the Leybourne-McCabe tests.

Table 2.11
Size and Power of KPSS and Leybourne-McCabe Tests with MA(1) Errors
(DGP: y_t = ε_t + θε_{t-1}, θ = 0.5)
5% significance level

              KPSS    LM94    LM99    LMM1    LMM2
  T     λ     l=1     p=1     p=1     p=1     p=1
  100   0       0.051   0.029   0.028   0.032   0.039
        0.001   0.095   0.054   0.058   0.051   0.058
        0.01    0.384   0.314   0.325   0.309   0.325
        1       0.900   0.877   0.848   0.883   0.892
        100     0.918   0.910   0.419   0.916   0.920
  200   0       0.053   0.025   0.025   0.026   0.030
        0.001   0.242   0.171   0.174   0.166   0.174
        0.01    0.685   0.630   0.642   0.629   0.642
        1       0.980   0.971   0.969   0.972   0.974
        100     0.981   0.947   0.451   0.947   0.949
  500   0       0.052   0.022   0.024   0.022   0.025
        0.001   0.622   0.546   0.548   0.543   0.548
        0.01    0.946   0.932   0.937   0.932   0.937
        1       0.999   1.000   1.000   1.000   1.000
        100     0.999   0.980   0.496   0.980   0.980

Simulation results based on 20,000 replications for the KPSS test, 10,000 replications for the Leybourne-McCabe tests.

Table 2.12
Actual Critical Values of KPSS and Leybourne-McCabe Tests with MA(1) Errors
(DGP: y_t = ε_t + θε_{t-1}, θ = 0.5)
5% significance level

        KPSS     LM94     LM99     LMM1     LMM2
  T     l=1      p=1      p=1      p=1      p=1
  100   0.4460   0.3794   0.3704   0.3846   0.4129
  200   0.4532   0.3647   0.3580   0.3589   0.3766
  500   0.4503   0.3583   0.3591   0.3499   0.3629

Simulation results based on 20,000 replications for the KPSS test, 10,000 replications for the Leybourne-McCabe tests.

Table 2.13
Size-Adjusted Power of KPSS and Leybourne-McCabe Tests with MA(1) Errors
(DGP: y_t = ε_t + θε_{t-1}, θ = 0.5)
5% significance level

              KPSS    LM94    LM99    LMM1    LMM2
  T     λ     l=1     p=1     p=1     p=1     p=1
  100   0.001   0.108   0.088   0.095   0.079   0.075
        0.01    0.397   0.373   0.391   0.360   0.356
        1       0.906   0.894   0.863   0.897   0.899
        100     0.925   0.921   0.428   0.927   0.925
  200   0.001   0.244   0.234   0.242   0.232   0.228
        0.01    0.696   0.695   0.704   0.692   0.693
        1       0.980   0.976   0.973   0.977   0.977
        100     0.984   0.953   0.457   0.954   0.954
  500   0.001   0.621   0.622   0.624   0.623   0.621
        0.01    0.949   0.954   0.957   0.956   0.956
        1       0.999   1.000   1.000   1.000   1.000
        100     0.999   0.982   0.499   0.983   0.982

Simulation results based on 20,000 replications for the KPSS test, 10,000 replications for the Leybourne-McCabe tests.

Table 2.14
Size and Power of KPSS and Leybourne-McCabe Tests with ARMA(1,1) Errors
(DGP: y_t = ρy_{t-1} + ε_t + θε_{t-1}, ρ = 1/3, θ = 1/2)
5% significance level

              KPSS    LM94    LM99    LMM1    LMM2
  T     λ     l=1     p=1     p=1     p=1     p=1
  100   0       0.081   0.014   0.015   0.016   0.016
        0.001   0.148   0.041   0.043   0.043   0.044
        0.01    0.456   0.282   0.282   0.286   0.294
        1       0.911   0.981   0.099   0.987   0.988
        100     0.924   0.989   0.450   0.996   0.996
  200   0       0.086   0.017   0.017   0.017   0.017
        0.001   0.313   0.109   0.111   0.110   0.112
        0.01    0.741   0.573   0.580   0.574   0.581
        1       0.981   0.999   0.030   0.999   0.999
        100     0.985   1.000   0.457   1.000   1.000
  500   0       0.089   0.014   0.014   0.014   0.015
        0.001   0.682   0.497   0.501   0.498   0.501
        0.01    0.962   0.911   0.915   0.912   0.915
        1       0.999   1.000   0.002   1.000   1.000
        100     0.999   1.000   0.444   1.000   1.000

Simulation results based on 20,000 replications for the KPSS test, 10,000 replications for the Leybourne-McCabe tests.
Table 2.15
Actual Critical Values of KPSS and Leybourne-McCabe Tests with ARMA(1,1) Errors
(DGP: y_t = ρy_{t-1} + ε_t + θε_{t-1}, ρ = 1/3, θ = 1/2)
5% significance level

        KPSS     LM94     LM99     LMM1     LMM2
  T     l=1      p=1      p=1      p=1      p=1
  100   0.5643   0.3208   0.3210   0.3208   0.3236
  200   0.5839   0.3188   0.3201   0.3218   0.3201
  500   0.5844   0.3103   0.3125   0.3110   0.3125

Simulation results based on 20,000 replications for the KPSS test, 10,000 replications for the Leybourne-McCabe tests.

Table 2.16
Size-Adjusted Power of KPSS and Leybourne-McCabe Tests with ARMA(1,1) Errors
(DGP: y_t = ρy_{t-1} + ε_t + θε_{t-1}, ρ = 1/3, θ = 1/2)
5% significance level

              KPSS    LM94    LM99    LMM1    LMM2
  T     λ     l=1     p=1     p=1     p=1     p=1
  100   0.001   0.104   0.100   0.101   0.103   0.100
        0.01    0.389   0.396   0.391   0.399   0.401
        1       0.870   0.988   0.101   0.989   0.989
        100     0.888   0.995   0.450   0.997   0.997
  200   0.001   0.236   0.189   0.189   0.189   0.189
        0.01    0.678   0.664   0.668   0.661   0.669
        1       0.963   0.999   0.030   0.999   0.999
        100     0.969   1.000   0.457   1.000   1.000
  500   0.001   0.617   0.617   0.617   0.616   0.617
        0.01    0.941   0.949   0.950   0.949   0.950
        1       0.998   1.000   0.002   1.000   1.000
        100     0.998   1.000   0.444   1.000   1.000

Simulation results based on 20,000 replications for the KPSS test, 10,000 replications for the Leybourne-McCabe tests.

Chapter 3
Performance of the KPSS and Leybourne-McCabe Tests when the Number of Lags Increases with the Sample Size

1. Introduction

In this chapter, we consider the properties of the KPSS and Leybourne-McCabe tests when we allow the number of lags to increase with the sample size. Here, as in the previous chapters, the "number of lags" is the parameter "l", the number of lagged terms in the long-run variance estimate, for KPSS; and it is the parameter "p", the assumed order of the AR polynomial, for the various Leybourne-McCabe tests.

In the original KPSS (1992) paper, the number of lags l was required to satisfy the requirements that, as T → ∞, l → ∞, but l/T → 0. This ensured consistency of the long-run variance estimator s²(l), so long as certain regularity conditions are satisfied. The Leybourne and McCabe papers (1994 and 1999) assumed a finite-order AR model. Our treatment of the Leybourne-McCabe tests in this chapter lets p → ∞, p/T → 0, as T → ∞, and is analogous to the way that p is treated in the Said-Dickey (1984) ADF unit root tests. Intuitively, we expect that an AR(p) model with large p can approximate any stationary process, subject to some regularity conditions.

In this chapter we perform simulations to see how the size and power of the various tests are affected when the number of lags grows with the sample size. Our data generating processes (DGPs) will be essentially the same as in the previous chapter. We will follow Schwert (1989) and many subsequent papers and consider three rules for choosing the number of lags: l_0 = 0, l_4 = integer[4(T/100)^(1/4)] and l_12 = integer[12(T/100)^(1/4)].

2. Theoretical issues

In this section, we discuss the known properties of the KPSS and Leybourne-McCabe tests when the number of lags is a function of the sample size. KPSS (1992) have already addressed the distribution theory of the KPSS test when l → ∞ as T → ∞. They use the 'weighted' long-run variance estimator s²(l) to construct the KPSS test, where s²(l) is defined in Chapter 1, equation (21). They show that s²(l) is a consistent estimate of the long-run variance σ² when the lag truncation number l satisfies the condition that l → ∞ but l/T → 0, as T → ∞. Then, under the null hypothesis of stationarity, the KPSS statistic η̂_μ is O_p(1) and it has the asymptotic distribution given in equation (22) of Chapter 1.
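To make the construction concrete, the statistic and the weighted long-run variance estimator s²(l) can be written in a few lines of code. This is a minimal sketch in Python rather than the GAUSS used for our simulations; the function name kpss_level is ours, and NumPy is assumed.

    import numpy as np

    def kpss_level(y, l):
        # KPSS level-stationarity statistic eta_mu, using the Bartlett
        # (Newey-West) weighted long-run variance estimator s^2(l).
        y = np.asarray(y, dtype=float)
        T = y.shape[0]
        e = y - y.mean()          # residuals from the level-only regression
        S = np.cumsum(e)          # partial sums S_t
        s2 = e @ e / T            # estimated gamma_0
        for s in range(1, l + 1):
            w = 1.0 - s / (l + 1.0)                # Bartlett weight
            s2 += 2.0 * w * (e[s:] @ e[:-s]) / T   # weighted gamma_s term
        return (S @ S) / (T ** 2 * s2)

Under the null the statistic is compared to the asymptotic critical values of KPSS (1992); the 5% asymptotic critical value for the level case is 0.463.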
Under the unit root alternative, η̂_μ is O_p(T/l). Thus the rate at which l grows affects the power of the test, even asymptotically. Correspondingly we might expect that the power of the KPSS test will grow slowly compared to other cases, where the lag truncation number l is assumed to be fixed (as in Chapter 2), or determined by lag selection rules (as in Chapter 4).

The distribution theory for the Leybourne-McCabe tests when we allow the number of lags to increase with the sample size is unknown. We can make an analogy, at least at an intuitive level, to the augmented Dickey-Fuller (ADF) tests of Said and Dickey (1984). They also let p → ∞, p/T → 0, as T → ∞, and they show that the ADF test is valid (in the sense that it has the same asymptotic distribution as the standard DF test does with white noise errors), provided that the errors satisfy some regularity conditions. Specifically, they assume that the errors follow a finite-order ARMA(p,q) model, but this is a stronger assumption than necessary. Intuitively, the assumed AR(p) model approximates any sufficiently regular stationary error very well, if p is large enough, and the asymptotic distribution theory is unaffected by letting p → ∞ so long as p does not grow too quickly.

We conjecture that similar results hold for the Leybourne-McCabe tests; that is, that the Leybourne-McCabe tests are valid, as long as the errors satisfy some regularity conditions and the number of lags increases to infinity but sufficiently slowly relative to the sample size. Unfortunately we have no proof of this conjecture. The difficulty, relative to the Said-Dickey analysis, is that the Leybourne-McCabe tests depend on the results of a numerical optimization and thus cannot be written as an explicit closed-form function of the data.

Under the alternative, the situation is even less clear. For the ADF test, Said and Dickey do not provide any results under the alternative, and it is apparently not known whether the power of the ADF test is different when p → ∞ than when a fixed value of p is correct and is used. For the Leybourne-McCabe tests, we note simply that, for fixed p, the asymptotic distribution under the alternative does not depend on p. Whether this carries over to the case that p → ∞ is not clear.

3. Simulations

In this section we provide some Monte Carlo evidence on the size and power of the KPSS and Leybourne-McCabe tests when the number of lags grows with the sample size. The design of the simulations is very similar to that of Chapter 2. The DGP is essentially equation (1) of Chapter 1, with β = 0. Thus

$$ y_t = \mu_t + u_t , \qquad \mu_t = \mu_{t-1} + v_t , $$

where the u_t are N(0, σ_u²), the v_t are N(0, σ_v²), and u and v are independent. As in Chapter 2, we consider the cases that the u_t are iid (white noise), but also cases where they are AR(1), MA(1) and ARMA(1,1) errors. The data contain no deterministic trend and we consider only the tests that allow for level but not trend (e.g., KPSS η̂_μ but not η̂_τ, and similarly for the Leybourne-McCabe tests). The number of replications is given below, but is generally 20,000 for KPSS and 10,000 for the Leybourne-McCabe tests. Simulations were performed using GAUSS 3.2.25 and the Maxlik optimization procedure.

We let the number of lags (l or p) follow the rules¹: l_0 = 0, l_4 = integer[4(T/100)^(1/4)], l_12 = integer[12(T/100)^(1/4)]. (A code sketch of this simulation design appears at the end of subsection 1 below.)

¹ The number of lagged terms according to the above rules is as follows.
  Lag truncation number or order of AR polynomial
  T      l = p = l_0    l = p = l_4    l = p = l_12
  50     0              3              10
  100    0              4              12
  200    0              4              14
  500    0              5              17

1) The KPSS and Leybourne-McCabe tests with iid errors

We first consider the size of the KPSS and Leybourne-McCabe tests in the presence of iid errors. The null hypothesis is σ_v² = 0 (λ = 0), and then y_t = u_t, so y_t is white noise. The tests are set at the 5% nominal significance level, and the results are based on 20,000 replications for the KPSS test and 10,000 replications for the Leybourne-McCabe tests.

Table 3.1 gives the size and power of the KPSS and the Leybourne-McCabe tests with various sample sizes (T) and values of λ = σ_v²/σ_u². Size corresponds to the entries for λ = 0. All of the tests have more or less correct size when l = p = l_0. For the case that the number of lags is l_4 or l_12, there are size distortions in opposite directions: the KPSS test rejects too seldom while the Leybourne-McCabe tests reject too often. For the KPSS test, the size distortions disappear fairly rapidly as T increases, even for l = l_12. For the various Leybourne-McCabe tests, the size distortions get smaller as T increases, which is consistent with our conjecture that the Leybourne-McCabe tests are valid when p grows with the sample size. However, the decrease in the size distortions of the Leybourne-McCabe tests as T grows is not very rapid. Correspondingly, the KPSS test with l = l_12 has very much smaller size distortions than the Leybourne-McCabe tests with p = l_12. For example, for T = 500, compare: KPSS, 0.046; LM94, 0.171; LM99, 0.190; LMM1, 0.174; LMM2, 0.190.

The power of the tests increases with T and generally with λ, except for the LM99 test. For l = p = l_0 (= 0), we have already discussed the results in Chapter 2. There is not much difference between tests. It is obvious that increasing the number of lags to l_4 or l_12 costs power. However, it is hard to separate changes in power from changes in size distortion in Table 3.1, so we will move on to a discussion of size-adjusted power. Table 3.2 gives the "actual" critical values, which would lead to size of 0.05 under the null in our simulations, and Table 3.3 gives size-adjusted power (power using the "actual" critical values).

Size-adjusted power increases with T and generally with λ. In the cases of l_4 or l_12 lags, the KPSS test generally has greater size-adjusted power when λ is small, while the LM94, LMM1 and LMM2 tests have greater size-adjusted power for larger values of λ. The LM99 test still does poorly when λ is large. The other Leybourne-McCabe tests sometimes show evidence of the "near cancellation" problem discussed in Chapter 1; power decreases as we move to the largest values of λ.

Increasing the number of lags from l_0 to l_4 to l_12 causes size-adjusted power to decrease, often substantially. That is, there is a loss in size-adjusted power from using too many lags, as was also found in Chapter 2. There is no uniform comparison of tests in terms of the power loss from increasing the number of lags. As we move from l_0 to l_4 to l_12 lags, sometimes the loss in size-adjusted power is larger for the KPSS test than for the Leybourne-McCabe tests (e.g., T = 500, λ = 1) and sometimes the reverse is true (e.g., T = 200, λ = 0.01). This is perhaps surprising, since in Chapter 2 it was more or less uniformly true that using too many lags affected the power of the KPSS test more than the power of the Leybourne-McCabe tests. However, in Chapter 2 we never had more than three lags, while now we have as many as 17 (for l = p = l_12, T = 500).
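The simulation design of this chapter is easy to state in code. The following sketch (Python rather than the GAUSS/Maxlik setup actually used; the function names are ours) draws from the local-level DGP described above and reproduces the lag-growth rules in the footnote.

    import numpy as np

    def lag_rule(T, k):
        # Schwert-type rule integer[k(T/100)^(1/4)]; k=4 gives l4, k=12 gives l12.
        return int(k * (T / 100.0) ** 0.25)

    def local_level_dgp(T, lam, err="iid", rho=1/3, theta=0.5, rng=None):
        # Draw y_t = mu_t + u_t, mu_t = mu_{t-1} + v_t, with sigma_u^2 = 1,
        # so lam = sigma_v^2 / sigma_u^2.  err selects the short-run process
        # for u_t (iid, "ar", "ma" or "arma"), with the rho and theta used in
        # the text.  (Start-up of u_t is handled crudely; fine for a sketch.)
        rng = np.random.default_rng() if rng is None else rng
        eps = rng.standard_normal(T)
        u = np.empty(T)
        u[0] = eps[0]
        for t in range(1, T):
            ar = rho * u[t - 1] if err in ("ar", "arma") else 0.0
            ma = theta * eps[t - 1] if err in ("ma", "arma") else 0.0
            u[t] = ar + eps[t] + ma
        mu = np.cumsum(np.sqrt(lam) * rng.standard_normal(T))
        return mu + u

    for T in (50, 100, 200, 500):   # reproduces the footnote table above
        print(T, 0, lag_rule(T, 4), lag_rule(T, 12))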
2) The KPSS and Leybourne-McCabe tests with AR(1) errors

Here we perform simulations with AR(1) errors of the form u_t = ρu_{t-1} + ε_t, where ε_t is normal white noise. We set ρ = 1/3 as in Chapter 2. We consider the KPSS test with l = l_4 and the LM tests with p = l_4. The KPSS test should in principle be able to accommodate AR(1) errors, if T is large enough, since the test is asymptotically valid under the null and consistent under the alternative when l grows with T. However, we presume that an AR error favors the Leybourne-McCabe tests, which are based on an AR specification. Table 3.4 gives the size and power of the various tests, for values of T and λ similar to those considered previously. Table 3.5 gives the actual critical values, while Table 3.6 gives size-adjusted power.

The KPSS test shows moderate size distortions. Interestingly, they do not decrease noticeably as T increases (they should vanish asymptotically). The Leybourne-McCabe tests do not show substantial size distortions, and this is perhaps surprising since they did in the white noise case. The power of the KPSS test compares favorably to the power of the LM tests (especially for small λ), but this is only due to the size distortion. When we look at size-adjusted power, the KPSS test is dominated by the Leybourne-McCabe tests, all of which are fairly similar. (The exception to these statements is that the LM99 test still does badly when λ is large.) For example, for λ = 0.001, the power of the KPSS test is 0.193, 0.422, and 0.785 for T = 100, 200, and 500, respectively. However, its size-adjusted power drops to 0.163, 0.379 and 0.746, and now is less than for its Leybourne-McCabe counterparts. For example, for the LMM2 test, size-adjusted power is 0.193, 0.396 and 0.777 for λ = 0.001 and T = 100, 200 and 500; and the advantage of LMM2 over KPSS is greater when λ is larger. However, LM99 still does poorly for very large λ, and the other Leybourne-McCabe tests show modest decreases in power for very large λ, due to the "near cancellation" problem.

Comparing the various Leybourne-McCabe tests, it seems that the LMM2 test generally has the largest size-adjusted power, but the differences are small. This is similar to what was found in Chapter 2, with p fixed. With p fixed, this was a surprising result since LM94 is O_p(T) while the other Leybourne-McCabe tests are O_p(T²). When p increases with T, the asymptotic properties of the tests are unknown, so there is no theory for our results to disagree with. However, at an intuitive level, the degree of similarity between LM94 and the other Leybourne-McCabe tests is still surprising.

3) The KPSS and Leybourne-McCabe tests with MA(1) errors

Now we perform simulations with MA(1) errors of the form y_t = ε_t + θε_{t-1}, where ε_t is normal white noise. We pick θ = 0.5 for the same reason we discussed in Chapter 2. As in the previous section, we consider the KPSS test with l = l_4 and the Leybourne-McCabe tests with p = l_4. The KPSS test is asymptotically valid under the null, and consistent under the alternative, while the asymptotic properties of the Leybourne-McCabe tests are unknown. We presume that an MA error favors the KPSS test over the Leybourne-McCabe tests, as explained in Chapter 2. Our results are given in Tables 3.7-3.9, with the same format as before.

Consider first the size of the tests (results for λ = 0 in Table 3.7). The KPSS test has a modest size distortion (rejection rate of 0.06 instead of 0.05), which does not clearly diminish as T increases.
The Leybourne-McCabe tests have more substantial size distortions, but these do clearly diminish as T increases. For T = 500, the degree of size distortion is very minor for all of the tests (KPSS and all versions of Leybourne-McCabe).

In terms of size-adjusted power (Table 3.9), all of the Leybourne-McCabe tests are quite similar to each other (except, again, LM99 when λ is large). For T = 500, the size-adjusted power of the KPSS test is also quite similar. For smaller values of T, the KPSS test is better than the LM tests when power is low, and worse when power is high. The latter result is somewhat unexpected, since in Chapter 2 (with the number of lags fixed) the KPSS test dominated the Leybourne-McCabe tests with MA(1) errors. The difference may be that in Chapter 2 we used an unweighted long-run variance estimator, whereas here we use the Newey-West weights. With MA(1) errors, only the first autocorrelation is non-zero, and the Newey-West weights downweight this inappropriately except when l is quite large.

4) The KPSS and Leybourne-McCabe tests with ARMA(1,1) errors

Here we perform simulations using ARMA(1,1) errors of the form y_t = ρy_{t-1} + ε_t + θε_{t-1}, where ρ = 1/3 and θ = 1/2. As before, we choose these specific values of the AR and MA parameters to equate the contribution of the AR and MA terms to the "long-run variance" of the ARMA(1,1) error series. We consider the KPSS test with l = l_4 and the Leybourne-McCabe tests with p = l_4. Our results are given in Tables 3.10-3.12, which have the same format as the previous tables for the AR and MA cases.

The KPSS test shows moderate size distortion (rejection rate of about 0.08 instead of 0.05), and this does not clearly decrease as T increases. This size distortion is only very slightly smaller than was found in Chapter 2, with the same DGP, for the KPSS test with l = 1. The Leybourne-McCabe tests have slightly greater size distortions than the KPSS test when T = 100, but these size distortions do decrease as T increases, and have disappeared for T = 500. This result is quite different from what was found in Table 2.14 of Chapter 2, where the Leybourne-McCabe tests with p = 1 underrejected considerably (rejection rate of about 0.015) and increasing T did not improve things. These results are also more optimistic than the corresponding results in Table 3.1 for the case of white noise errors. The latter result is surprising and deserves further study.

The size-adjusted power of the KPSS test with l = l_4 is less than that of the Leybourne-McCabe tests with p = l_4 (again, except LM99 when λ is large). This difference in size-adjusted power is larger than was found in Table 2.16 of Chapter 2.

4. Conclusions

In this chapter we considered the KPSS and Leybourne-McCabe tests that use a number of lags that increases with the sample size T. We investigated the size and power characteristics of the tests via simulations. In the simulations our data-generating processes included white noise, AR(1), MA(1), and ARMA(1,1) errors. Our main conclusions are as follows.

1. The LM99 test is still not recommended, due to its poor power when λ is large. This case (LM99 test, λ large) is an exception to the remaining conclusions for the Leybourne-McCabe tests.

2. There is not much difference in power between the LM94 test and the LM99 test or its modifications (LMM1, LMM2). This is similar to the results we have seen in Chapter 2.

3. When p increases with T, the asymptotic properties of the Leybourne-McCabe tests are unknown.
However, our results seem to be consistent with the conjecture that the Leybourne-McCabe tests are asymptotically valid under the null and consistent under the alternative.

4. Once again we can argue that the white noise case is a fair setting for comparison of the KPSS test to the Leybourne-McCabe tests. All of the tests have size distortions in the l_4 or l_12 cases: the KPSS test underrejects while the Leybourne-McCabe tests overreject. The size distortions are much more severe for the Leybourne-McCabe tests, however. In our opinion, they are severe enough to argue against the use of the Leybourne-McCabe tests with the number of lags increasing with T (at least, at the rate we consider, which is proportional to T^(1/4)). The Leybourne-McCabe tests are more powerful, but this is mostly due to the size distortions. Size-adjusted power favors the Leybourne-McCabe tests in general, but not uniformly.

5. For all of the tests, an unnecessarily large number of lags costs power. This is generally but not uniformly more true for the KPSS test than for the Leybourne-McCabe tests.

6. The cases with autocorrelated errors are generally speaking more favorable to the Leybourne-McCabe tests than the white noise case. For reasons that we do not understand, the Leybourne-McCabe tests show smaller size distortions with autocorrelated errors than with white noise errors. This is true even when the errors are not AR. Power considerations favor the KPSS test in the MA case, but favor the Leybourne-McCabe tests in the AR and ARMA cases. The ARMA cases, like the white noise cases, were set up to be a fair setting for comparison of the KPSS and Leybourne-McCabe tests, and the generally superior performance of the Leybourne-McCabe tests there balances their poor performance in the case of white noise errors. Clearly more work is needed to determine which types of errors favor which tests.

Table 3.1
Size and Power of KPSS and Leybourne-McCabe Tests when Number of Lags Increases with Sample Size
(DGP: iid errors, no time trend)
5% significance level

l = p = l_0 = 0
  T     λ       KPSS    LM94    LM99    LMM1    LMM2
  100   0       0.049   0.048   0.054   0.040   0.054
        0.001   0.168   0.170   0.179   0.156   0.179
        0.01    0.587   0.601   0.626   0.590   0.626
        1       0.989   0.988   0.999   0.996   1.000
        100     0.994   0.994   0.539   0.999   1.000
  200   0       0.051   0.049   0.050   0.044   0.050
        0.001   0.399   0.398   0.404   0.386   0.404
        0.01    0.846   0.854   0.870   0.853   0.870
        1       0.999   1.000   1.000   1.000   1.000
        100     1.000   0.998   0.552   1.000   1.000
  500   0       0.050   0.050   0.051   0.045   0.051
        0.001   0.788   0.782   0.788   0.779   0.778
        0.01    0.997   0.987   0.991   0.988   0.991
        1       1.000   1.000   1.000   1.000   1.000
        100     1.000   1.000   0.582   1.000   1.000

l = p = l_4 = integer[4(T/100)^(1/4)]
  T     λ       KPSS    LM94    LM99    LMM1    LMM2
  100   0       0.043   0.103   0.129   0.104   0.131
        0.001   0.147   0.215   0.243   0.211   0.244
        0.01    0.508   0.572   0.597   0.573   0.601
        1       0.818   0.912   0.669   0.919   0.924
        100     0.826   0.917   0.433   0.923   0.927
  200   0       0.049   0.074   0.084   0.071   0.084
        0.001   0.372   0.402   0.418   0.397   0.418
        0.01    0.776   0.833   0.846   0.833   0.846
        1       0.943   0.976   0.800   0.976   0.977
        100     0.945   0.964   0.464   0.965   0.966
  500   0       0.048   0.062   0.065   0.059   0.065
        0.001   0.757   0.779   0.786   0.777   0.786
        0.01    0.962   0.983   0.986   0.984   0.986
        1       0.992   0.999   0.970   0.999   0.999
        100     0.992   0.985   0.500   0.985   0.985

Table 3.1 (Continued)
Size and Power of KPSS and Leybourne-McCabe Tests when Number of Lags Increases with Sample Size
(DGP: iid errors, no time trend)
5% significance level

l = p = l_12 = integer[12(T/100)^(1/4)]
  T     λ       KPSS    LM94    LM99    LMM1    LMM2
  100   0       0.029   0.278   0.293   0.292   0.322
        0.001   0.100   0.322   0.341   0.329   0.356
        0.01    0.367   0.581   0.566   0.587   0.606
        1       0.579   0.895   0.652   0.902   0.907
        100     0.584   0.884   0.395   0.889   0.892
  200   0       0.041   0.228   0.252   0.235   0.259
        0.001   0.314   0.474   0.487   0.476   0.490
        0.01    0.626   0.771   0.759   0.774   0.780
        1       0.725   0.987   0.810   0.978   0.979
        100     0.726   0.940   0.445   0.941   0.941
  500   0       0.046   0.171   0.190   0.174   0.190
        0.001   0.682   0.776   0.783   0.776   0.783
        0.01    0.865   0.960   0.960   0.960   0.962
        1       0.901   0.999   0.971   0.999   0.999
        100     0.901   0.979   0.510   0.978   0.978

Simulation results based on 20,000 replications for the KPSS tests, 10,000 replications for the LM tests.

Table 3.2
Actual Critical Values of KPSS and Leybourne-McCabe Tests when Number of Lags Increases with Sample Size
(DGP: iid errors, no time trend)
5% significance level

  T     KPSS    LM94    LM99    LMM1    LMM2
l = p = l_0 = 0
  100   0.459   0.457   0.471   0.432   0.471
  200   0.451   0.457   0.462   0.437   0.463
  500   0.462   0.462   0.465   0.447   0.465
l = p = l_4 = integer[4(T/100)^(1/4)]
  100   0.459   0.667   0.876   0.709   0.892
  200   0.441   0.548   0.602   0.543   0.602
  500   0.460   0.511   0.521   0.499   0.521
l = p = l_12 = integer[12(T/100)^(1/4)]
  100   0.431   1.797   6.518   1.827   13.863
  200   0.463   1.523   2.774   2.239   3.366
  500   0.449   1.026   1.296   1.134   1.296

Simulation results based on 20,000 replications for the KPSS tests, 10,000 replications for the LM tests.

Table 3.3
Size-Adjusted Power of KPSS and Leybourne-McCabe Tests when Number of Lags Increases with Sample Size
(DGP: iid errors, no time trend)
5% significance level

l = p = l_0 = 0
  T     λ       KPSS    LM94    LM99    LMM1    LMM2
  100   0.001   0.166   0.172   0.175   0.171   0.175
        0.01    0.590   0.606   0.621   0.610   0.621
        1       0.990   0.988   1.000   0.997   1.000
        100     0.995   0.994   0.537   1.000   1.000
  200   0.001   0.403   0.401   0.405   0.404   0.404
        0.01    0.853   0.855   0.870   0.863   0.870
        1       0.999   1.000   1.000   1.000   1.000
        100     1.000   0.998   0.552   1.000   1.000
  500   0.001   0.790   0.783   0.787   0.785   0.787
        0.01    0.987   0.987   0.991   0.989   0.991
        1       1.000   1.000   1.000   1.000   1.000
        100     1.000   1.000   0.582   1.000   1.000

l = p = l_4 = integer[4(T/100)^(1/4)]
  T     λ       KPSS    LM94    LM99    LMM1    LMM2
  100   0.001   0.143   0.130   0.118   0.123   0.116
        0.01    0.511   0.478   0.451   0.474   0.449
        1       0.818   0.881   0.643   0.899   0.898
        100     0.830   0.890   0.414   0.909   0.907
  200   0.001   0.475   0.351   0.341   0.349   0.341
        0.01    0.849   0.804   0.808   0.809   0.808
        1       0.974   0.973   0.797   0.975   0.975
        100     0.975   0.961   0.461   0.963   0.963
  500   0.001   0.761   0.758   0.762   0.761   0.762
        0.01    0.963   0.980   0.982   0.981   0.982
        1       0.991   0.999   0.970   0.999   0.999
        100     0.992   0.984   0.499   0.984   0.984

Table 3.3 (Continued)
Size-Adjusted Power of KPSS and Leybourne-McCabe Tests when Number of Lags Increases with Sample Size
(DGP: iid errors, no time trend)
5% significance level

l = p = l_12 = integer[12(T/100)^(1/4)]
  T     λ       KPSS    LM94    LM99    LMM1    LMM2
  100   0.001   0.139   0.088   0.056   0.058   0.043
        0.01    0.425   0.298   0.194   0.225   0.166
        1       0.618   0.659   0.475   0.691   0.647
        100     0.623   0.668   0.275   0.747   0.708
  200   0.001   0.340   0.208   0.154   0.167   0.135
        0.01    0.642   0.581   0.514   0.547   0.498
        1       0.747   0.920   0.772   0.941   0.934
        100     0.744   0.875   0.395   0.892   0.884
  500   0.001   0.693   0.616   0.579   0.599   0.580
        0.01    0.870   0.921   0.916   0.921   0.917
        1       0.903   0.999   0.971   0.999   0.999
        100     0.903   0.969   0.499   0.969   0.967

Simulation results based on 20,000 replications for the KPSS tests, 10,000 replications for the LM tests.

Table 3.4
Size and Power of KPSS and Leybourne-McCabe Tests with AR(1) Errors
(DGP: y_t = ρy_{t-1} + ε_t, ρ = 1/3)
5% significance level

              KPSS    LM94    LM99    LMM1    LMM2
  T     λ     l=l_4   p=l_4   p=l_4   p=l_4   p=l_4
  100   0       0.067   0.058   0.062   0.061   0.062
        0.001   0.193   0.200   0.211   0.201   0.211
        0.01    0.544   0.571   0.589   0.577   0.593
        1       0.825   0.943   0.617   0.949   0.953
        100     0.835   0.936   0.371   0.942   0.944
  200   0       0.071   0.054   0.056   0.055   0.056
        0.001   0.422   0.397   0.405   0.398   0.405
        0.01    0.804   0.813   0.827   0.817   0.827
        1       0.946   0.986   0.761   0.987   0.987
        100     0.950   0.968   0.399   0.969   0.969
  500   0       0.070   0.054   0.054   0.053   0.054
        0.001   0.785   0.777   0.781   0.777   0.781
        0.01    0.967   0.978   0.981   0.979   0.987
        1       0.991   1.000   0.963   1.000   1.000
        100     0.993   0.987   0.402   0.987   0.988

Simulation results based on 20,000 replications for the KPSS tests, 10,000 replications for the LM tests.

Table 3.5
Actual Critical Values of KPSS and Leybourne-McCabe Tests with AR(1) Errors
(DGP: y_t = ρy_{t-1} + ε_t, ρ = 1/3)
5% significance level

        KPSS     LM94     LM99     LMM1     LMM2
  T     l=l_4    p=l_4    p=l_4    p=l_4    p=l_4
  100   0.5033   0.4909   0.5084   0.4997   0.5084
  200   0.5204   0.4739   0.4810   0.4792   0.4810
  500   0.5360   0.4745   0.4749   0.4741   0.4749

Simulation results based on 20,000 replications for the KPSS tests, 10,000 replications for the LM tests.

Table 3.6
Size-Adjusted Power of KPSS and Leybourne-McCabe Tests with AR(1) Errors
(DGP: y_t = ρy_{t-1} + ε_t, ρ = 1/3)
5% significance level

              KPSS    LM94    LM99    LMM1    LMM2
  T     λ     l=l_4   p=l_4   p=l_4   p=l_4   p=l_4
  100   0.001   0.163   0.187   0.192   0.185   0.193
        0.01    0.513   0.558   0.571   0.562   0.575
        1       0.796   0.939   0.615   0.949   0.950
        100     0.806   0.933   0.369   0.939   0.942
  200   0.001   0.379   0.390   0.396   0.389   0.396
        0.01    0.775   0.809   0.822   0.811   0.822
        1       0.930   0.985   0.760   0.987   0.987
        100     0.937   0.968   0.398   0.969   0.969
  500   0.001   0.746   0.771   0.777   0.773   0.777
        0.01    0.955   0.977   0.980   0.979   0.980
        1       0.986   1.000   0.963   1.000   1.000
        100     0.987   0.987   0.402   0.987   0.988

Simulation results based on 20,000 replications for the KPSS tests, 10,000 replications for the LM tests.

Table 3.7
Size and Power of KPSS and Leybourne-McCabe Tests with MA(1) Errors
(DGP: y_t = ε_t + θε_{t-1}, θ = 0.5)
5% significance level
              KPSS    LM94    LM99    LMM1    LMM2
  T     λ     l=l_4   p=l_4   p=l_4   p=l_4   p=l_4
  100   0       0.056   0.129   0.149   0.141   0.166
        0.001   0.109   0.169   0.193   0.171   0.196
        0.01    0.388   0.443   0.468   0.446   0.473
        1       0.813   0.924   0.638   0.931   0.934
        100     0.824   0.920   0.429   0.926   0.929
  200   0       0.060   0.086   0.098   0.087   0.101
        0.001   0.253   0.282   0.294   0.278   0.294
        0.01    0.677   0.713   0.728   0.713   0.728
        1       0.944   0.977   0.740   0.978   0.979
        100     0.944   0.962   0.455   0.962   0.963
  500   0       0.059   0.063   0.066   0.062   0.067
        0.001   0.626   0.617   0.622   0.615   0.622
        0.01    0.933   0.943   0.947   0.944   0.947
        1       0.991   1.000   0.981   1.000   1.000
        100     0.992   0.984   0.491   0.984   0.984

Simulation results based on 20,000 replications for the KPSS tests, 10,000 replications for the LM tests.

Table 3.8
Actual Critical Values of KPSS and Leybourne-McCabe Tests with MA(1) Errors
(DGP: y_t = ε_t + θε_{t-1}, θ = 0.5)
5% significance level

        KPSS     LM94     LM99     LMM1     LMM2
  T     l=l_4    p=l_4    p=l_4    p=l_4    p=l_4
  100   0.4721   0.7883   1.1373   0.9951   1.6230
  200   0.4896   0.5884   0.6457   0.6024   0.6626
  500   0.4908   0.5112   0.5198   0.5075   0.5225

Simulation results based on 20,000 replications for the KPSS tests, 10,000 replications for the LM tests.

Table 3.9
Size-Adjusted Power of KPSS and Leybourne-McCabe Tests with MA(1) Errors
(DGP: y_t = ε_t + θε_{t-1}, θ = 0.5)
5% significance level

              KPSS    LM94    LM99    LMM1    LMM2
  T     λ     l=l_4   p=l_4   p=l_4   p=l_4   p=l_4
  100   0.001   0.103   0.073   0.062   0.058   0.038
        0.01    0.386   0.305   0.269   0.269   0.207
        1       0.805   0.880   0.608   0.897   0.886
        100     0.817   0.878   0.401   0.897   0.886
  200   0.001   0.238   0.212   0.202   0.203   0.196
        0.01    0.663   0.659   0.657   0.656   0.652
        1       0.937   0.974   0.737   0.976   0.976
        100     0.938   0.956   0.448   0.956   0.957
  500   0.001   0.604   0.590   0.592   0.590   0.591
        0.01    0.921   0.936   0.940   0.938   0.939
        1       0.989   1.000   0.981   1.000   1.000
        100     0.990   0.983   0.490   0.983   0.983

Simulation results based on 20,000 replications for the KPSS tests, 10,000 replications for the LM tests.

Table 3.10
Size and Power of KPSS and Leybourne-McCabe Tests with ARMA(1,1) Errors
(DGP: y_t = ρy_{t-1} + ε_t + θε_{t-1}, ρ = 1/3, θ = 1/2)
5% significance level

              KPSS    LM94    LM99    LMM1    LMM2
  T     λ     l=l_4   p=l_4   p=l_4   p=l_4   p=l_4
  100   0       0.073   0.082   0.089   0.087   0.092
        0.001   0.138   0.164   0.174   0.168   0.178
        0.01    0.425   0.483   0.484   0.495   0.511
        1       0.818   0.949   0.555   0.956   0.960
        100     0.830   0.938   0.372   0.943   0.945
  200   0       0.081   0.068   0.072   0.070   0.072
        0.001   0.300   0.280   0.289   0.282   0.289
        0.01    0.706   0.727   0.733   0.730   0.738
        1       0.942   0.989   0.704   0.990   0.991
        100     0.946   0.966   0.396   0.967   0.968
  500   0       0.079   0.047   0.048   0.047   0.047
        0.001   0.655   0.609   0.615   0.610   0.615
        0.01    0.937   0.940   0.944   0.941   0.944
        1       0.992   1.000   0.961   1.000   1.000
        100     0.992   0.988   0.393   0.988   0.988

Simulation results based on 20,000 replications for the KPSS tests, 10,000 replications for the LM tests.

Table 3.11
Actual Critical Values of KPSS and Leybourne-McCabe Tests with ARMA(1,1) Errors
(DGP: y_t = ρy_{t-1} + ε_t + θε_{t-1}, ρ = 1/3, θ = 1/2)
5% significance level

        KPSS     LM94     LM99     LMM1     LMM2
  T     l=l_4    p=l_4    p=l_4    p=l_4    p=l_4
  100   0.5357   0.6233   0.7115   0.6713   0.7407
  200   0.5530   0.5294   0.5501   0.5377   0.5505
  500   0.5484   0.4483   0.4534   0.4516   0.4528

Simulation results based on 20,000 replications for the KPSS tests, 10,000 replications for the LM tests.
Table 3.12
Size-Adjusted Power of KPSS and Leybourne-McCabe Tests with ARMA(1,1) Errors
(DGP: y_t = ρy_{t-1} + ε_t + θε_{t-1}, ρ = 1/3, θ = 1/2)
5% significance level

              KPSS    LM94    LM99    LMM1    LMM2
  T     λ     l=l_4   p=l_4   p=l_4   p=l_4   p=l_4
  100   0.001   0.094   0.119   0.117   0.117   0.116
        0.01    0.374   0.405   0.395   0.410   0.415
        1       0.768   0.933   0.547   0.947   0.951
        100     0.785   0.920   0.362   0.932   0.933
  200   0.001   0.239   0.245   0.245   0.243   0.245
        0.01    0.650   0.699   0.703   0.701   0.708
        1       0.916   0.987   0.703   0.989   0.989
        100     0.924   0.963   0.393   0.964   0.965
  500   0.001   0.603   0.619   0.622   0.618   0.622
        0.01    0.919   0.942   0.944   0.943   0.945
        1       0.986   1.000   0.961   1.000   1.000
        100     0.986   0.988   0.393   0.988   0.988

Simulation results based on 20,000 replications for the KPSS tests, 10,000 replications for the LM tests.

Chapter 4
Performance of the KPSS and Leybourne-McCabe Tests with Model Selection Rules

1. Introduction

In this chapter we consider the performance of the KPSS and Leybourne-McCabe tests when the number of lags is determined by a model selection rule. In the previous chapters, we either assumed that the number of lags is finite and known a priori (Chapter 2), or we let the number of lags be a function of the sample size (Chapter 3). In this chapter we assume that the true number of lags is unknown, but we have a finite upper bound for it. The number of lags to be used is then the outcome of a general-to-specific (G-S) testing procedure.

In the next section, we consider the Leybourne-McCabe tests, for which the "number of lags" is the order "p" of the autoregressive model for Δy_t. Leybourne and McCabe (1999) suggested a model selection procedure that is based on G-S sequential testing of the AR coefficients. Their approach is analogous to those of Hall (1994) and Ng and Perron (1995), which also used a G-S testing procedure to select the AR lag order in the augmented Dickey-Fuller regression. The LM99 procedure is "consistent" in the sense that, as T → ∞, the probability approaches one that the number of lags chosen is at least as large as the true number. However, the probability of overfitting does not go to zero. We suggest the possibility of making the critical value for the pretest depend on the sample size, so that the model selection procedure is consistent in the stronger sense that it picks the true number of lags with probability one asymptotically.

In the following section, we propose a model (lag) selection procedure for the KPSS test. This is based on G-S sequential testing of the correlations of the first-differenced residuals. Finally, we provide some Monte Carlo evidence on the finite-sample properties of the KPSS and Leybourne-McCabe tests when these model selection procedures are used to pick the number of lags. We do this for a variety of different DGPs, similar to those considered in Chapters 2 and 3.

2. A consistent model selection rule for the Leybourne-McCabe tests

We first discuss the model selection rule suggested by Leybourne and McCabe (1999). To do so, we consider the ARIMA(p,1,1) model

$$ \Delta y_t = \beta + \sum_{i=1}^{p} \phi_i \,\Delta y_{t-i} + e_t - \theta e_{t-1} \qquad (1) $$

as previously given in equation (26) of Chapter 1. We assume that this model holds for some "true value" of p, say p_0, with φ_{p_0} ≠ 0. While p_0 is unknown, there is a known finite upper bound p_max (p_max ≥ p_0).

Selecting the order of the AR component is done sequentially, using a general-to-specific (G-S) strategy. We start by estimating the model (1) with p = p_max, and testing the null hypothesis that φ_{p_max} = 0.
The test statistic for this pretest is defined as

$$ Z(p) = T^{1/2}\,\hat\phi_p\,\hat\theta \;\to\; N(0,1) , $$

where φ̂_p and θ̂ are the quasi-ML estimates from the ARIMA(p,1,1) model. If we reject the null, we pick p_max as the number of lags for the Leybourne-McCabe tests. If we do not reject the null, we reduce p by one and repeat the test until we can reject the null. Thus the number of lags used will be the largest value of p such that we can reject the null that φ_p = 0. If the null is never rejected, we set p = 0. Leybourne and McCabe (1999, p. 267) show that this pretest is "consistent" under both the stationary null and the unit root alternative. In addition, Leybourne and McCabe show that the LM99 test is asymptotically not affected by the pretest. This implies that, with the number of lags p chosen by their model selection rule, we can proceed in large samples as if the chosen lag p were equal to the true order.

It is instructive to note what it means for the pretest to be "consistent". For p = p_0, Leybourne and McCabe note that Z(p) is O_p(T^(1/2)) under both the null of stationarity and the alternative of a unit root. Thus asymptotically the probability of picking a model that is false, in the sense that p < p_0, goes to zero. However, even asymptotically, there is a positive probability of overfitting (picking p > p_0). For example, if p_0 = 1 and p_max = 5, and if the pretests are at the 10% α-level, the probability of picking p > 1 is (asymptotically) equal to 1 − (0.9)⁴ = 0.344.¹

However, it is not hard to modify this model selection procedure so that it picks p = p_0 with probability one (asymptotically). We simply need to use critical values for the pretest that depend on the sample size. If the critical value is C, as T → ∞ we need to require that C → ∞ but C/√T → 0. Since Z(p) is O_p(1) under the hypothesis that p > p_0 (φ_p = 0), the requirement that C → ∞ will ensure that the probability of rejecting the hypothesis that φ_p = 0 will go to zero as T → ∞. However, since Z(p) is O_p(T^(1/2)) under the hypothesis that p = p_0 (so φ_{p_0} ≠ 0), the requirement that C/√T → 0 will ensure that the probability of rejecting the hypothesis that φ_{p_0} = 0 will go to one as T → ∞.

¹ This calculation is justified by Theorem 3 of Leybourne and McCabe (1999), which shows the asymptotic independence of the statistics Z(p) for different p > p_0.

In our simulations we will consider critical values of the form C = kT^(1/4) for some k > 0. These satisfy the conditions of the previous paragraph for consistent model selection. Implicit in the theory of statistical tests is the notion that the size of the test should approach zero as the sample size approaches infinity; our approach of letting the critical values grow with the sample size accomplishes this. However, to ensure a consistent model selection rule it is much more convenient to specify the rate at which the critical value changes with the sample size than it would be to specify the rate at which the size of the test changes with the sample size. We know the order in probability of the test statistic under the alternative, and the only important consideration is how this compares to the rate of growth of the critical value. (A code sketch of this sequential pretest is given in Section 4 below.)

3. A consistent model selection rule for the KPSS test

In this section we propose a model selection rule for the KPSS test. For the KPSS test we assume the model

$$ y_t = \beta t + \mu_t + u_t \qquad (2) $$

as previously given in equation (1) of Chapter 1. Here μ_t is a random walk and u_t is a short-memory process. We now make the parametric assumption that u_t is MA(l) for some "true value" of l, say l_0. While l_0 is unknown, there is a known finite upper bound l_max (l_max ≥ l_0). Our model selection procedure is based on the autocorrelations of Δy_t.
Clearly

$$ \Delta y_t = \beta + w_t , \quad \text{where } w_t = v_t + \Delta u_t , \qquad (3) $$

and where v_t = Δμ_t. Now, if we suppose the error term u_t follows an MA(l) process, the maximum order of non-zero autocorrelation of the series w_t is (l+1). So we can do model selection based on significance tests of the correlations of the w_t.

Once again we adopt a G-S strategy. Let γ_j be the jth autocovariance of w_t. We start with l = l_max, and correspondingly we test the hypothesis that γ_{l+1} = 0. If we reject this hypothesis, we pick l_max as the number of lags. If we do not reject the null, we reduce l by one and repeat the procedure until we can reject the null. Thus the chosen value of l is one less than the order of the largest significant autocorrelation of Δy_t. If no autocorrelations of order j ≥ 2 are significant, we pick l = 0.

To carry out the pretest of the hypothesis that γ_j = 0, for any value of j ≥ 2, we first define the residuals ŵ_t = Δy_t − β̂, where β̂ (the sample mean of Δy_t) is an estimate of β that is consistent under the null σ_v² = 0 or the alternative σ_v² > 0. Then we use the simple test of Bartlett (1946):

$$ \sqrt{T}\,\hat\rho_j \;\to\; N(0,1) , \qquad (4A) $$

where

$$ \hat\rho_j = \sum_{t=j+1}^{T} \hat w_t \hat w_{t-j} \Big/ \sum_{t=1}^{T} \hat w_t^2 . \qquad (4B) $$

Now we consider the consistency of our model selection rule. Consider first the case of a fixed critical value (equivalently, a fixed α-level). If l_0 is the true lag length, so that γ_{l_0+1} ≠ 0, we will reject the hypothesis that γ_{l_0+1} = 0 with probability one, asymptotically. Therefore, our rule is "consistent" in the sense that asymptotically it will choose a value of l at least as large as l_0. However, even asymptotically there is a positive probability of overfitting (picking l > l_0). As in the previous section, we can also consider critical values that depend on the sample size. If the critical value C satisfies C → ∞, C/√T → 0 as T → ∞, our selection rule will be consistent in the strong sense that it will pick l = l_0 with a probability that goes to one asymptotically. An example of a possible choice is C = kT^(1/4) for some k > 0. See Appendix II for further details. (A code sketch of this rule is also given in Section 4.)

4. Simulations

In this section, we provide some Monte Carlo evidence on the size and power of the KPSS and Leybourne-McCabe tests with model selection rules. Simulations were performed using GAUSS 3.2.25 and the Maxlik optimization procedure. The DGP is equation (1) of Chapter 1, with β = 0. Thus

$$ y_t = \mu_t + u_t , \qquad \mu_t = \mu_{t-1} + v_t , $$

where the u_t are iid N(0, σ_u²), the v_t are iid N(0, σ_v²), and u and v are independent. The data contain no deterministic trend and we consider only the tests that allow for level but not trend (e.g., KPSS η̂_μ but not η̂_τ, and similarly for the Leybourne-McCabe tests). As described in previous chapters, white noise errors are used for a fair comparison of the KPSS and Leybourne-McCabe tests. We will also consider MA, AR and ARMA errors. Again the primary point here will be to see how the various tests perform when they are based on an incorrectly specified model.

For our model selection procedures, we will use "fixed" critical values for our pretests, with critical values of ±1.65, corresponding to a nominal significance level of 10%. We will also consider "data dependent" critical values of the form C = (T/100)^(1/4), which corresponds to C = kT^(1/4) with k = 100^(−1/4) = 1/√10 = 0.3162.

1) The KPSS test with iid errors - Fixed critical values

We first consider the KPSS test in the presence of white noise errors. We begin by reporting the frequency distribution of the lags chosen by the model selection rule, under the null hypothesis and under the unit root alternative. The results are from 10,000 replications.
We use the 10% significance level (i.e., critical value = 1.65) for the pretests, and the upper bound l_max is set to three. Tables 4.1 and 4.2 give the simulation results. The model selection rule works well for large values of λ = σ_v²/σ_u² and T. For example, for λ = 10,000 and T = 500, the frequency of lag selection is (0.7341, 0.0804, 0.0930, 0.0925) for l = 0, 1, 2, 3, respectively. This agrees quite closely with the frequencies (0.729, 0.081, 0.090, 0.10) predicted by asymptotic theory. However, the model selection rule does not work well under the null hypothesis (λ = 0) or generally for small values of λ (say, λ ≤ 1). The frequency of choosing the upper bound lag (l_max = 3) is greater than 0.1, and it shows no sign of approaching 0.1 as T → ∞. We do not understand this result.

Table 4.3 gives the size of the KPSS test. We use various values of the upper bound l_max, namely 3, 5, and 10. There are size distortions for all values of l_max and, as expected, the size distortions are greater for larger l_max. However, these size distortions disappear quite rapidly as we increase the sample size (T) for all values of l_max.

Table 4.4 gives the power of the KPSS test. Power increases with T and λ, but decreases with l_max. This is also as expected. When we compare these results to the power of the KPSS test with the true number of lags (l = 0), as in Table 2.2 of Chapter 2, the power of the test with the model selection rule is clearly less. For example, for T = 200 and λ = 100, the power of the test is 1 for l = 0, 0.984 for l_max = 3, 0.933 for l_max = 5, and 0.851 for l_max = 10. Clearly this power loss reflects the positive probability of overspecifying l. This is the cost of using a fixed critical value, and it motivates our consideration of data-dependent critical values for the pretest.

We now proceed to consider size-adjusted power. Table 4.6 provides the size-adjusted power of the KPSS test based on the actual critical values in Table 4.5. The results are quite similar to those in Table 4.4. Power increases with T and λ, and decreases with l_max. The only substantial difference is that size-adjusted power (Table 4.6) is less than power (Table 4.4) for small values of T.

2) The KPSS test with iid errors - Data-dependent critical values

Now we consider the KPSS test when the model selection rule is performed with the "data dependent" critical values that increase with the sample size T. The critical values are C = (T/100)^(1/4). We note that these critical values are of the form C = kT^(1/4), and our choice of k is essentially arbitrary. It yields a critical value of 1.65 (as used in the previous section) for T = 741 (approximately). For small values of T, therefore, we will have smaller critical values than 1.65, so we will choose larger values of l than in the previous section. Conversely, for T greater than 741, we will choose smaller values of l. As T → ∞, we will choose l = 0 (the true value) with a probability that approaches one.
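The general-to-specific rule of Section 3 is compact enough to state in code, with either the fixed critical value 1.65 or the data-dependent value (T/100)^(1/4) just described. A minimal sketch (Python; the function name is ours, and NumPy is assumed):

    import numpy as np

    def select_l_kpss(y, l_max, data_dependent=True):
        # Choose l for the KPSS test by the G-S rule of Section 3: test
        # gamma_{l+1} = 0 via sqrt(T)*rho_hat ~ N(0,1), stepping down from l_max.
        dy = np.diff(np.asarray(y, dtype=float))
        w = dy - dy.mean()           # residuals of Delta y on an intercept
        T = w.shape[0]               # here T counts the first differences
        C = (T / 100.0) ** 0.25 if data_dependent else 1.65
        denom = w @ w
        for l in range(l_max, 0, -1):
            j = l + 1                # order of the autocorrelation being tested
            rho_j = (w[j:] @ w[:-j]) / denom
            if abs(np.sqrt(T) * rho_j) > C:
                return l             # largest l with a significant rho_{l+1}
        return 0                     # no autocorrelation of order >= 2 significant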
There are substantial size distortions for small sample sizes and large 1m. However, the size distortions disappear fairly rapidly as T increases. We can note that the results in Table 4.9 are for T5500. For T in this range, the pretest critical values are less than 1.65, and correspondingly the size distortions in Table 4.9 are larger than in Table 4.3 (where the critical value was fixed at 1.65, for a nominal 10% a-level). However, the size distortions in Table 4.9 are not very severe for T2200 and [m not unreasonably large. For T larger than 741 , the data-dependent critical values will be greater than 1.65 and we presume that the size distortions of the stationarity test will be smaller than before. Now we turn to the power of the test. This is given in Table 4.10. The power increases with T and A and decreases with 1",”, as expected. For T, in the range considered here (T5500), power is lower in Table 4.10 than in Table 4.4 because now the pretest critical values are smaller and we pick larger values of I. This comparison would reverse for larger T. Size-adjusted power is given in Table 4.12, using the actual critical values from Table 4.11. We will not discuss these results separately. 85 3) The Leyboume-McCabe tests with iid errors — Fixed critical values Now we provide simulation results on the size and power of the various Leyboume-McCabe tests in presence of iid errors. The DGP is same as in the KPSS case and we first consider the model selection rule with a fixed critical value of 1.65, corresponding to the 10% significance level. The upper bound pmax is set to three and the simulation results are based on 10,000 replications. Tables 4.13-4.15 give the simulation results. The size and power of the tests are in Table 4.13. The tests have moderate size distortions, and these size distortions do not decrease as rapidly as for the KPSS test in Table 4.3. The power of the tests increases with T and generally with 1. (except LM99). As in the previous chapters, the various Leyboume-McCabe tests are all more or less equally powerful (again except LM99). LMMZ is a little more powerful than LM94, but the difference is small. We can compare the power of the Leyboume-McCabe tests with the model selection rule to the power with p=0, as in Table 2.5 of Chapter 2. There is a considerable power loss for larger values of 71.. For example, for T=100 and i=1, for LM94 and LMM2 we have power of 0.988 and l with p=0, but the power of the tests with model selection is only 0.917 and 0.925. Clearly this power loss is due to the fact that the model selection rule overspecifies p with positive probability; this is true even for large T. We can also see in Table 4.13 that, for fixed T, the power of the LM tests with model selection decreases as we move to the largest value of A. (2=100). This is a reflection of the hear-cancellation” problem discussed in Chapter 1. This problem does not occur for the KPSS test. 86 Table 4.15 gives the size-adjusted power of the Leyboume-McCabe tests, based on the actual critical values given in Table 4.14. The results are fairly similar to those in Table 4.13. We can compare the size-adjusted power of the Leyboume-McCabe tests with pm=3 (Table 4.15) with the size-adjusted power of the KPSS test with 1m=3 (Table 4.12). They are not too different. The results favor the LM tests over the KPSS test, except for large 7., when the KPSS test is preferred. 
4) The Leyboume-McCabe tests with iid errors - Data dependent critical values Here we perform simulations on the size and power of the Leyboume-McCabe tests when the pretest is performed with the data-dependent critical values we discussed in previous sections. That is, now the critical value equals (T/100)”4. The DGP is same as before, and the upper bound pm is set to three. Our simulation results are based on 10,000 replications. The main interest of this simulation is to see if there is any difference in the performance of the Leyboume-McCabe tests according to the choice of critical value for pretest. Tables 4.16-4.18 give our simulation results. The tests have moderate size distortions, but these decrease fairly rapidly as T increases. Power in Table 4.16 is rather similar to power in Table 4.13 (with fixed critical values) and this is also true of size- adjusted power (Table 4.18 versus Table 4.15). Larger sample sizes than we considered would presumably be necessary to find power gains fi'om the use of data-dependent critical values. 87 Power or size-adjusted power for the Leyboume-McCabe tests is not too different from power or size-adjusted power for the KPSS test with a data-dependent critical value for the pretest and with [mx=3. Once again the results seem to favor the Leyboume- McCabe tests except for large values of A. 5) The KPSS and Leyboume-McCabe tests with AR(1) errors Now we perform simulations with AR(1) errors of the form: ut=pum+et , where e; is normal white noise. We set the coefficient value p to be 1/3 for the same reason we discussed in previous chapters. We consider the KPSS test with 1m=3 and the LM test with pm=3 and use fixed critical values (1.65, for a 10% nominal significance level) for the pretest. Our simulation results are provided in Tables 4.19-4.21. Table 4.19 gives the size and power of the various tests, for various T and 7.. The KPSS test has large size distortions and these do not disappear for T=500 (size=0.080). This should be expected since its long run variance calculation does not take account correlations of order greater than three. The Leyboume-McCabe tests have smaller size distortions and these disappear quickly as T grows. The power of all of the tests increases with T and 7L. The power of the KPSS test compares favorably to the power of the Leyboume-McCabe tests, but this is only due to the size distortion. From Table 4.21, the size-adjusted power of the KPSS test is lower than that of the Leyboume-McCabe tests. For the Leyboume-McCabe tests, it is interesting to compare the present results to the earlier results with iid errors. Comparing Table 4.13 (iid errors) to Table 4.19 (AR(1) errors), we see that the size distortions are actually smaller in the AR(1) case. Size- 88 adjusted power (Table 4.21 versus Table 4.15) is similar. We can also compare the present results to the results in Chapter 2 for the AR(1) DGP and with p set equal to one (the true value). See Table 2.8 of Chapter 2 for these results. In terms of size or power, there seems to be little cost to model selection in the AR(1) case, in the sense that the results for the Leyboume-McCabe tests are not very different in Table 4.19 than in Table 2.8 of Chapter 2. It seems that the Leyboume-McCabe tests with model selection perform quite well, given our AR(1) DGP. 6) The KPSS test and Leyboume-McCabe tests with MA(1) errors Here we perform simulations with MA(1) errors of the form: y,=st+08t-1, where at is normal white noise. 
5) The KPSS and Leybourne-McCabe tests with AR(1) errors

Now we perform simulations with AR(1) errors of the form u_t = ρu_{t-1} + ε_t, where ε_t is normal white noise. We set the coefficient ρ to 1/3 for the same reason we discussed in previous chapters. We consider the KPSS test with lmax=3 and the LM tests with pmax=3, and we use fixed critical values (1.65, for a 10% nominal significance level) for the pretest. Our simulation results are provided in Tables 4.19-4.21.

Table 4.19 gives the size and power of the various tests, for various T and λ. The KPSS test has large size distortions, and these do not disappear even for T=500 (size=0.080). This should be expected, since its long-run variance calculation does not take account of correlations of order greater than three. The Leybourne-McCabe tests have smaller size distortions, and these disappear quickly as T grows. The power of all of the tests increases with T and λ. The power of the KPSS test compares favorably to the power of the Leybourne-McCabe tests, but this is only due to the size distortion: from Table 4.21, the size-adjusted power of the KPSS test is lower than that of the Leybourne-McCabe tests.

For the Leybourne-McCabe tests, it is interesting to compare the present results to the earlier results with iid errors. Comparing Table 4.13 (iid errors) to Table 4.19 (AR(1) errors), we see that the size distortions are actually smaller in the AR(1) case. Size-adjusted power (Table 4.21 versus Table 4.15) is similar. We can also compare the present results to the results in Chapter 2 for the AR(1) DGP with p set equal to one (the true value); see Table 2.8 of Chapter 2 for these results. In terms of size or power, there seems to be little cost to model selection in the AR(1) case, in the sense that the results for the Leybourne-McCabe tests in Table 4.19 are not very different from those in Table 2.8 of Chapter 2. It seems that the Leybourne-McCabe tests with model selection perform quite well, given our AR(1) DGP.

6) The KPSS and Leybourne-McCabe tests with MA(1) errors

Here we perform simulations with MA(1) errors of the form u_t = ε_t + θε_{t-1}, where ε_t is normal white noise. We pick θ=0.5 for the same reason we discussed in previous chapters. We consider the KPSS test with lmax=3 and the LM tests with pmax=3, as in the previous section, and we use fixed critical values (1.65, for a 10% nominal significance level) for the pretest. Our simulation results are given in Tables 4.22-4.24.

Now the KPSS test has correct size, whereas the Leybourne-McCabe tests suffer from size distortions. This is as expected. However, comparing Tables 4.19 and 4.22, the size distortions of the Leybourne-McCabe tests when the DGP is MA(1) are less serious than the size distortions of the KPSS test when the DGP is AR(1), and they go away more quickly as T increases. The power of the Leybourne-McCabe tests is comparable to that of the KPSS test, but this is largely due to the size distortion. The KPSS test is generally superior in terms of size-adjusted power (Table 4.24), especially when T is small. For T=500, there is not much difference, except when λ is large; then KPSS is again better.

It is interesting to compare the present results for the KPSS test to the earlier results with iid errors. Comparing Tables 4.3 and 4.4 (iid errors) to Table 4.22 (MA(1) errors), there are no substantial size distortions in either case, but power is lower in the MA(1) case. We can also compare the present results to the results in Chapter 2 for the MA(1) case with l set equal to one; see Table 2.11 of Chapter 2 for these results. Once again, in terms of size or power, there seems to be little cost to model selection. The KPSS test with model selection seems to work quite well, given our MA(1) DGP.

7) The KPSS and Leybourne-McCabe tests with ARMA(1,1) errors

Here we perform simulations using ARMA(1,1) errors of the form u_t = ρu_{t-1} + ε_t + θε_{t-1}, where ρ=1/3 and θ=1/2. As before, we choose these specific values of the AR and MA parameters to equate the contributions of the AR and MA terms to the "long-run variance" of the error series: the long-run variance of this process is σ_ε²(1+θ)²/(1-ρ)², and with ρ=1/3 and θ=1/2 the AR factor 1/(1-ρ) and the MA factor (1+θ) both equal 3/2. Our results are given in Tables 4.25-4.27, which have the same format as the previous tables for the AR and MA cases.

The Leybourne-McCabe tests have modest size distortions for T=100, but these have essentially disappeared for T≥200. The KPSS test has larger size distortions for T=100, and they decrease more slowly as T increases. The Leybourne-McCabe tests also typically have greater size-adjusted power than the KPSS test, except when λ is large. Overall, the Leybourne-McCabe tests seem to do better than the KPSS test when the DGP is ARMA(1,1).

5. Conclusions

In this chapter we considered the KPSS and Leybourne-McCabe tests when the number of lags is determined by a model selection rule. LM99 proposed a model selection rule to pick the AR order (p) in their test. We proposed a similar model selection rule to pick the MA order (l) for the KPSS test; this rule is based on testing the significance of correlations of the first-differenced series (Δy_t). We also proposed consistent model selection rules for the Leybourne-McCabe and KPSS tests. To obtain consistency, we let the critical values for the pretests go to infinity with T, but not too fast (e.g., critical value = kT^{1/4}).

For the KPSS test, the model selection procedure is based on tests of the correlations of the residuals from the regression of the first-differenced series Δy_t on an intercept. For consistency of the model selection rule, we need a critical value that grows with the sample size T, but not too quickly (i.e., cv = (T/100)^{1/4}).
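As an illustration of this rule, here is a minimal Python sketch (our own, with an assumed general-to-specific stopping rule; the dissertation does not give code). It demeans Δy_t, compares √T times each sample autocorrelation up to lmax with the critical value, and drops the highest lag while it is insignificant. The analogous rule applied to the AR order p gives the Leybourne-McCabe variant.

```python
import numpy as np

def pretest_lag(y, max_lag=3, data_dependent_cv=False):
    """Pick the number of lags by a pretest on autocorrelations of the
    demeaned first differences of y. A sketch: the general-to-specific
    stopping rule is an assumption, not the dissertation's exact code."""
    dy = np.diff(y)
    e = dy - dy.mean()          # residuals from regressing dy on an intercept
    T = len(e)
    # fixed 10% critical value, or the data-dependent one that grows with T
    cv = (T / 100.0) ** 0.25 if data_dependent_cv else 1.65
    def rho_hat(j):             # sample autocorrelation at lag j
        return (e[j:] @ e[:-j]) / (e @ e)
    lag = max_lag
    # drop the highest remaining lag while sqrt(T)*rho_hat is insignificant
    while lag > 0 and np.sqrt(T) * abs(rho_hat(lag)) < cv:
        lag -= 1
    return lag

# illustrative use: a random walk plus noise, T = 200
rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(200)) + rng.standard_normal(200)
print(pretest_lag(y), pretest_lag(y, data_dependent_cv=True))
```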
We also discussed the distribution theory of the KPSS test with a model selection rule. The KPSS test with a consistent model selection rule is Op(T) under the alternative, not Op(T/l) as it would be if l were allowed to grow with the sample size. Finally, we investigated the size and power characteristics of the tests via simulations. In these simulations, our data generating processes included white noise, AR(1) errors, MA(1) errors and ARMA(1,1) errors. Our conclusions are as follows.

1. Our consistent model selection rules work, in the sense that the probability of choosing the correct number of lags goes to one as T increases.

2. There is a power loss from model selection, as compared to knowing the correct number of lags, in finite samples. This loss seems to be smaller for the Leybourne-McCabe tests than for the KPSS test.

3. The LM99 test is still not recommended, due to its very poor power when λ is large. The other Leybourne-McCabe tests are quite similar to each other.

4. With autocorrelated errors, the KPSS test with model selection does not do well if the DGP is AR(1), and the LM tests with model selection do not do well if the DGP is MA(1). When we compare the different tests across different types of DGP, the Leybourne-McCabe tests can be argued to be more robust in finite samples than the KPSS test, in the sense that the size distortions of the KPSS test with AR(1) errors are greater than those of the Leybourne-McCabe tests with MA(1) errors. Similarly, the LM tests do better than the KPSS test under ARMA(1,1) errors.

Appendix II: Asymptotic Effects of Model Selection Rules on the KPSS Test

Let $\ell_0$ be the true value of $\ell$ (the order of the MA process for $u_t$), and let $\ell_{\max} \ge \ell_0$ be the specified upper bound. We called a model selection rule "consistent" if it picked an $\ell \in [\ell_0, \ell_{\max}]$ with probability one, asymptotically. This is the same notion of "consistency" as in LM99.

A "consistent" model selection rule does not affect the distribution of the KPSS test under the null. So long as the unweighted long-run variance estimate is used, $\operatorname{plim} s^2(m) = \sigma^2$ for all $m \in [\ell_0, \ell_{\max}]$, and therefore

$$\max_{m \in [\ell_0,\, \ell_{\max}]} \left| \hat\eta_\mu(m) - \hat\eta_\mu(\ell_0) \right| \to 0. \qquad (A1)$$

However, unless $\ell_{\max} = \ell_0$, a "consistent" model selection rule does affect the distribution of the KPSS test under the unit root alternative. For example, consider the simplest case, in which $\ell_0 = 1$ and $m = \ell_{\max} = 2$. Then

$$T^{-1} s^2(1) = T^{-2}\sum_{t=1}^{T}\hat u_t^2 + 2\,T^{-2}\sum_{t=2}^{T}\hat u_t \hat u_{t-1} \to 3\sigma^2 \int_0^1 Z(s)^2\,ds, \qquad (A2)$$

$$T^{-1} s^2(2) = T^{-1} s^2(1) + 2\,T^{-2}\sum_{t=3}^{T}\hat u_t \hat u_{t-2} \to 5\sigma^2 \int_0^1 Z(s)^2\,ds, \qquad (A3)$$

$$T^{-1}\left[\hat\eta_\mu(2) - \hat\eta_\mu(1)\right] = T^{-4}\sum_{t=1}^{T} S_t^2 \left\{T^{-1}s^2(2)\right\}^{-1} - T^{-4}\sum_{t=1}^{T} S_t^2 \left\{T^{-1}s^2(1)\right\}^{-1}$$
$$\to \left[\frac{1}{5} - \frac{1}{3}\right] \sigma^2 \int_0^1 \left(\int_0^a Z(s)\,ds\right)^2 da \Big/ \sigma^2 \int_0^1 Z(s)^2\,ds = -\frac{2}{15} \int_0^1 \left(\int_0^a Z(s)\,ds\right)^2 da \Big/ \int_0^1 Z(s)^2\,ds \neq 0. \qquad (A4)$$

More generally, let $\ell_0$ represent the true lag and $m = \ell_0 + q$ represent one of the lags possibly chosen (i.e., chosen with positive probability asymptotically), with $1 \le q \le (\ell_{\max} - \ell_0)$. Then

$$T^{-1} s^2(\ell_0) = T^{-2}\sum_{t=1}^{T}\hat u_t^2 + 2\sum_{s=1}^{\ell_0}\left[T^{-2}\sum_{t=s+1}^{T}\hat u_t \hat u_{t-s}\right] \to (1 + 2\ell_0)\,\sigma^2 \int_0^1 Z(s)^2\,ds, \qquad (A5)$$

$$T^{-1} s^2(m) = T^{-2}\sum_{t=1}^{T}\hat u_t^2 + 2\sum_{s=1}^{m}\left[T^{-2}\sum_{t=s+1}^{T}\hat u_t \hat u_{t-s}\right] \to (1 + 2m)\,\sigma^2 \int_0^1 Z(s)^2\,ds. \qquad (A6)$$
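To connect this algebra to something computable, here is a small Python sketch (our own illustration, not the dissertation's code) of the level-KPSS statistic with the unweighted long-run variance estimate $s^2(\ell)$ used above. For data generated under the unit root alternative, (A2)-(A4) imply that the ratio of the statistic at lag 2 to the statistic at lag 1 should approach 3/5 as T grows, which the final line illustrates.

```python
import numpy as np

def kpss_unweighted(y, l):
    """Level KPSS statistic using the unweighted long-run variance
    estimate s^2(l) from the appendix (an illustrative sketch)."""
    e = y - y.mean()                 # residuals from a regression on an intercept
    T = len(e)
    S = np.cumsum(e)                 # partial sums S_t
    s2 = (e @ e) / T                 # gamma_hat_0
    for j in range(1, l + 1):        # add 2*gamma_hat_j with no Bartlett weights
        s2 += 2.0 * (e[j:] @ e[:-j]) / T
    return (S @ S) / (T ** 2 * s2)   # T^{-2} sum S_t^2 / s^2(l)

# under the unit root alternative, eta(2)/eta(1) = s^2(1)/s^2(2) -> 3/5
rng = np.random.default_rng(1)
y = np.cumsum(rng.standard_normal(5000))              # a random walk
print(kpss_unweighted(y, 2) / kpss_unweighted(y, 1))  # close to 0.6 for large T
```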