GLS DETRENDING AND THE POWER OF UNIT ROOT AND STATIONARITY TESTS

By

Jaeyoun Hwang

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Economics

1993

ABSTRACT

GLS DETRENDING AND THE POWER OF UNIT ROOT AND STATIONARITY TESTS

By

Jaeyoun Hwang

This dissertation considers the problem of testing whether deviations of a time series from deterministic trend are stationary or contain a unit root. Common tests detrend the series either in levels, which is appropriate under stationarity, or in differences, which is appropriate given a unit root. This dissertation considers detrending by generalized least squares (GLS), based on an assumed value of the parameter of interest. This idea is closely related to King's theory of point optimal invariant (POI) tests. We consider two tests based on GLS residuals: the Bhargava-Schmidt-Phillips (BSP) test of a unit root, and the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test of stationarity. We derive asymptotic distributions for these GLS-based tests and for the corresponding POI tests, and we compare their finite sample properties through detailed Monte Carlo simulations. Our results show that the power of the GLS-based BSP unit root test is comparable to that of the POI test. However, the GLS-based KPSS test of stationarity is not very powerful, and is dominated by the POI test. This supports the relevance of our theoretical result that the GLS-based KPSS test is inconsistent.

Dedicated to my parents, from whom I have inherited health, intelligence, and especially the spirit of independence.
ACKNOWLEDGEMENTS

I am indebted to many people in different ways. I wish to thank them for their guidance, support, encouragement and patience. Especially, I wish to thank Professor Peter Schmidt, my dissertation chairman, for his considerate and effective guidance from the beginning to the end of this project. I appreciate his sharing of scarce sabbatical time and his patience, particularly over the last five months. I also wish to thank my other committee members, Professors Robert H. Rasche and Jeffrey Wooldridge, for their invaluable comments.

I gratefully acknowledge the financial support I have received from the Ssangyong Cement Industrial Co. Had it not been for their support, I could not have even gotten started.

Foremost thanks must go to my wife, Younghee, and my little son and daughter, Innyoung and Sunny. They gracefully put up with my negligence of duties at home and impatience for the past five years. I also have to thank my mother, who always has been praying for her children. Finally, I wish to thank the Invisible Hand for my unexplained good fortune. Whenever I am in troubled waters, She comes to help me.

TABLE OF CONTENTS

LIST OF TABLES

CHAPTER 1: INTRODUCTION

CHAPTER 2: ALTERNATIVE METHODS OF DETRENDING AND THE POWER OF UNIT ROOT TESTS
1. INTRODUCTION
2. UNIT ROOT TESTS AGAINST STATIONARY AND NONSTATIONARY AR(1) PROCESSES: NEW TESTS AND POI TESTS
3. DISTRIBUTION THEORY
4. SIMULATION RESULTS
5. CONCLUDING REMARKS
APPENDIX 1
APPENDIX 2
APPENDIX 3

CHAPTER 3: ALTERNATIVE METHODS OF DETRENDING AND THE POWER OF STATIONARITY TESTS
1. INTRODUCTION
2. STATIONARITY TESTS: GLS-BASED KPSS TEST AND POI TEST
3. DISTRIBUTION THEORY
4. SIMULATION RESULTS: SIZE AND POWER OF THE TESTS
5. CONCLUDING REMARKS
APPENDIX 1
APPENDIX 2
APPENDIX 3
APPENDIX 4

CHAPTER 4: CONCLUDING REMARKS

LIST OF REFERENCES

LIST OF TABLES

CHAPTER 2
Table 1a: 1%, 5%, and 10% critical values of ρ̃_S(ρ*)
Table 1b: 1%, 5%, and 10% critical values of ρ̃_N(ρ*)
Table 1c: 1%, 5%, and 10% critical values of τ̃_S(ρ*)
Table 1d: 1%, 5%, and 10% critical values of τ̃_N(ρ*)
Table 1e: 1%, 5%, and 10% critical values of DK_S(ρ*)
Table 1f: 1%, 5%, and 10% critical values of DK_N(ρ*)
Table 2: power, 5% lower tail tests, T = 50
Table 3: size and power, 5% lower tail tests, T = 100
Table 4: power, 5% lower tail tests, T = 200
Table 5: power, 5% lower tail tests, T = 100
Table 6: power, 5% lower tail tests, T = 50, u_0 drawn from N(0, 1/(1−ρ_1²))
Table 7: power, 5% lower tail tests, T = 100, u_0 drawn from N(0, 1/(1−ρ_1²))
Table 8: power, 5% lower tail tests, T = 200, u_0 drawn from N(0, 1/(1−ρ_1²))
Table 9: power, 5% lower tail tests, T = 25
Table 10: power, 5% lower tail tests, T = 500
Table 11: power, 5% lower tail tests, T = 100, u_0 = −10
Table 12: power, 5% lower tail tests, T = 100, u_0 = −
Table 13: power, 5% lower tail tests, T = 100, u_0 = −2
Table 14: power, 5% lower tail tests, T = 100, u_0 = −1

CHAPTER 3
Table 1a: 90%, 95%, 97.5%, and 99% critical values of η̃(θ*)
Table 1b: 1%, 2.5%, 5%, and 10% critical values of P_T(θ*)
Table 2: percentiles of point optimal tests P_T(θ*), T = 500
Table 3: size of η̃(θ*) and P_T(θ*) tests, T = 30
Table 4: size of η̃(θ*) and P_T(θ*) tests, T = 50
Table 5: size of η̃(θ*) and P_T(θ*) tests, T = 100
Table 6: size of η̃(θ*) and P_T(θ*) tests, T = 200
Table 7: size of η̃(θ*) and P_T(θ*) tests, T = 500
Table 8: power of η̃(θ*) and P_T(θ*) tests, T = 30
Table 9: power of η̃(θ*) and P_T(θ*) tests, T = 50
Table 10: power of η̃(θ*) and P_T(θ*) tests, T = 100
Table 11: power of η̃(θ*) and P_T(θ*) tests, T = 200
Table 12: power of η̃(θ*) and P_T(θ*) tests, T = 500

CHAPTER 1
INTRODUCTION

The finding of Nelson and Plosser (1982) that most U.S. macroeconomic data are nonstationary rather than stationary around a deterministic trend has had a huge impact on the character of empirical work in macroeconomics. It has become standard to test the hypothesis of a unit root in macroeconomic time series before proceeding with further analysis. This is so for the following two reasons.
First, the presence (or absence) of a unit root in certain series is predicted by alternative economic theories; for example, the efficient market hypothesis, real business cycle theory, and the permanent income theory of consumption. Second, the presence of a unit root has strong implications for methods of statistical inference in regression. Regression with nonstationary data may produce spurious results, so that common statistics such as t-statistics and measures like R² are not correct even asymptotically (Granger and Newbold (1974) and Phillips (1986)).

One of the stylized facts in the unit root literature of the past decade is that standard unit root tests often fail to reject the null hypothesis of a unit root for many economic time series. The conclusion that can be drawn from this empirical evidence is that most economic time series do not show strong evidence against the unit root hypothesis. It is not clear whether this occurs because most series actually have a unit root, or because standard unit root tests have low power against relevant alternatives. Therefore Kwiatkowski, Phillips, Schmidt and Shin (1992), hereafter KPSS, suggest that, in trying to decide whether economic time series are stationary or integrated, it would be useful to perform tests of the null hypothesis of stationarity as well as tests of the null hypothesis of a unit root.

To do so, we consider the Data Generating Process (DGP) to be of the following form:

(1A) y_t = ψ + ξt + u_t,
(1B) u_t = ρu_{t-1} + w_t − θw_{t-1}, t = 1,...,T.

Clearly u_t is the deviation of y_t from deterministic trend (ψ + ξt). For the moment we assume that w_t ~ NID(0, σ_w²). In matrix form,

(2) y = Zγ + u,

where Z is a T×2 matrix with t-th row z_t′ = [1, t], γ′ = [ψ, ξ], and u is a T×1 vector of realizations of the error process.
The point of this parameterization is that it allows for linear deterministic trend under the null and alternative hypotheses, and the interpretation of the parameters ψ (level) and ξ (trend) does not change whether the series is stationary or has a unit root. In addition, the distributions of all the unit root tests and stationarity tests considered in this thesis (except for the GLS-based KPSS tests in chapter 3) do not depend on the nuisance parameters ψ, ξ and σ_w.

Though many testable hypotheses can be formulated in terms of this DGP, by selecting particular values of the parameters ρ and θ, we are interested in two specific cases which imply trend stationary and difference stationary processes under the null and alternative hypotheses. First, we will consider testing the null hypothesis ρ = 1 against the alternative hypothesis ρ ∈ [0,1), assuming θ = 0. Then u_t has a unit root, so that y_t is difference stationary under the null hypothesis. All of the unit root tests that we will consider can be viewed as tests of the hypothesis ρ = 1 in this parameterization. Second, we will consider testing the null hypothesis θ = 1 against the alternative θ ∈ [0,1), assuming ρ = 1. Then the u_t = w_t are iid errors, so that y_t is trend stationary under the null hypothesis. We may note that even though the case of ρ = 0 and θ = 0 constitutes the same null hypothesis of stationarity as the case of ρ = 1 and θ = 1, the latter is more naturally related to the alternative hypothesis of a unit root, since y_t contains a unit root when ρ = 1 and θ ∈ [0,1).

This dissertation considers tests based on various types of residuals from equation (1A) for testing whether deviations of a time series from deterministic trend are stationary or contain a unit root. Obviously, different types of residuals correspond to different methods of detrending the series y_t. First, we will define û_t, t = 1,...,T, as the OLS residuals from (1A).
That is, they are the residuals from an OLS regression of y on an intercept and time trend. The unit root tests of Dickey and Fuller, hereafter DF, and the KPSS stationarity test are based on these OLS residuals. Second, Bhargava (1986), Schmidt and Phillips (1992) and Schmidt and Lee (1991) consider tests based on detrending in differences. That is, their tests are based on the residuals

(3) ũ_t = y_t − ψ̃_x − ξ̃t = [(T−1)y_t − (t−1)y_T − (T−t)y_1]/(T−1),

where ξ̃ = mean of Δy_t = (y_T − y_1)/(T−1) and ψ̃_x = y_1 − ξ̃ are the normal MLE's of the parameters ψ_x = ψ + u_0 and ξ when the restrictions ρ = 1 and θ = 0 are imposed. Following the terminology in Schmidt and Phillips, we will refer to the ũ_t as BSP residuals, and to their unit root tests as BSP tests.

The main contribution of this thesis is to consider tests based on generalized least squares (GLS) residuals from (1A). For the case of unit root testing, GLS would be based on an assumed value of ρ, say ρ*, against which we wish to maximize power. The case of testing the null of stationarity is similar, except that GLS is based on an assumed value of θ, say θ*. Tests based on the GLS residuals are closely related to the point optimal invariant (hereafter POI) tests proposed by King (1980) and developed in his later work (King and Hillier (1985), King (1988), and Dufour and King (1991)). King (1988) defines a point optimal test as a test that optimizes power at a predetermined point under the alternative hypothesis, and develops a theory of point optimal tests as a second best in cases in which a uniformly most powerful test does not exist. The theory of point optimal testing ensures that the test is most powerful among the set of invariant tests at a predetermined point in the alternative parameter space, but one hopes that it also may have better power than other tests in a neighborhood of that point. In addition, point optimal tests can be used to find the power envelope for a given testing problem, which will be a benchmark for other tests.
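As a numerical illustration of detrending in differences, the closed form in equation (3) can be checked directly: the BSP residuals computed from the restricted MLE's ξ̃ and ψ̃_x coincide with the bracketed expression, and they are identically zero at t = 1 and t = T. The following is a minimal sketch (ours, not from the dissertation); the series y is arbitrary illustrative data.

```python
# BSP residuals: detrending in differences, as in equation (3).
# y is any series observed at t = 1,...,T (illustrative data only).
y = [2.0, 2.7, 2.9, 4.1, 4.0, 5.2, 6.1, 6.0, 7.3, 8.1]
T = len(y)

# Restricted MLE's under rho = 1, theta = 0:
xi_tilde = (y[-1] - y[0]) / (T - 1)   # mean of the first differences
psi_x_tilde = y[0] - xi_tilde         # y_1 - xi_tilde

# BSP residuals, first from the definition, then from the closed form.
u_def = [y[t - 1] - psi_x_tilde - xi_tilde * t for t in range(1, T + 1)]
u_closed = [((T - 1) * y[t - 1] - (t - 1) * y[-1] - (T - t) * y[0]) / (T - 1)
            for t in range(1, T + 1)]

# The two expressions agree, and the residuals vanish at both endpoints.
assert all(abs(a - b) < 1e-12 for a, b in zip(u_def, u_closed))
assert abs(u_def[0]) < 1e-12 and abs(u_def[-1]) < 1e-12
```

Because ψ̃_x and ξ̃ interpolate the first and last observations exactly, the residuals are tied down at both endpoints; this is the sense in which the series is detrended "in differences."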
Chapter 2 considers the problem of testing the null hypothesis of a unit root. Thus in equation (1B) we impose θ = 0, and we wish to test the null hypothesis ρ = 1 against the alternative ρ < 1. Given a set of residuals, say ê_t, we will consider tests based on the artificial regression

(4) Δê_t = φê_{t-1} + error, t = 2,...,T.

Let φ̂ be the OLS estimate of φ in (4). We will consider coefficient-based tests of the form Tφ̂, and also tests based on the t-statistic for the hypothesis φ = 0. These can be regarded as variants of the Dickey-Fuller tests. Specifically, if the ê_t are the OLS residuals û_t from (1A) and φ̂ is the OLS estimate from (4), then the DF statistic ρ̂ equals Tφ̂ and the DF statistic τ̂ is the t-statistic for φ = 0 in equation (4). The BSP tests are also of this general form. Consider the equivalent of equation (4), using ũ_t in place of û_t:

(5) Δũ_t = φũ_{t-1} + error, t = 2,...,T,

and let φ̃ be the OLS estimate of φ in (5). Then Schmidt and Lee (1991) and Schmidt and Phillips (1992) consider the statistics ρ̃ = Tφ̃ and τ̃ = t-statistic for the hypothesis φ = 0. In the absence of corrections for autocorrelation, ρ̃ and τ̃ are equivalent to each other and to Bhargava's statistic N2.

From this perspective, the Dickey-Fuller tests and the BSP tests are of exactly the same form, except that û_t is used in Dickey-Fuller tests while ũ_t is used in BSP tests. Both û_t and ũ_t are residuals from the levels equation (1A), but ũ_t is based on parameters estimated using differences (i.e., GLS estimates under the null that ρ = 1) whereas û_t is based on the parameters estimated using levels. Since the regression in levels is spurious under the null, in the sense of Granger and Newbold (1974) and Phillips (1986), we might expect BSP tests to be more powerful than Dickey-Fuller tests against alternatives near the null. Conversely, we might expect the Dickey-Fuller tests to be more powerful than the BSP tests against alternatives far from the null.
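The artificial regression (4) is simple enough to sketch in a few lines: given any residual series, regress its first difference on its own lag with no intercept, and form the coefficient statistic Tφ̂ and the usual t-statistic. The sketch below is ours (the function name and the residual series are illustrative, not from the dissertation); the same function applied to OLS residuals gives the DF statistics and applied to BSP residuals gives ρ̃ and τ̃.

```python
import math

def df_type_stats(e):
    """Coefficient statistic T*phi_hat and t-statistic from the no-intercept
    regression: delta e_t = phi * e_{t-1} + error, t = 2,...,T  (equation (4))."""
    T = len(e)
    num = sum((e[t] - e[t - 1]) * e[t - 1] for t in range(1, T))
    den = sum(e[t - 1] ** 2 for t in range(1, T))
    phi_hat = num / den
    # Regression standard error: T-1 observations, one parameter.
    ssr = sum(((e[t] - e[t - 1]) - phi_hat * e[t - 1]) ** 2 for t in range(1, T))
    s2 = ssr / (T - 2)
    t_stat = phi_hat / math.sqrt(s2 / den)
    return T * phi_hat, t_stat

# Illustrative residual series (detrended data would be used in practice).
e = [0.5, 0.9, 0.2, -0.4, 0.1, 0.6, -0.3, -0.8, -0.2, 0.3]
rho_stat, tau_stat = df_type_stats(e)
```

A lower-tail test rejects the unit root null when these statistics are sufficiently negative, using critical values such as those tabulated in chapter 2.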
In fact, this pattern is exactly what Schmidt and Phillips (1992) and Schmidt and Lee (1991) find in their Monte Carlo experiments. This seems to be a dilemma from a practical point of view. However, we may ask a more fundamental question here: is there any other test which can dominate the Dickey-Fuller and BSP tests?

In order to answer this question, we consider test statistics based on the GLS residuals from (1A), where GLS is based on an assumed value of ρ, say ρ*, against which we wish to maximize power. The Dickey-Fuller tests and BSP tests correspond to ρ* = 0 and ρ* = 1, respectively. In fact, a value like ρ* = 0.85 might be reasonable in annual data, and the resulting tests might be expected to have better power than Dickey-Fuller and BSP tests not only against the specific alternative ρ = ρ*, but also against alternatives in a (hopefully large) neighborhood of ρ*.

Dufour and King (1991) derive the point optimal invariant (POI) test of the hypothesis ρ = ρ_0 against the alternative ρ = ρ*, so that the unit root case corresponds to ρ_0 = 1. Its calculation compares the unexplained sums of squares in GLS regressions based on ρ_0 and ρ*, so that the POI unit root test statistic is also a function of GLS residuals.

In chapter 2 we present six unit root tests. We discuss coefficient-based and t-statistic tests based on GLS detrending, and a Dufour-King type POI test. However, there are two versions of each of these tests, depending on whether the alternative is taken to be a stationary AR(1) process or a particular type of nonstationary AR(1) process. This distinction occurs because we consider two of the several possible ways of treating the initial observation. According to our DGP as expressed in equation (1B), the initial "observation" u_1 is generated as

(6) u_1 = ρu_0 + w_1.

We consider two different assumptions about u_0. First, we consider the case that u_0 is fixed.
In this case the distribution of u_t is nonstationary, and the error covariance matrix used in GLS estimation is given in equation (7) of chapter 2. For a given value of ρ*, we obtain GLS residuals which we denote by ũ_{Nt}(ρ*); GLS-based tests ρ̃_N(ρ*) and τ̃_N(ρ*); and a Dufour-King type POI test DK_N(ρ*). Second, we consider the case that u_0 is random, with mean zero and variance σ_w²/(1−ρ²). In this case the distribution of u_t is covariance stationary, and the error covariance matrix used in GLS estimation is given in equation (9) in chapter 2. For a given value of ρ*, we obtain GLS residuals ũ_{St}(ρ*); GLS-based tests ρ̃_S(ρ*) and τ̃_S(ρ*); and a POI test DK_S(ρ*). The limits of these tests as ρ* → 1 are well defined.

In chapter 2, we derive the asymptotic distributions of these test statistics, and we show how to construct asymptotically valid tests in the presence of error autocorrelation. We tabulate critical values for our tests, and we investigate their power in a set of Monte Carlo experiments. Specifically, the value of ρ* used in GLS detrending affects the size and power of the tests asymptotically and in finite samples. Let ρ_1 denote the true value of ρ in the DGP. Then power depends on T, ρ*, ρ_1, and the treatment of the initial observation. We perform extensive Monte Carlo experiments to investigate the power of the tests as a function of these parameters. The GLS-based tests offer a clear gain in power relative to the Dickey-Fuller and BSP tests over an empirically relevant range of the parameter space. Their power is comparable to that of the POI test.

In chapter 3 we consider the problem of testing the null hypothesis of trend stationarity. Thus in equation (1B) we impose ρ = 1, and we wish to test the null hypothesis θ = 1 against the alternative θ < 1. Thus we are testing for a unit root in the moving-average representation of Δu_t (i.e., overdifferencing).
Alternatively and equivalently, we can follow KPSS in expressing u_t in terms of a components representation:

(7) u_t = r_t + ε_t, r_t = r_{t-1} + v_t,

where the ε_t are iid(0, σ_ε²) errors and the v_t are iid(0, σ_v²). Here λ (≡ σ_v²/σ_ε² ≥ 0) is the signal to noise ratio, which measures the ratio of the changes in the permanent versus transitory components (Shephard and Harvey (1990)). The signal to noise ratio λ is related to the moving average parameter θ in the following way:

(8) θ = {(λ + 2) − [λ(λ + 4)]^(1/2)}/2, λ = (θ − 1)²/θ,
(9) σ_v² = λσ_ε².

Thus the null hypothesis of trend stationarity corresponds to λ = 0 (or σ_v² = 0 or θ = 1), and the alternative hypothesis of difference stationarity corresponds to λ > 0 (or σ_v² > 0 or θ < 1).

In this context, the one-sided LM test can be derived under the stronger assumption that the ε_t are iid N(0, σ_ε²) and the v_t are iid N(0, σ_v²). Let ê_t, t = 1,...,T, be the OLS residuals from the regression of y on intercept and trend; they correspond to û_t above. Define σ̂_ε² and S_t to be the estimate of the error variance from this regression and the partial sum process of the residuals, respectively:

(10) σ̂_ε² = T^(-1) Σ_{t=1}^T ê_t²,
(11) S_t = Σ_{j=1}^t ê_j, t = 1,...,T.

Then the LM statistic is given as follows:

(12) LM = Σ_{t=1}^T S_t² / σ̂_ε².

KPSS (1992) consider the asymptotic distribution of the LM statistic under the null hypothesis with weaker assumptions about the errors. They modify the LM statistic to allow for autocorrelation in ε_t by replacing the denominator σ̂_ε² with a consistent estimate of the long run variance of ε_t. Define the estimated autocovariances γ̂(s) = T^(-1) Σ_{t=s+1}^T ê_t ê_{t-s}, s = 0, 1, ..., T−1, and the long run variance estimator s²(ℓ) = γ̂(0) + 2 Σ_{s=1}^ℓ w(s,ℓ) γ̂(s). Here w(s,ℓ) is an optional weighting function, such as the Bartlett window w(s,ℓ) = 1 − s/(ℓ+1), and ℓ is the number of lags used in estimating the long run variance, satisfying ℓ → ∞ but ℓ/T → 0 as T → ∞. Then the KPSS statistic is

(13) η̂ = T^(-2) Σ_{t=1}^T S_t² / s²(ℓ).
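The steps (10)-(13) can be sketched compactly: accumulate the partial sums of the residuals, estimate the long run variance with Bartlett weights, and form the ratio. The sketch below is ours, not from the dissertation; the residual series is illustrative, and in practice it would be the OLS-detrended series ê_t.

```python
def kpss_stat(e, lags):
    """KPSS statistic (13): T^-2 times the sum of squared partial sums of the
    residuals, divided by a Bartlett-window long run variance s^2(l)."""
    T = len(e)
    # Partial sums S_t of the residuals, equation (11).
    S, run = [], 0.0
    for et in e:
        run += et
        S.append(run)
    # Estimated autocovariances gamma_hat(s), s = 0,...,lags.
    gamma = [sum(e[t] * e[t - s] for t in range(s, T)) / T
             for s in range(lags + 1)]
    # Long run variance with Bartlett weights w(s,l) = 1 - s/(l+1).
    s2 = gamma[0] + 2.0 * sum((1.0 - s / (lags + 1)) * gamma[s]
                              for s in range(1, lags + 1))
    return sum(St ** 2 for St in S) / (T ** 2 * s2)

# Illustrative residuals.  With lags = 0, s^2(0) is just the sample variance
# sigma_hat_eps^2, and the statistic reduces to LM / T^2.
e = [0.4, -0.2, 0.3, -0.5, 0.1, 0.2, -0.3, 0.0, 0.25, -0.25]
eta = kpss_stat(e, lags=2)
```

The test is upper tail: large partial sums relative to the long run variance are evidence against stationarity.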
In chapter 3 we modify the KPSS statistic by basing it on GLS residuals instead of OLS residuals. GLS is based on an assumed value θ* < 1 in the MA representation (1B), or equivalently, on an assumed value λ* > 0 in the components representation (7). A given value of θ* implies the covariance matrix Ω(θ*) given by equation (10) in chapter 3, and a set of GLS residuals ẽ_t(θ*). Let S̃_t(θ*) be the partial sum process of this residual process. Let s̃(ℓ)² be an estimator of the long run variance defined in the same way as s(ℓ)² above, except that ẽ_t(θ*) replaces ê_t. Then the GLS-based KPSS test can be defined as an upper tail test based on the statistic

(14) η̃(θ*) = T^(-2) Σ_{t=1}^T S̃_t(θ*)² / s̃(ℓ)².

Thus S̃_t(θ*) and s̃(ℓ)² are used in the KPSS statistic instead of S_t and s(ℓ)².

We also consider the POI test of the stationarity hypothesis. Thus we consider the problem of testing the null θ = 1 against the specific alternative θ = θ* < 1. The POI test is a lower tail test based on the statistic P_T(θ*), defined as a ratio of quadratic forms in GLS residuals:

(15) P_T(θ*) = ẽ(θ*)′Ω^(-1)(θ*)ẽ(θ*) / ẽ(1)′Ω^(-1)(1)ẽ(1),

where ẽ(θ*) and ẽ(1) are the GLS residual vectors from (1A) under the alternative θ = θ* and under the null θ = 1, respectively.

In chapter 3, we derive the asymptotic distributions of the GLS-based KPSS test and the POI test under the stationary null and under the unit root alternative. The GLS-based KPSS test turns out to be inconsistent against unit root alternatives, so we do not expect it to have good power properties in finite samples. We tabulate critical values for our tests, and we investigate their power in a set of Monte Carlo experiments. As expected, the GLS-based KPSS test is not very powerful. However, the POI test based on a reasonable value for θ* is considerably more powerful than the (standard, OLS-based) KPSS test over a wide range of θ.
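A ratio of the form (15) can be sketched numerically. Since the chapter 3 covariance matrix Ω(θ*) is not reproduced here, the sketch below uses the covariance implied by the components representation (7) with r_0 = 0 and unit σ_ε², namely Ω(λ) = I + λC with C_st = min(s,t); this is an assumed normalization on our part, and the dissertation's Ω(θ*) may be scaled differently. Under the null (λ = 0) the matrix reduces to the identity, so ẽ(1) is just the OLS residual vector.

```python
import numpy as np

def gls_residuals(y, Omega):
    """Residuals from the GLS regression of y on [1, t] with covariance Omega."""
    T = len(y)
    Z = np.column_stack([np.ones(T), np.arange(1, T + 1)])
    Oinv_Z = np.linalg.solve(Omega, Z)
    Oinv_y = np.linalg.solve(Omega, y)
    gamma = np.linalg.solve(Z.T @ Oinv_Z, Z.T @ Oinv_y)
    return y - Z @ gamma

def poi_ratio(y, lam_star):
    """Ratio of GLS quadratic forms as in (15), using the assumed components
    covariance Omega(lambda) = I + lambda*C, C_st = min(s,t), r_0 = 0."""
    T = len(y)
    idx = np.arange(1, T + 1)
    C = np.minimum.outer(idx, idx).astype(float)
    Omega_alt = np.eye(T) + lam_star * C   # alternative: lambda = lambda* > 0
    Omega_null = np.eye(T)                 # null: lambda = 0, i.e. theta = 1
    e_alt = gls_residuals(y, Omega_alt)
    e_null = gls_residuals(y, Omega_null)
    num = e_alt @ np.linalg.solve(Omega_alt, e_alt)
    den = e_null @ np.linalg.solve(Omega_null, e_null)
    return num / den

y = np.array([1.2, 1.9, 2.2, 3.4, 3.1, 4.5, 5.0, 5.2, 6.4, 6.9])
P = poi_ratio(y, lam_star=0.1)
```

At λ* = 0 both quadratic forms coincide and the ratio is exactly 1; the POI test rejects the stationary null for small values of the ratio.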
Thus for this problem, as for the unit root testing problem, the POI approach offers the promise of substantial gains in power over other standard tests.

Finally, chapter 4 contains some concluding remarks.

CHAPTER 2
ALTERNATIVE METHODS OF DETRENDING AND THE POWER OF UNIT ROOT TESTS

1. INTRODUCTION

The purpose of this chapter is to provide new tests of the null hypothesis of a unit root against the alternative of trend stationarity. These tests are based upon detrending the series by a generalized least squares (GLS) regression, using an empirically plausible value of the autoregressive root. These tests are related to the unit root tests of Bhargava (1986), Schmidt and Phillips (1992) and Schmidt and Lee (1991), and also to the point optimal tests of Dufour and King (1991). Elliott, Rothenberg and Stock (1992), in work done independently of ours, have recently proposed essentially the same tests.

Following Dickey (1984), Bhargava (1986), Schmidt and Phillips (1992) and others, we consider the data generating process (DGP) to be of the form:

(1) y_t = ψ + ξt + u_t, u_t = ρu_{t-1} + ε_t, t = 1,...,T,

where ε_t ~ NID(0, σ²). In matrix form,

(1′) y = Zγ + u,

where Z is a matrix of dimension T×2 with t-th row z_t′ = [1, t], γ′ = [ψ, ξ], and u is a T×1 vector of realizations of the error process. The null hypothesis of a unit root corresponds to ρ = 1, and the alternative hypothesis to be considered in this chapter corresponds to ρ ∈ [0,1). This parameterization is useful because it allows for linear deterministic trend under the null and alternative hypotheses, with the interpretation of the parameters ψ (level) and ξ (trend) being the same whether the null hypothesis holds or not. In addition, the distributions of most common unit root tests, and of all of the tests considered in this chapter, are independent of the nuisance parameters ψ, ξ, and σ under both the null and the alternative hypotheses.
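The DGP in (1) is easy to simulate, and doing so makes concrete the two treatments of the initial observation u_0 considered in this thesis (fixed, or drawn from the stationary distribution). The sketch below is ours; the function name and defaults are illustrative, not from the dissertation.

```python
import random

def simulate_dgp(T, rho, psi=0.0, xi=0.0, sigma=1.0, u0=None, seed=0):
    """Simulate y_t = psi + xi*t + u_t, u_t = rho*u_{t-1} + eps_t,
    eps_t ~ N(0, sigma^2).  If u0 is None and |rho| < 1, u_0 is drawn from
    the stationary distribution N(0, sigma^2/(1 - rho^2)); otherwise u_0 is
    held fixed at the given value (the nonstationary initialization)."""
    rng = random.Random(seed)
    if u0 is None:
        u = rng.gauss(0.0, sigma / (1.0 - rho ** 2) ** 0.5)
    else:
        u = float(u0)
    y = []
    for t in range(1, T + 1):
        u = rho * u + rng.gauss(0.0, sigma)
        y.append(psi + xi * t + u)
    return y

# Null hypothesis (rho = 1, fixed u_0 = 0) versus a stationary alternative.
y_null = simulate_dgp(100, rho=1.0, u0=0.0, seed=1)
y_alt = simulate_dgp(100, rho=0.85, seed=1)
```

Monte Carlo power studies of the kind reported in this chapter repeat such draws many times, apply each test, and record rejection frequencies; note that with ρ = 1 the stationary initialization is unavailable, so u_0 must be fixed under the null.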
In this chapter we will consider tests based on various types of residuals (OLS and GLS) from equation (1). Given a set of residuals, say ê_t, we will consider tests based on the artificial regression

(2) Δê_t = φê_{t-1} + error, t = 2,...,T.

Let φ̂ be the OLS estimate of φ in (2). We will consider coefficient-based tests of the form Tφ̂, and also tests based on the t-statistic for the hypothesis φ = 0. These can be regarded as variants of the Dickey-Fuller tests. Specifically, if the ê_t are the OLS residuals û_t from (1) and φ̂ is the OLS estimate from (2), then the Dickey-Fuller statistic ρ̂ equals Tφ̂ and the Dickey-Fuller statistic τ̂ is the t-statistic for φ = 0 in equation (2).

Bhargava (1986), Schmidt and Phillips (1992) and Schmidt and Lee (1991) consider tests based on detrending in differences. That is, their tests are based on the residuals

(3) ũ_t = y_t − ψ̃_x − ξ̃t = [(T−1)y_t − (t−1)y_T − (T−t)y_1]/(T−1),

where ξ̃ = (y_T − y_1)/(T−1) and ψ̃_x = y_1 − ξ̃ are the normal MLE's of the parameters ψ_x = ψ + u_0 and ξ when the restriction ρ = 1 is imposed. (Following the terminology in Schmidt and Phillips, we will refer to tests based on ũ_t as BSP tests. Note that our ũ_t is Schmidt and Phillips' S_t.) Consider the equivalent of equation (2), using ũ_t in place of ê_t:

(4) Δũ_t = φũ_{t-1} + error, t = 2,...,T,

and let φ̃ be the OLS estimate of φ in (4). Then Schmidt and Lee (1991) and Schmidt and Phillips (1992) consider the statistics ρ̃ = Tφ̃ and τ̃ = t-statistic for the hypothesis φ = 0. In the absence of corrections for autocorrelation, ρ̃ and τ̃ are equivalent to each other and to Bhargava's statistic N2. In this chapter we will not consider the statistics ρ and τ of Schmidt and Phillips (1992), or the closely related R2 statistic of Bhargava (1986), which are based on an artificial regression like (4) above but with an intercept.
From this perspective, the Dickey-Fuller tests and the BSP tests are of exactly the same form, except that û_t is used in Dickey-Fuller tests while ũ_t is used in BSP tests. Both û_t and ũ_t are residuals from the levels equation (1), but ũ_t is based on parameters estimated using differences (i.e., GLS estimates under the null that ρ = 1) whereas û_t is based on the parameters estimated using levels. Since the regression in levels is spurious under the null, in the sense of Granger and Newbold (1974) and Phillips (1986), we might expect BSP tests to be more powerful than Dickey-Fuller tests against alternatives near the null. Conversely, we might expect the Dickey-Fuller tests to be more powerful than the BSP tests against alternatives far from the null. In fact, this pattern is exactly what Schmidt and Phillips (1992) and Schmidt and Lee (1991) find in their Monte Carlo experiments.

In this chapter we construct test statistics based on the GLS residuals from (1), where GLS is based on an assumed value of ρ, say ρ*, against which we wish to maximize power. The Dickey-Fuller tests and BSP tests correspond to ρ* = 0 and ρ* = 1, respectively. In fact, a value like ρ* = 0.85 might be reasonable in annual data, and the resulting tests might be expected to have better power than Dickey-Fuller and BSP tests not only against the specific alternative ρ = ρ*, but also against alternatives in a (hopefully large) neighborhood of ρ*.

This idea dates back at least to King (1980) and has been developed in his later work (King and Hillier (1985), King (1988), and Dufour and King (1991)). King (1988) defines a point optimal test as a test that optimizes power at a predetermined point under the alternative hypothesis, and develops a theory of point optimal tests as a second best in cases in which a UMP test does not exist.
Dufour and King (1991) derive the point optimal invariant (POI) test of the hypothesis ρ = ρ_0 against the alternative ρ = ρ*, so that the unit root case corresponds to ρ_0 = 1. Its calculation compares the unexplained sums of squares in GLS regressions based on ρ_0 and ρ*, so that the POI unit root test statistic is also a function of GLS residuals. The Dufour-King POI test is based on a specific assumption about the generation of the initial value of the series, and it is not guaranteed to be point optimal under some initializations that we consider. Nevertheless, as we shall see, the POI test and the Dickey-Fuller type tests based on GLS residuals are not very different.

The value of ρ* used in GLS detrending affects the size and power of the tests asymptotically and in finite samples. Let ρ_1 denote the true value of ρ in the DGP. Then power depends on T, ρ*, ρ_1, and the treatment of the initial observation. We perform extensive Monte Carlo experiments to investigate the power of the tests as a function of these parameters. The new tests offer a clear gain in power relative to the Dickey-Fuller and BSP tests over an empirically relevant range of the parameter space. Their power is comparable to that of the POI test.

2. UNIT ROOT TESTS AGAINST STATIONARY AND NONSTATIONARY AR(1) PROCESSES: NEW TESTS AND POI TESTS

In this section we present six unit root tests. We discuss coefficient-based and t-statistic tests based on GLS detrending, and a Dufour-King type test. However, there are two versions of each of these tests, depending on whether the alternative is taken to be a stationary AR(1) process or a particular type of nonstationary AR(1) process. This distinction occurs because we consider two of the several possible ways of treating the initial observation. According to our DGP given in equation (1), the initial "observation" u_1 is generated as

(5) u_1 = ρu_0 + ε_1.

We consider two different assumptions about u_0.
First, we consider the case that u_0 is fixed. In this case the distribution of u_t is nonstationary. Second, we consider the case that u_0 is random, with mean zero and variance σ²/(1−ρ²). In this case the distribution of u_t is covariance stationary. Neither of these assumptions generally corresponds to the Dufour-King treatment of the initial observation. They assume

(6) u_1 = d_1 ε_1

for some constant d_1. This is different from either of our assumptions, except in two special cases to be discussed below. We note in passing that Elliott, Rothenberg and Stock (1992) focus on asymptotics and therefore do not discuss the treatment of the initial observation in detail. However, even though it will not matter asymptotically, the treatment of the initial observation can be important in finite samples.

Consider first the case that u_0 is assumed to be fixed, so that u_t is a nonstationary AR(1) process. Then the covariance matrix of the T×1 vector u is σ²Ω_N(ρ), where Ω_N(ρ) and its inverse are as follows:

(7) Ω_N(ρ) =
  [ 1          ρ               ρ²           ...  ρ^(T−1)              ]
  [ ρ          1+ρ²            ρ(1+ρ²)      ...  ρ^(T−2)(1+ρ²)        ]
  [ ρ²         ρ(1+ρ²)         1+ρ²+ρ⁴      ...  ρ^(T−3)(1+ρ²+ρ⁴)     ]
  [ ...        ...             ...          ...  ...                  ]
  [ ρ^(T−1)    ρ^(T−2)(1+ρ²)   ...          ...  1+ρ²+...+ρ^(2(T−1))  ]

that is, the (s,t) element of Ω_N(ρ) is ρ^|t−s| (1 + ρ² + ... + ρ^(2(min(s,t)−1))), and

(8) Ω_N^(−1)(ρ) =
  [ 1+ρ²   −ρ     0     ...   0      0  ]
  [ −ρ     1+ρ²   −ρ    ...   0      0  ]
  [ 0      −ρ     1+ρ²  ...   0      0  ]
  [ ...                                 ]
  [ 0      0      ...   −ρ    1+ρ²   −ρ ]
  [ 0      0      ...   0     −ρ     1  ].

We may note that our Ω_N(ρ) is the same as Dufour and King's Ω(ρ,1), as defined on p. 123 of their article. This correspondence occurs because our DGP with u_0 = 0 is the same as the Dufour-King DGP with d_1 = 1; in each case u_1 = ε_1. When u_0 ≠ 0, our model is not the same as the Dufour-King model, even though the covariance matrix of u is the same under both models. All of the tests in this chapter have distributions that are invariant to the value of u_0 under the null hypothesis, but power depends on u_0, and the Dufour-King POI test has no known optimality properties for u_0 ≠ 0.

Next consider the case that u_0 is assumed to be random, with mean zero and variance σ²/(1−ρ²), so that u_t is a covariance stationary AR(1) process.
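The structure of (7) and its tridiagonal inverse (8) can be verified numerically for a small T. The sketch below is ours (illustrative only): it builds Ω_N(ρ) from the general (s,t) element, builds the claimed inverse, and checks that their product is the identity.

```python
def omega_N(rho, T):
    """Nonstationary AR(1) covariance (7): the (s,t) element is
    rho^|t-s| * (1 + rho^2 + ... + rho^(2*(min(s,t)-1)))."""
    def elem(s, t):
        return rho ** abs(t - s) * sum(rho ** (2 * j) for j in range(min(s, t)))
    return [[elem(s, t) for t in range(1, T + 1)] for s in range(1, T + 1)]

def omega_N_inv(rho, T):
    """Tridiagonal inverse (8): diagonal 1+rho^2 except the (T,T) element,
    which is 1; off-diagonals -rho."""
    A = [[0.0] * T for _ in range(T)]
    for i in range(T):
        A[i][i] = 1.0 + rho ** 2
        if i + 1 < T:
            A[i][i + 1] = A[i + 1][i] = -rho
    A[T - 1][T - 1] = 1.0
    return A

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# The product should be (numerically) the identity matrix.
rho, T = 0.85, 6
P = matmul(omega_N(rho, T), omega_N_inv(rho, T))
assert all(abs(P[i][j] - (1.0 if i == j else 0.0)) < 1e-10
           for i in range(T) for j in range(T))
```

The tridiagonal form reflects the fact that the transformation ε_1 = u_1, ε_t = u_t − ρu_{t-1} whitens the errors, so Ω_N^(−1)(ρ) is the cross-product of the corresponding differencing matrix.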
Then the covariance matrix of the vector u is σ²Ω_S(ρ), where Ω_S(ρ) and its inverse are as follows:

(9)  Ω_S(ρ) = (1−ρ²)⁻¹ ×
[ 1          ρ          ρ²    ...  ρ^(T−1) ]
[ ρ          1          ρ     ...  ρ^(T−2) ]
[ ρ²         ρ          1     ...  ρ^(T−3) ]
[ ...        ...        ...   ...  ...     ]
[ ρ^(T−1)    ρ^(T−2)    ...   ...  1       ]

(10)  Ω_S⁻¹(ρ) =
[ 1     −ρ     0     ...  0     0   ]
[ −ρ    1+ρ²   −ρ    ...  0     0   ]
[ 0     −ρ     1+ρ²  ...  0     0   ]
[ ...   ...    ...   ...  ...   ... ]
[ 0     0      0     ...  1+ρ²  −ρ  ]
[ 0     0      0     ...  −ρ    1   ]

We may note that our model with random u_0 is the same as the Dufour-King model with d_1 = (1−ρ²)^(−1/2).

We can now define our GLS-based tests. For a given ρ* in the interval [0,1), let ũ_{S,t}(ρ*), t = 1,...,T, be the residuals from the GLS regression of y_t on [1,t], using the (assumed) error covariance matrix Ω_S(ρ*), and consider the regression

(11)  Δũ_{S,t}(ρ*) = φũ_{S,t−1}(ρ*) + error,  t = 2,...,T,

similarly to equations (2) and (4) above. Define φ̂_S(ρ*) as the OLS estimate of φ in this regression:

(12)  φ̂_S(ρ*) = Σ_{t=2}^{T} Δũ_{S,t}(ρ*)ũ_{S,t−1}(ρ*) / Σ_{t=2}^{T} ũ_{S,t−1}(ρ*)².

Then we define ρ̂_S(ρ*) = Tφ̂_S(ρ*), and τ̂_S(ρ*) = the usual t-statistic for the hypothesis φ = 0 in (11). The tests ρ̂_N(ρ*) and τ̂_N(ρ*) are defined in exactly the same way, except that we use the residuals ũ_{N,t}(ρ*) from the GLS regression of y_t on [1,t], using the (assumed) error covariance matrix Ω_N(ρ*).

When ρ* = 0, the GLS residuals ũ_{S,t}(0) and ũ_{N,t}(0) become the OLS residuals û_t, and correspondingly our tests become the Dickey-Fuller tests: ρ̂_S(0) = ρ̂_N(0) = ρ̂_τ and τ̂_S(0) = τ̂_N(0) = τ̂_τ. Similarly, when ρ* = 1, the GLS residuals ũ_{S,t}(1) and ũ_{N,t}(1) become the BSP residuals ũ_t, and our tests become the BSP tests ρ̃ and τ̃. More precisely, ρ̂_N(1) = lim_{ρ*→1} ρ̂_S(ρ*) = ρ̃ and τ̂_N(1) = lim_{ρ*→1} τ̂_S(ρ*) = τ̃; the limits are taken in the stationary case because Ω_S(1) is singular. The mathematical details for ρ* = 1 are given in Appendix 1.

The relationship between these tests and the Dufour-King POI test is slightly more complicated. We consider the statistic s²(1, ρ*, d1*) as given by Dufour and King (Theorem 5, p. 127).
Here d1* is an assumed value of d_1 in equation (6) above, and the statistic equals the ratio of quadratic forms in GLS residuals, using the covariance matrices Ω(ρ*, d1*) and Ω(1,1) as defined on their p. 123. In order to make their treatment as comparable to ours as possible, we consider only the case that d1* = 1; as noted above, their model with d_1 = 1 corresponds to our nonstationary case with u_0 = 0. Then their Ω⁻¹(ρ*,1) = our Ω_N⁻¹(ρ*) and their Ω⁻¹(1,1) = our Ω_N⁻¹(1), where our notation Ω_N⁻¹(·) is defined in equation (8) above. Thus we obtain their statistic in our notation as

(13)  DK_N(ρ*) = ũ_N(ρ*)′Ω_N⁻¹(ρ*)ũ_N(ρ*) / ũ′Ω_N⁻¹(1)ũ,

where ũ_N(ρ*) is the vector of GLS residuals based on Ω_N(ρ*) and ũ is the vector of BSP residuals based on detrending at ρ = 1. The statistic DK_S(ρ*) is defined analogously, with Ω_S in place of Ω_N. We may note that the matrix Ω_S⁻¹(1) is singular, but nevertheless well defined. In fact, the denominator of DK_S is exactly the same as the denominator of DK_N, because the only difference between Ω_S⁻¹(1) and Ω_N⁻¹(1) is in their (1,1) elements, and ũ_1 = 0.

3. DISTRIBUTION THEORY

In the previous section we considered three tests [ρ̂_N(ρ*), τ̂_N(ρ*) and DK_N(ρ*)] designed to be powerful against nonstationary AR(1) alternatives, and three tests [ρ̂_S(ρ*), τ̂_S(ρ*) and DK_S(ρ*)] designed to be powerful against stationary alternatives. In this section we discuss their distributional properties under the unit root null and under stationary and nonstationary alternatives.

The above six test statistics are all based on GLS residuals from the regression of y_t on [1,t], using different covariance matrices. It is easy to show, along the same lines as Schmidt and Phillips (1992, pp. 262-263), that under the null hypothesis of a unit root the residuals û_t, ũ_{S,t}(ρ*) and ũ_{N,t}(ρ*) are independent of the nuisance parameters ψ and ξ, and also of the initial value u_0 in the case that u_0 is fixed. Furthermore all six statistics are independent of the error variance σ², because it scales the numerator and the denominator of each statistic in the same way. Thus, under the null hypothesis, the distributions of the six statistics are independent of ψ, ξ, u_0 and σ². They obviously depend on ρ*
and the sample size T. Under the alternative hypothesis, the distributions of the statistics do not depend on ψ, ξ and σ². They depend on ρ*, T, and the true value of ρ, say ρ1. In the case that u_0 is fixed, they also depend on u_0/σ. All of the statements of the last two paragraphs are true for most common unit root tests, such as the Dickey-Fuller and BSP tests, as well as for the tests discussed in this chapter.

We next consider the asymptotic distributions of our GLS-based tests as T → ∞ with ρ* fixed, under standard assumptions about the errors ε_t. Specifically, we assume the regularity conditions of Phillips and Perron (1988, p. 336), though other similar sets of conditions would yield the same results. Interestingly, the asymptotic distributions of the statistics ρ̂_S(ρ*), τ̂_S(ρ*), ρ̂_N(ρ*) and τ̂_N(ρ*) do not depend on ρ*, for any value of ρ* in the interval [0,1). Specifically, as we prove in Appendix 2, the asymptotic distributions of these statistics for any value of ρ* < 1 are the same as for ρ* = 0. That is, using any value of ρ* < 1, the new tests have asymptotically the same distributions as the Dickey-Fuller test statistics ρ̂_τ and τ̂_τ. From this perspective, there is a discontinuity in the asymptotic distribution theory at ρ* = 1, since choosing ρ* = 1 yields the BSP statistics ρ̃ and τ̃, which do not have the same asymptotic distributions as the Dickey-Fuller statistics.

One important implication of these results is that we can modify the ρ̂_S(ρ*), τ̂_S(ρ*), ρ̂_N(ρ*) and τ̂_N(ρ*) statistics to allow for error autocorrelation in exactly the same ways as are currently done for the Dickey-Fuller tests. We can create augmented versions of these tests along the same lines as in Said and Dickey (1984), by adding lagged values of Δũ_{S,t}(ρ*) or Δũ_{N,t}(ρ*) to the regression that yields the test statistics, where the number of lagged values grows at a suitable rate with sample size.
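To fix ideas, the construction of ρ̂(ρ*) and τ̂(ρ*) — GLS detrending, followed by the Dickey-Fuller type regression (11), optionally augmented with lagged differences — can be sketched in Python as follows. This is our own illustrative sketch, not code from the dissertation; GLS with Ω_S(ρ*) or Ω_N(ρ*) is implemented by the equivalent quasi-difference transform.

```python
import numpy as np

def gls_detrend(y, rho_star, stationary=True):
    """Residuals from the GLS regression of y_t on [1, t], with AR(1)
    error covariance Omega_S(rho_star) or Omega_N(rho_star)."""
    T = len(y)
    X = np.column_stack([np.ones(T), np.arange(1.0, T + 1)])

    def qd(z):  # quasi-difference C z, where C'C = Omega^{-1}(rho_star)
        out = np.empty_like(z)
        out[0] = (np.sqrt(1.0 - rho_star**2) if stationary else 1.0) * z[0]
        out[1:] = z[1:] - rho_star * z[:-1]
        return out

    Xs = np.column_stack([qd(X[:, 0]), qd(X[:, 1])])
    beta = np.linalg.lstsq(Xs, qd(y), rcond=None)[0]
    return y - X @ beta               # residuals in levels, u~_t(rho_star)

def gls_tests(y, rho_star, stationary=True, lags=0):
    """rho-hat = T*phi-hat and tau-hat from the regression
    Delta u~_t = phi u~_{t-1} (+ lagged Delta u~'s for the augmented version)."""
    u = gls_detrend(y, rho_star, stationary)
    du = np.diff(u)
    Z = [u[lags:-1]]                              # u~_{t-1}
    for j in range(1, lags + 1):
        Z.append(du[lags - j:len(du) - j])        # Delta u~_{t-j}
    Z = np.column_stack(Z)
    dep = du[lags:]
    coef = np.linalg.lstsq(Z, dep, rcond=None)[0]
    resid = dep - Z @ coef
    s2 = resid @ resid / (len(dep) - Z.shape[1])
    se_phi = np.sqrt(s2 * np.linalg.inv(Z.T @ Z)[0, 0])
    return len(y) * coef[0], coef[0] / se_phi
```

By construction, setting rho_star = 0 reproduces the Dickey-Fuller statistics from OLS detrending, since the quasi-difference transform is then the identity.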
Alternatively, the corrections of Phillips and Perron (1988), based on consistent estimates of the innovation variance σ² and the long-run variance ω², also lead to an asymptotically valid test.

The asymptotic distributions of the Dufour-King POI tests DK_N(ρ*) and DK_S(ρ*) are derived in Appendix 3. The two statistics have the same asymptotic distribution, which is given by (ω²/σ²)(1−ρ*)² times a functional of Brownian motion. Thus, in contrast to the GLS-based tests just discussed, the asymptotic distribution depends on ρ*. Furthermore, to correct for error autocorrelation we need simply to multiply the statistic by a consistent estimate of (σ²/ω²). This is a correction of the same general type as in Phillips and Perron (1988), but the fact that the statistic is simply scaled by a ratio of nuisance parameters is very similar to the results in Schmidt and Phillips (1992). There is no obvious analogue to the augmented versions of the previous tests. This is a potential disadvantage of the POI tests, since in previous Monte Carlo studies of the Dickey-Fuller and BSP tests, the augmented versions have typically had smaller size distortions than the Phillips-Perron corrected versions.

We repeat that our asymptotics are done as T → ∞ for fixed ρ*. This is standard and perhaps natural, but it is not the only possibility. Elliott, Rothenberg and Stock (1992) consider asymptotics for the same statistics as T → ∞ but with ρ* = 1 − c̄/T, for fixed c̄. Therefore they obtain different asymptotic distributions than we do. In particular, the asymptotic distributions of all of the test statistics then depend on c̄. Furthermore, the corrections that make the statistics asymptotically valid in the presence of error autocorrelation are also different under their type of asymptotics than under ours. Which type of asymptotic analysis leads to tests with better finite sample performance in the presence of error autocorrelation is an important topic for further research.
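As we read equation (13), the POI statistic is a ratio of quadratic forms in GLS residuals. A direct, deliberately unoptimized Python sketch, built from the tridiagonal inverses in equations (8) and (10), might look as follows; the function names and the dense-matrix implementation are our own assumptions, not the dissertation's code.

```python
import numpy as np

def omega_inv(T, rho, stationary):
    """Tridiagonal inverse covariance matrix: equation (8) for the
    nonstationary case, equation (10) for the stationary case."""
    O = (1.0 + rho**2) * np.eye(T)
    idx = np.arange(T - 1)
    O[idx, idx + 1] = -rho
    O[idx + 1, idx] = -rho
    O[T - 1, T - 1] = 1.0
    if stationary:
        O[0, 0] = 1.0
    return O

def dk_statistic(y, rho_star, stationary=False):
    """Dufour-King type statistic: GLS residual quadratic form under rho_star
    divided by the corresponding quadratic form under the unit root (rho = 1)."""
    T = len(y)
    X = np.column_stack([np.ones(T), np.arange(1.0, T + 1)])

    def gls_quadform(Oinv):
        beta = np.linalg.solve(X.T @ Oinv @ X, X.T @ Oinv @ y)
        u = y - X @ beta
        return u @ Oinv @ u

    num = gls_quadform(omega_inv(T, rho_star, stationary))
    den = gls_quadform(omega_inv(T, 1.0, False))   # Omega_N^{-1}(1)
    return num / den
```

At rho_star = 1 (nonstationary variant) the numerator and denominator coincide and the statistic equals one, consistent with the behavior of the tabulated critical values as ρ* → 1.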
Despite our asymptotic results, for values of ρ* close to one we would not expect the critical values for the Dickey-Fuller statistics to be very accurate for our GLS-based tests, for empirically relevant sample sizes. Therefore the finite sample distributions of the above six test statistics will be tabulated by Monte Carlo simulation. Since the distributions of all of the test statistics under the null hypothesis depend only on the two parameters ρ* and T, critical values can be tabulated through simulations using various values of these two parameters. We consider sample sizes T = 25, 50, 100, 200 and 500. We also consider values of ρ* = 0.0, 0.5, 0.7, 0.8, 0.85, 0.9, 0.95, 0.99 and 1.0. The critical values are calculated by direct simulation using 25,000 replications, with random normal deviates generated by the routines GASDEV and RAN3 of Press, Flannery, Teukolsky and Vetterling (1986). Normality does not matter asymptotically, and from previous results for the Dickey-Fuller tests it seems unlikely to matter much here. The critical values are presented in Table 1.

The critical values in Table 1 look pretty much as one would expect. For our GLS-based tests [ρ̂_S(ρ*), τ̂_S(ρ*), ρ̂_N(ρ*) and τ̂_N(ρ*)], for each sample size and critical level, the critical values are monotonically increasing (i.e., monotonically decreasing in absolute value) as ρ* increases from zero to one. This reflects a continuous movement from the Dickey-Fuller critical values toward the BSP critical values as ρ* varies from zero to one. Furthermore, for each ρ* between zero and one, as T → ∞ the critical values should converge to the Dickey-Fuller asymptotic critical values. This convergence is apparent in Table 1, but it is relatively slow for ρ* close to unity. This convergence of critical values as T → ∞ is faster for the ρ̂_S and τ̂_S tests than for the ρ̂_N and τ̂_N tests. For ρ*
in the empirically relevant range between 0.8 and 0.99, use of the finite sample critical values instead of the asymptotic values will make a difference even for rather large sample sizes, such as T = 500.

For the Dufour-King POI tests DK_S and DK_N, for any T the critical values approach one as ρ* → 1. For a given value of ρ*, the critical values are not very sensitive to T, except when ρ* is small. When ρ* is small, the critical values are roughly proportional to sample size, as we would expect from the asymptotic results in Appendix 3. The fact that this is true only for small values of ρ* casts doubt on the accuracy of the asymptotic results for large values of ρ*, for reasonable sample sizes, and suggests that it will be important to use the finite sample critical values.

4. SIMULATION RESULTS

In this section we consider the powers of the six tests described above. We will consider both the nonstationary case in which u_0 is fixed and the stationary case in which u_0 is drawn from the stationary distribution of u_t. As before, let ρ* represent the value of ρ used in GLS detrending, and ρ1 represent the true value of ρ in the DGP. Then the powers of the tests are independent of the parameters ψ, ξ and σ², but they depend on T, ρ* and ρ1. When u_0 is fixed, the powers also depend on u_0/σ. Without loss of generality we set ψ = ξ = 0 and σ² = 1. We consider sample sizes T = 25, 50, 100, 200 and 500; values of ρ* = 0.0, 0.5, 0.7, 0.8, 0.85, 0.9, 0.95, 0.99 and 1.0; and values of ρ1 = 0.0, 0.5, 0.7, 0.8, 0.85, 0.9, 0.95 and 0.99. For experiments in which u_0 is fixed, we consider u_0 = 0, -1, -2, -5 and -10. (Because the distribution of our errors is symmetric, power depends only on |u_0| and we do not need to consider positive values of u_0.) We consider only 5% lower tail tests, and we consider only the case of iid errors ε_t.
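The power calculations for one of the tests can be sketched as a small Monte Carlo loop. The sketch below is ours, uses a token replication count rather than the 25,000 replications of the actual experiments, and the argument `crit` stands in for a finite-sample critical value taken from Table 1.

```python
import numpy as np

def tau_stat(y, rho_star):
    """tau-hat(rho_star), nonstationary variant: GLS-detrend by
    quasi-differencing (first observation left untransformed), then the
    t-statistic for phi in Delta u~_t = phi u~_{t-1} + error."""
    T = len(y)
    X = np.column_stack([np.ones(T), np.arange(1.0, T + 1)])
    qd = lambda z: np.concatenate(([z[0]], z[1:] - rho_star * z[:-1]))
    Xs = np.column_stack([qd(X[:, 0]), qd(X[:, 1])])
    u = y - X @ np.linalg.lstsq(Xs, qd(y), rcond=None)[0]
    du, ul = np.diff(u), u[:-1]
    phi = du @ ul / (ul @ ul)
    s2 = (du - phi * ul) @ (du - phi * ul) / (len(du) - 1)
    return phi / np.sqrt(s2 / (ul @ ul))

def mc_power(T, rho1, rho_star, crit, n_rep=1000, u0=0.0, seed=0):
    """Rejection frequency of the 5% lower-tail tau test against the
    AR(1) alternative rho1, with fixed initial value u0 (psi = xi = 0,
    sigma = 1, as in the experiments)."""
    rng = np.random.default_rng(seed)
    rej = 0
    for _ in range(n_rep):
        e = rng.normal(size=T)
        u, prev = np.empty(T), u0
        for t in range(T):
            prev = rho1 * prev + e[t]
            u[t] = prev
        rej += tau_stat(u, rho_star) < crit
    return rej / n_rep
```

As the tables below illustrate, power computed this way falls as ρ1 approaches the unit root null and rises as ρ1 moves away from it.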
Power is calculated using a Monte Carlo simulation with 25,000 replications, and with normal random deviates generated as described in the previous section. We use the critical values presented in the previous section, so size should be exact apart from randomness: there are no size distortions to correct for, as there would be if we used the asymptotic critical values. We will present our experiments in three sets, according to what is assumed about the initial value u_0.

The first set of experiments corresponds to the case that u_0 is fixed at zero. The results for T = 50, 100 and 200 are given in Tables 2-4; the results for T = 25 and 500 are given in Tables 9 and 10. Since our model with u_0 = 0 corresponds to Dufour and King's model with d_1 = 1, their POI test DK_N using ρ* = ρ1 should have maximum power against the specific alternative hypothesis ρ = ρ*. Our simulation results generally support this expectation. That is, for each pair of values of T and ρ1, the DK_N(ρ*) test with ρ* = ρ1 generally has power at least as high as that of any of the other tests, apart from randomness. Exceptions to this statement are found mainly for small sample sizes (e.g., T = 25, 50), are only marginally larger than could be explained as randomness, and are not substantively significant. The gain to using a POI test can be substantial; for example, for T = 100 and ρ1 = 0.85 (Table 3), compare the power of 0.580 for the POI test to 0.393 for τ̂_τ [i.e., τ̂_N(0) or τ̂_S(0)] and 0.467 for ρ̂_τ [i.e., ρ̂_N(0) or ρ̂_S(0)], or to 0.526 and 0.524 for the BSP tests. Furthermore, these gains in power occur over an encouragingly wide range of the parameter space. For example, the DK_N(.85) test dominates the Dickey-Fuller tests for ρ1 over at least the range from 0.7 to 0.95, and hence arguably over the empirically relevant range of ρ1.

Our GLS-based tests ρ̂_N(ρ*) and τ̂_N(ρ*) are quite similar in performance to the POI test DK_N(ρ*). When ρ*
= ρ1, they are generally slightly less powerful than the POI test. An interesting result is that, for a given value of ρ1, the maximal power of our GLS-based tests is generally obtained at a value of ρ* slightly larger than ρ1. These values of maximal power are comparable to those of the POI test. Finally, since the DGP in this set of experiments is nonstationary, we would expect the nonstationary variants of our tests (DK_N, ρ̂_N and τ̂_N) to be more powerful than their stationary counterparts (DK_S, ρ̂_S and τ̂_S). Our results generally support this expectation, though the differences in power are not large.

The second set of experiments considers a fixed nonzero initial value u_0. Since this does not correspond to Dufour and King's setup, neither DK_S nor DK_N is a point optimal test in these experiments, though they may be approximately point optimal when u_0 is close to 0. Table 5 presents results for T = 100, ρ1 = 0.85, and u_0 = -1, -2, -5 and -10. Some results for other values of ρ1 are given in Tables 11-14. Only the absolute value of u_0 matters in this experiment because our errors have a symmetric (normal) distribution.

From Schmidt and Lee (1991) and Schmidt and Phillips (1992) it is known that the power functions of the Dickey-Fuller and BSP tests are monotonic in |u_0|, but in opposite directions; small |u_0| favors the BSP tests while large |u_0| favors the Dickey-Fuller τ̂_τ test. Our results in Table 5 show similar results for the tests proposed in this chapter. The power of the ρ̂_S(ρ*), ρ̂_N(ρ*), DK_S(ρ*) and DK_N(ρ*) tests decreases monotonically as |u_0| increases, especially for large values of ρ*. Their power becomes even less than nominal size under some alternatives. However, the power functions of the test statistics τ̂_N(ρ*) and τ̂_S(ρ*) depend on u_0 in more complicated ways. Power tends to increase with |u_0| when ρ* is close to zero and to decrease with |u_0| when ρ* is close to one, reflecting the differing behaviors of the Dickey-Fuller τ̂_τ test (ρ*
= 0) and the BSP τ̃ test (ρ* = 1). When |u_0| is very large, for example u_0 = -10, all the tests have their maximum power at ρ* = 0 and power decreases monotonically as ρ* gets closer to one, so that the Dickey-Fuller tests τ̂_τ and ρ̂_τ are most powerful. In fact, τ̂_τ dominates all of the other tests in every experiment with u_0 = -10.

The third set of experiments takes u_0 as random and drawn from the stationary distribution of u_t. Our results for T = 50, 100 and 200 are given in Tables 6-8. The DGP for this set of experiments does not match the DGP assumed for Dufour and King's unit root test. Nevertheless, as argued in the previous section, the statistic DK_S(ρ*) should be expected to be most powerful against ρ1 in the neighborhood of ρ*. The results in Tables 6-8 generally support this expectation, although there is not much difference in power between DK_S(ρ*) and DK_N(ρ*). Also, the gain in power from using a POI test instead of the Dickey-Fuller tests is smaller than it was in the first set of experiments. For example, for T = 100 and ρ1 = 0.85, the power of DK_S(.85) is 0.509, compared to 0.411 and 0.468 for τ̂_τ and ρ̂_τ. Nevertheless, the POI test with a reasonable value of ρ*, such as 0.85, still dominates the Dickey-Fuller tests over much or all of the empirically relevant range of ρ1. As in the previous experiments, our GLS-based tests are similar in performance to the POI test. Interestingly, despite the fact that the DGP for this set of experiments is a stationary AR(1) process, the ρ̂_N and τ̂_N tests are generally more powerful than the ρ̂_S and τ̂_S tests. The reason for this result is not clear. The loss in power from using the ρ̂_N test rather than the DK_S test is generally negligible.

5. CONCLUDING REMARKS

We have proposed new unit root tests based on the residuals from a GLS regression of y_t on [1,t], using a value ρ* ∈ [0,1) against which maximal power is desired.
These tests are constructed in the same way as the Dickey-Fuller tests and the tests of Bhargava (1986) and Schmidt and Phillips (1992), but they are based on detrending by GLS rather than in levels or differences. They are similar in spirit to the point optimal invariant test of Dufour and King (1991). The power of the tests depends on the true value of ρ (ρ1), the value of ρ used in detrending (ρ*), and the sample size (T). In finite samples power also depends on the way in which the initial observation is generated. Our results indicate that, for reasonable values of ρ*, such as ρ* in the range from 0.85 to 0.95, the new tests are more powerful than the Dickey-Fuller tests or the Bhargava-Schmidt-Phillips tests over the empirically relevant range of ρ1. Furthermore, the new tests have power comparable to the power of Dufour-King's point optimal invariant tests. The new tests are perhaps easier to relate to existing tests than the point optimal invariant tests, and they can be modified to allow for error autocorrelation either by augmentation or by applying the corrections of Phillips and Perron (1988). Thus they would appear to be of practical importance.
TABLE 1a-1b: 1%, 5%, AND 10% CRITICAL VALUES OF τ̂_S(ρ*) AND τ̂_N(ρ*)
TABLE 1c-1d: 1%, 5%, AND 10% CRITICAL VALUES OF ρ̂_S(ρ*) AND ρ̂_N(ρ*)
TABLE 1e-1f: 1%, 5%, AND 10% CRITICAL VALUES OF DK_S(ρ*) AND DK_N(ρ*)
(Rows: T = 25, 50, 100, 200, 500; columns: ρ* = 0.0, 0.5, 0.7, 0.8, 0.85, 0.9, 0.95, 1.0.)

TABLE 2: POWER, 5% LOWER TAIL TESTS, T = 50
TABLE 3: SIZE AND POWER, 5% LOWER TAIL TESTS, T = 100
TABLE 4: POWER, 5% LOWER TAIL TESTS, T = 200
TABLE 5: POWER, 5% LOWER TAIL TESTS, T = 100 (u_0 = -1, -2, -5, -10)
TABLE 6: POWER, 5% LOWER TAIL TESTS, T = 50, u_0 DRAWN FROM N(0, 1/(1-ρ1²))
TABLE 7: POWER, 5% LOWER TAIL TESTS, T = 100, u_0 DRAWN FROM N(0, 1/(1-ρ1²))
TABLE 8: POWER, 5% LOWER TAIL TESTS, T = 200, u_0 DRAWN FROM N(0, 1/(1-ρ1²))
TABLE 9: POWER, 5% LOWER TAIL TESTS, T = 25
(Columns: Exp. No., T, ρ1, ρ*, u_0, τ̂_S, τ̂_N, ρ̂_S, ρ̂_N, DK_S, DK_N.)
2A 25 0.9 0.0 0 .060 .060 .064 .064 .064 .064 2A 25 0.9 0.5 0 .064 .065 .063 .063 .063 .063 2A 25 0.9 0.7 0 .063 .063 .063 .064 .064 .064 2A 25 0.9 0.8 0 .066 .066 .066 .066 .068 .066 2A 25 0.9 0.85 0 .064 .067 .065 .066 .067 .065 2A 25 0.9 0.9 0 .066 .065 .066 .065 .065 .065 2A 25 0.9 0.95 0 .065 .065 .064 .065 .060 .062 2A 25 0.9 1.0 0 .062 .062 .062 .062 2B 25 0.85 0.0 0 .071 .071 .081 .081 .081 .081 28 25 0.85 0.5 0 .079 .081 .081 .082 .084 .085 2B 25 0.85 0.7 0 .076 .077 .078 .077 .079 .080 28 25 0.85 0.8 0 .084 .084 .083 .084 .085 .082 2B 25 0.85 0.85 0 .082 .088 .083 .087 .086 .085 2B 25 0.85 0.9 0 .077 .076 .077 .077 .077 .076 28 25 0.85 0.95 0 .083 .082 .083 .083 .077 .079 28 25 0.85 1.0 0 .083 .083 .084 .084 2C 25 0.8 0.0 0 .087 .087 .098 .098 .102 .102 2C 25 0.8 0.5 0 .097 .102 .100 .102 .106 .106 2C 25 0.8 0.7 0 .098 .102 .100 .103 .105 .106 2C 25 0.8 0.8 0 .105 .108 .104 .108 .108 .106 2C 25 0.8 0.85 0 .101 .107 .102 .105 .104 .103 2C 25 0.8 0.9 0 .105 .105 .105 .105 .106 .106 2C 25 0.8 0.95 0 .107 .107 .106 .108 .099 .103 2C 25 0.8 1.0 0 .103 .103 .104 .104 2D 25 0.7 0.0 0 .133 .133 .158 .158 .168 .168 2D 25 0.7 0.5 0 .157 .169 .167 .172 .181 .180 ZD 25 0.7 0.7 0 .164 .174 .168 .174 .176 .179 2D 25 0.7 0.8 0 .182 .188 .183 .188 .189 .186 ZD 25 0.7 0.85 0 .177 .183 .178 .181 .182 .178 2D 25 0.7 0.9 0 .176 .174 .176 .174 .174 .171 2D 25 0.7 0.95 0 .174 .174 .173 .175 .163 .165 2D 25 0.7 1.0 0 .176 .176 .178 .178 28 25 0.5 0.0 0 .328 .328 .386 .386 .409 .409 28 25 0.5 0.5 0 .380 .404 .399 .410 .426 .425 2E 25 0.5 0.7 0 .397 .415 .405 .413 .415 .411 28 25 0.5 0.8 0 .433 .437 .434 .439 .435 .417 28 25 0.5 0.85 0 .418 .419 .421 .416 .416 .398 28 25 0.5 0.9 0 .417 .398 .417 .398 .399 .380 28 25 0.5 0.95 0 .408 .389 .407 .392 .374 .372 28 25 0.5 1.0 0 .371 .371 .374 .374 Exp. No. 
6A 6A 6A 6A 6A 6A 6A 6A 6B 6B 6B 6B 6B 6B 6B 6B T 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 POWER, 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 § O 01 H0000000 U'I 00000~i010 Ui H0000000 U'i 00000QUIO 43 TABLE 10 5% LOWER TAIL TESTS, T = 500 .5 00000000 00000000 .077 .085 .084 .085 .089 .088 .094 .116 .808 .829 .830 .839 .842 .848 .879 .874 .077 .086 .089 .093 .103 .107 .112 .116 .808 .834 .849 .877 .904 .936 .960 .874 .092 .101 .099 .099 .100 .099 .100 .116 .882 .895 .896 .900 .893 .899 .908 .874 .092 .101 .100 .102 .106 .105 .112 .116 .882 .895 .900 .915 .920 .938 .962 .874 DR .098 .107 .103 .107 .106 .105 .107 .909 .919 .921 .931 .927 .931 .942 M. .097 .107 .104 .109 .110 .108 .113 .909 .920 .926 .945 .947 .956 .964 Exp. No. 10 10 10 10 10 10 10 10 10A 10A 10A 10A 10A 10A 10A 10A 108 10B 108 10B 10B 103 103 10B 10C 10C 10C 10C 10C 100 10C 10C 10D 10D 10D 10D 10D 10D 10D 10D POWER, T 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 b .0 I-'i-'I-'i-'I-'I-'I-'I-' 0.95 0.95 0.95 0.95 0.95 0.95 00 OOOOOOO O 00 UIUI 00000000 00 00000000 00 OO 000000 UIUIUIUIUIU'IUIUI 000000 OOOOOOO O 00000000 00000000 44 TABLE 11 5% LOWER TAIL TESTS, T = 100, no = -10 Po Ui 00030QU|0 0| H0000000 H0000000 Oi 01 Oi Oi H0000000 H0000000 Oi Oi 00000\IU'|0 00000~lUiO 00000\)U‘I0 00000~iUi0 H0000000 Ui no “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 “10 .049 .051 .052 .051 .050 .051 .054 .051 .089 .086 .080 .068 .062 .054 .045 .026 .304 .277 .246 .189 .147 .081 .025 .003 .667 .636 .580 .495 .408 .218 .032 .001 .912 .894 .868 .816 .742 .513 .068 .000 .049 .050 .051 .052 .050 .049 .052 .051 .089 .079 .055 .040 .035 .032 .032 .026 .304 .247 .101 .019 .009 .005 .004 .003 .667 .591 .270 .026 .006 .002 .001 .001 .912 .871 
.598 .058 .006 .001 .000 .000 .048 .052 .051 .051 .050 .052 .054 .051 .057 .060 .056 .053 .051 .048 .044 .026 .121 .116 .100 .084 .066 .047 .021 .003 .338 .329 .268 .222 .177 .095 .023 .001 .677 .666 .599 .527 .428 .252 .045 .000 .048 .051 .050 .052 .050 .050 .053 .051 .057 .059 .047 .039 .035 .032 .033 .026 .121 .105 .051 .015 .009 .005 .004 .003 .338 .297 .110 .017 .005 .002 .001 .001 .677 .626 .299 .035 .006 .001 .000 .000 .048 .049 .050 .052 .049 .049 .053 .044 .044 .040 .040 .037 .036 .036 .045 .033 .024 .017 .014 .010 .007 .101 .059 .033 .021 .014 .007 .003 .265 .163 .081 .047 .024 .012 .003 DR. .048 .049 .049 .052 .050 .048 .052 .044 .042 .035 .034 .031 .031 .032 .045 .025 .012 .006 .005 .003 .003 .101 .039 .007 .003 .002 .001 .001 .265 .097 .012 .003 .001 .001 .000 45 TABLE 12 POWER, 5% LOWER TAIL TESTS, T = 100, 05 = “5 “P- '1' P1 P1 I“11 7. '11 P. P11 13". DR- NOO 11A 100 0.95 0.0 “5 .085 .085 .084 .084 .082 .082 11A 100 0.95 0.5 “5 .088 .088 .088 .088 .085 .085 11A 100 0.95 0.7 -5 .087 .087 .086 .085 .084 .081 11A 100 0.95 0.8 -5 .086 .088 .088 .086 .087 .086 11A 100 0.95 0.85 “5 .090 .085 .088 .086 .085 .084 11A 100 0.95 0.9 “5 .088 .083 .088 .083 .084 .081 11A 100 0.95 0.95 -5 .091 .085 .090 .086 .087 .084 .11A 100 0.95 1.0 “5 .079 .079 .079 .079 118 100 0.9 0.0 “5 .216 .216 .199 .199 .170 .170 118 100 0.9 0.5 “5 .213 .212 .200 .197 .162 .153 118 100 0.9 0.7 “5 .218 .198 .195 .179 .157 .132 118 100 0.9 0.8 “5 .210 .156 .193 .148 .152 .120 118 100 0.9 0.85 -5 .208 .130 .189 .130 .144 .112 118 100 0.9 0.9 -5 .192 .110 .180 .111 .135 .102 118 100 0.9 0.95 -5 .169 .106 .164 .107 .129 .102 118 100 0.9 1.0 -5 .098 .098 .097 .097 11C 100 0.85 0.0 “5 .471 .471 .438 .438 .365 .365 11C 100 0.85 0.5 -5 .470 .465 .440 .434 .343 .320 11C 100 0.85 0.7 -5 .470 .423 .426 .380 .320 .242 11C 100 0.85 0.8 “5 .458 .311 .419 .288 .300 .193 11C 100 0.85 0.85 -5 .449 .221 .406 .216 .269 .160 11C 100 0.85 0.9 “5 .415 .157 .378 .157 .236 .132 11C 100 0.85 0.95 “5 .321 .132 
.305 .133 .197 .125 11C 100 0.85 1.0 “5 .113 .113 .112 .112 11D 100 0.8 0.0 “5 .741 .741 .716 .716 .633 .633 11D 100 0.8 0.5 “5 .746 .745 .727 .721 .610 .572 11D 100 0.8 0.7 -5 .746 .708 .709 .655 .560 .414 11D 100 0.8 0.8 -5 .738 .548 .702 .508 .529 .316 11D 100 0.8 0.85 “5 .731 .373 .687 .363 .473 .247 110 100 0.8 0.9 -5 .694 .245 .646 .244 .410 .198 110 100 0.8 0.95 -5 .559 .191 .529 .192 .321 .178 110 100 0.8 1.0 -5 .159 .159 .158 .158 Exp. No. 12A 12A 12A 12A 12A 12A 12A 12A 12B 12B 12B 12B 12B 12B 12B 12B 12C 12C 12C 12C 12C 12C 12C 12C 12D 12D 12D 12D 12D 12D 12D 12D T 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 100 POWER, 0.95 0.95 0.95 0.95 0.95 0.95 00 00 0101 H0000000 H0000000 H0000000 H0000000 00000\1010 01 00000000 00 00000000 00000000 00000000 00000000 5% LOWER TAIL TESTS, T = Po Oi Oi Oi 01 UI 00000\1010 00000\1010 00000\IU'|0 01 UI 46 TABLE 13 «1 .082 .087 .091 .093 .095 .101 .110 .107 .192 .203 .212 .221 .225 .247 .265 .246 .402 .422 .440 .454 .468 .493 .523 .422 .675 .687 .702 .721 .734 .759 .772 .589 it I .082 .089 .098 .110 .104 .102 .113 .107 .192 .210 .240 .264 .253 .254 .258 .246 .402 .434 .485 .521 .500 .472 .471 .422 .675 .700 .747 .775 .751 .706 .676 .589 100, .091 .099 .095 .100 .099 .103 .112 .107 .225 .238 .234 .242 .240 .254 .267 .245 .463 .481 .475 .489 .492 .505 .526 .420 .737 .746 .739 .755 .758 .768 .775 .588 .091 .099 .099 .108 .105 .104 .114 .107 .225 .240 .246 .259 .254 .254 .260 .245 .463 .486 .494 .515 .501 .474 .475 .420 .737 .750 .757 .768 .749 .708 .679 .588 .095 .101 .100 .109 .103 .102 .111 .237 .249 .252 .261 .250 .258 .264 .480 .495 .501 .517 .502 .497 .497 .750 .761 .761 .771 .763 .752 .725 M11 .095 .102 .101 .112 .105 .103 .111 .237 .251 .252 .265 .254 .249 .255 .480 .499 .494 .504 .482 .457 .458 .750 .763 .749 .741 .709 .674 .650 47 TABLE 14 POWER, 5% LOWER TAIL TESTS, T = 100, no = “1 BxP- '1' P1 P11 “0 I. '11 P. P11 DR. Im1. 
[simulated power entries]

APPENDIX 1

In this Appendix, we show that û_(S)t(1) = lim_{ρ*→1} û_(S)t(ρ*) = ũ_t and û_(N)t(1) = lim_{ρ*→1} û_(N)t(ρ*) = ũ_t. Let γ = (ψ, ξ)′ as in equation (1′) of the main text. Let γ̃ be the restricted normal MLE's: ξ̃ = (y_T − y_1)/(T − 1) and ψ̃ = y_1 − ξ̃, so that ũ_t = y_t − z_t γ̃ are the BSP residuals. Similarly let γ̂_S(ρ*) and γ̂_N(ρ*) be the GLS estimates using the covariance matrices Ω_S(ρ*) and Ω_N(ρ*), respectively, so that û_(S)t(ρ*)
= y_t − z_t γ̂_S(ρ*) and û_(N)t(ρ*) = y_t − z_t γ̂_N(ρ*). Then it is sufficient to prove that γ̂_N(1) = lim_{ρ*→1} γ̂_S(ρ*) = γ̃.

The nonstationary case is fairly straightforward. We have

(A1.1) γ̂_N(ρ*) = [Z′Ω_N^{-1}(ρ*)Z]^{-1} Z′Ω_N^{-1}(ρ*)y,

where the elements of Z′Ω_N^{-1}(ρ*)Z and Z′Ω_N^{-1}(ρ*)y are polynomials in B ≡ (1 − ρ*). When ρ* = 1, so that B = 0, this reduces to

(A1.2) γ̂_N(1) = [ (T y_1 − y_T)/(T − 1), (y_T − y_1)/(T − 1) ]′ = (ψ̃, ξ̃)′ = γ̃,

which is exactly the same as the restricted MLE. Therefore û_(N)t(1) = y_t − z_t γ̃ = ũ_t and the GLS-based tests are the same as the BSP tests.

The stationary case is more complicated because Ω_S(ρ*) is singular for ρ* = 1. However, we can evaluate the GLS estimator for ρ* ≠ 1 and take the limit as ρ* → 1. Thus, for ρ* ≠ 1 we have

(A1.3) γ̂_S(ρ*) = [Z′Ω_S^{-1}(ρ*)Z]^{-1} Z′Ω_S^{-1}(ρ*)y,

and taking the limit as ρ* → 1 again yields

lim_{ρ*→1} γ̂_S(ρ*) = [ (T y_1 − y_T)/(T − 1), (y_T − y_1)/(T − 1) ]′ = γ̃,

which is exactly the same as the restricted MLE. Therefore û_(S)t(1) = y_t − z_t γ̃ = ũ_t and the result follows.

APPENDIX 2

In this Appendix we show that the asymptotic distributions of the GLS-based statistics do not depend on ρ*, for any ρ* in the interval [0,1). We will give the proof for the ρ̂_N(ρ*) and τ̂_N(ρ*) tests; the proof for the ρ̂_S(ρ*) and τ̂_S(ρ*) tests is essentially the same. Define the notation

(A2.1) D = [[T, 0], [0, T³]], so that D^{-1/2} = [[T^{-1/2}, 0], [0, T^{-3/2}]].

Our test statistics are functions of the normalized residual series T^{-1/2} û_(N)t(ρ*), and so we consider

(A2.2) T^{-1/2} û_(N)t(ρ*) = T^{-1/2} u_t − T^{-1/2} z_t[γ̂_N(ρ*) − γ]
= T^{-1/2} u_t − z_t(T^{1/2}D^{-1/2}) [D^{-1/2}Z′Ω_N^{-1}(ρ*)ZD^{-1/2}]^{-1} (T^{-1}D^{-1/2}) Z′Ω_N^{-1}(ρ*)u.

Now consider the terms on the right hand side of (A2.2). We have

(A2.3) z_t(T^{1/2}D^{-1/2}) = [1, t/T].

For the term D^{-1/2}Z′Ω_N^{-1}(ρ*)ZD^{-1/2}, note that Z′Ω_N^{-1}(ρ*)Z is as given in the first matrix on the right hand side of equation (A1.1).
Pre- and post-multiplication by D^{-1/2} normalizes the 1,1 element by T^{-1}; the 1,2 element by T^{-2}; and the 2,2 element by T^{-3}. Taking probability limits, the first terms in each sum dominate, and thus

(A2.4) plim [D^{-1/2}Z′Ω_N^{-1}(ρ*)ZD^{-1/2}] = B² [[1, 1/2], [1/2, 1/3]], so that plim [D^{-1/2}Z′Ω_N^{-1}(ρ*)ZD^{-1/2}]^{-1} = B^{-2} [[4, -6], [-6, 12]].

Finally, for the term (T^{-1}D^{-1/2})Z′Ω_N^{-1}(ρ*)u, note that Z′Ω_N^{-1}(ρ*)u is the same as the vector on the right hand side of equation (A1.1), except that u replaces y. Pre-multiplication by T^{-1}D^{-1/2} normalizes the first element by T^{-3/2} and the second element by T^{-5/2}. Again the first terms in each sum dominate, and so

(A2.5) T^{-1}D^{-1/2}Z′Ω_N^{-1}(ρ*)u = B² [ T^{-3/2} Σ_{t=1}^T u_t, T^{-5/2} Σ_{t=1}^T t u_t ]′ + o_p(1).

We now substitute (A2.3), (A2.4) and (A2.5) into (A2.2). Note that the terms involving B = (1 − ρ*) cancel. Doing a little algebra yields

(A2.6) T^{-1/2} û_(N)t(ρ*) = T^{-1/2} u_t − (4T^{-3/2} Σ_t u_t − 6T^{-5/2} Σ_t t u_t) − (t/T)(−6T^{-3/2} Σ_t u_t + 12T^{-5/2} Σ_t t u_t) + o_p(1).

Thus the asymptotic distribution of T^{-1/2} û_(N)t(ρ*) does not depend on ρ*. To be more precise, for any r between zero and one, define [rT] as the nearest lesser integer to rT; let W(r) be the Wiener process; and let ω² be the long-run variance of ε_t = Δu_t. Then standard results applied to (A2.6) imply that

(A2.7) T^{-1/2} û_(N)[rT](ρ*) ⇒ ω W*(r),

where W*(r) = W(r) − (4 − 6r)∫₀¹W(s)ds + (6 − 12r)∫₀¹sW(s)ds is a demeaned and detrended Wiener process, as defined by Park and Phillips (1988). This is exactly the same as the asymptotic distribution of T^{-1/2} û_[rT], where û_t, t = 1,...,T, are the residuals upon which the Dickey-Fuller tests are implicitly based. Thus our GLS-based tests based on any value of ρ* in the interval [0,1) have the same asymptotic distributions as the corresponding Dickey-Fuller tests.

APPENDIX 3

In this Appendix we derive the asymptotic distribution of the Dufour-King POI statistic DK_N(ρ*). The statistic DK_S(ρ*) has the same asymptotic distribution.

Consider first the denominator of the statistic.
We have

(A3.1) ũ′Ω_N^{-1}(1)ũ = ũ_1² + Σ_{t=2}^T (Δũ_t)² = Σ_{t=2}^T (Δũ_t)² (using the fact that ũ_1 = 0) = −2 Σ_{t=2}^T ũ_{t-1}Δũ_t,

where the last equality follows from Lemma 1 of Schmidt and Phillips (1992, p. 281). Schmidt and Phillips show that T^{-1} Σ_{t=2}^T ũ_{t-1}Δũ_t converges in probability to −σ²/2, where σ² is the innovation variance (the variance of ε_t = Δu_t). Therefore

(A3.2) T^{-1} ũ′Ω_N^{-1}(1)ũ → σ².

We next consider the numerator of the statistic. For typographical simplicity we will omit the subscript "N" from the residual vector û(ρ*) and from the individual residuals û_t(ρ*). We have

(A3.3) û(ρ*)′Ω_N^{-1}(ρ*)û(ρ*) = û_T² + (1 + ρ*²) Σ_{t=1}^{T-1} û_t² − 2ρ* Σ_{t=2}^T û_t û_{t-1} = Σ_{t=2}^T [û_t − ρ* û_{t-1}]² + û_1².

Note that

(A3.4) [û_t(ρ*) − ρ* û_{t-1}(ρ*)] = [Δû_t(ρ*) + (1 − ρ*) û_{t-1}(ρ*)],

so that

(A3.5) T^{-2} û(ρ*)′Ω_N^{-1}(ρ*)û(ρ*) ⇒ ω²(1 − ρ*)² ∫₀¹ W*(r)² dr,

where W*(r) is a demeaned and detrended Wiener process and ω² is the long run variance of ε_t, as discussed in Appendix 2. Combining (A3.2) and (A3.5), we obtain the asymptotic distribution of the statistic:

(A3.6) T^{-1} DK_N(ρ*) ⇒ (ω²/σ²)(1 − ρ*)² ∫₀¹ W*(r)² dr.

CHAPTER 3

ALTERNATIVE METHODS OF DETRENDING AND THE POWER OF STATIONARITY TESTS

1. INTRODUCTION

The purpose of this chapter is to provide new tests of the null hypothesis of trend stationarity against the alternative hypothesis of a unit root. These tests are based upon detrending the series by a generalized least squares (GLS) regression, using various values of the moving average root. They are related to the stationarity tests of Kwiatkowski, Phillips, Schmidt and Shin (1992), hereafter KPSS, and Schmidt (1992), and also to the point optimal invariant (POI) tests of King (1980, 1988). Hence, in this chapter they are called GLS-based KPSS tests.

Following KPSS, consider the problem of testing the null hypothesis that an observable time series is stationary around a deterministic trend.
They assume a components representation in which the series under study can be written as the sum of a deterministic trend, a random walk, and a stationary error:

(1) y_t = ξt + r_t + ε_t, r_t = r_{t-1} + u_t, t = 1,...,T,

or

(1′) y_t = r_0 + ξt + Σ_{j=1}^t u_j + ε_t, t = 1,...,T,

where ε_t are iid(0, σ_ε²) errors and u_t are iid(0, σ_u²). Here λ (≡ σ_u²/σ_ε², ≥ 0) is the signal to noise ratio, which measures the ratio of the changes in permanent versus transitory components (Shepard and Harvey (1990)). The initial value r_0 is treated as fixed and plays the role of intercept. The null hypothesis of trend stationarity corresponds to σ_u² = 0 (or λ = 0) and the alternative hypothesis of difference stationarity corresponds to σ_u² > 0 (or λ > 0).

In this context, the one-sided LM test can be derived under the stronger assumption that the ε_t are iid N(0, σ_ε²) and the u_t are iid N(0, σ_u²). Let e_t, t = 1,...,T, be the OLS residuals from the regression of y on intercept and trend. Define σ̂_ε² and S_t to be the estimate of the error variance from this regression and the partial sum process of the residuals, respectively:

(2) σ̂_ε² = T^{-1} Σ_{t=1}^T e_t²,

(3) S_t = Σ_{j=1}^t e_j, t = 1,...,T.

Then the LM statistic is given as follows:

(4) LM = Σ_{t=1}^T S_t² / σ̂_ε².

Since the assumption of iid errors is restrictive and unrealistic, KPSS (1992) consider the asymptotic distribution of the LM statistic under the null hypothesis with weaker assumptions about the errors. See KPSS (1992) for more detailed discussion. Since the numerator normalized by T^{-2} converges to σ² (the long run variance of the error) times a functional of a Brownian bridge, they modify the LM statistic by replacing the estimate of the error variance σ̂_ε² by a consistent estimate of the long run variance. Define the estimated autocovariances γ̂(j) = T^{-1} Σ_{t=j+1}^T e_t e_{t-j}, j = 0,1,...,T−1, and the long run variance estimator s²(ℓ) = γ̂(0) + 2 Σ_{s=1}^ℓ w(s,ℓ) γ̂(s).
Here w(s,ℓ) is an optional weighting function, such as the Bartlett window w(s,ℓ) = 1 − s/(ℓ+1), and ℓ is the number of lags used to estimate σ², satisfying ℓ → ∞ but ℓ/T → 0 as T → ∞. Then the KPSS statistic is

(5) η̂_τ = T^{-2} Σ_{t=1}^T S_t² / s²(ℓ).

Interestingly, the statistic (4) also may arise in the context of testing the hypothesis of a moving average unit root (or overdifferencing) using the ARIMA(0,1,1) parameterization:

(6) Δy_t = ξ + ω_t − θω_{t-1}, t = 1,...,T,

where ω_t are iid(0, σ_ω²) and θ is a parameter which is assumed to be in the range [0,1]. In this model, difference stationarity corresponds to values of θ ∈ [0,1) and trend stationarity is the special case corresponding to θ = 1. The null hypothesis of a moving average unit root, θ = 1, implies overdifferencing in the ARIMA representation, while the alternative hypothesis of an invertible moving average process, θ ∈ [0,1), implies that y_t has an autoregressive unit root. However, we must note that while (1) and (6) are identical under the null of stationarity, they represent different processes under the alternative. Saikkonen and Luukkonen (1992a, b), in this context, derive a statistic (their R statistic) of the same form as (4) as the locally best unbiased invariant (LBUI) test of the moving average unit root hypothesis. Campbell and Mankiw (1987, 1989) also use this parameterization to develop a method of measuring the long term effect of a current shock as a test to discriminate between trend stationary and difference stationary processes.

The relationship between the signal to noise ratio λ and the moving average parameter θ can be found without difficulty as follows:

(7) θ = {(λ + 2) − [λ(λ + 4)]^{1/2}}/2, λ = (1 − θ)²/θ,

(8) σ_ω² = σ_ε²/θ.

Thus λ = 0 corresponds to θ = 1 (stationarity), while λ = ∞ corresponds to θ = 0 (so y is a pure random walk).
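The mapping in (7) and (8) between the signal to noise ratio λ and the moving average root θ is easy to check numerically. The sketch below is our own illustration; the function names are ours, not the dissertation's.

```python
# Sketch of the lambda <-> theta mapping in equations (7)-(8).
# theta_from_lam and lam_from_theta are illustrative names of our own.

def theta_from_lam(lam):
    """Equation (7): theta = {(lam + 2) - [lam(lam + 4)]^(1/2)} / 2."""
    return ((lam + 2.0) - (lam * (lam + 4.0)) ** 0.5) / 2.0

def lam_from_theta(theta):
    """Inverse relation: lam = (1 - theta)^2 / theta, for theta in (0, 1]."""
    return (1.0 - theta) ** 2 / theta

# lam = 0 gives theta = 1 (stationarity); large lam drives theta toward 0.
print(theta_from_lam(0.0))              # 1.0
print(round(theta_from_lam(1.0), 3))    # 0.382, one of the values used in section 4
print(round(lam_from_theta(theta_from_lam(0.01)), 10))  # round trip: 0.01
```

The round trip confirms that (7) and the relation λ = (1 − θ)²/θ are inverses on (0, 1].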
When λ is very small, or equivalently θ is very close to 1, y_t follows a nearly stationary process and standard unit root tests are expected to have low power.

Since the KPSS test is a modification of the LM test, it is therefore based on detrending under the null (λ = 0 or θ = 1). Since the null is stationarity, this is the same type of detrending as in the Dickey-Fuller tests: an OLS regression of the variable y_t on intercept and trend. Another possibility is to detrend as Bhargava (1986) and Schmidt and Phillips (1992) do, using a regression in differences (λ = ∞ or θ = 0). This leads to the residuals (y_t − y_1) − (t − 1)(y_T − y_1)/(T − 1), which will be denoted e_t(0) in the notation of the next section. Recall that in the case of testing the autoregressive unit root hypothesis, the Bhargava-Schmidt-Phillips (hereafter BSP) test detrends under the null, while the Dickey-Fuller tests detrend under the alternative. The result is that BSP tests are more powerful against alternatives close to the null (when power is low), while Dickey-Fuller tests are more powerful against alternatives far from the null (when power is high). See Schmidt and Lee (1991), Schmidt and Phillips (1992), and the previous chapter. In the present context also, by analogy, we might expect the KPSS detrending method to maximize power against alternatives close to the null of stationarity, and this is consistent with the fact that it is the locally best invariant test. Conversely, we might expect the KPSS test based on BSP residuals to give better power against alternatives far from the null.

This is arguably important in the present context. As KPSS's simulations show, as λ = σ_u²/σ_ε² → ∞, the power of the η̂_τ test does not necessarily approach unity. For example, with T = 100, power as λ → ∞ approaches 0.82 for ℓ = 4 and approaches 0.41 for ℓ = 12. Thus there is a clear need to increase power against alternatives far from the null.
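The two detrending schemes contrasted above are straightforward to code. The sketch below is our own illustration (not the dissertation's code): it computes residuals from an OLS regression on intercept and trend, and the BSP residuals (y_t − y_1) − (t − 1)(y_T − y_1)/(T − 1).

```python
# Illustrative comparison of the two detrending schemes discussed above.
# ols_detrend and bsp_residuals are our names for this sketch.

def ols_detrend(y):
    """Residuals from OLS regression of y on intercept and trend (null detrending)."""
    T = len(y)
    t = [i + 1 for i in range(T)]
    tbar, ybar = sum(t) / T, sum(y) / T
    b = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / \
        sum((ti - tbar) ** 2 for ti in t)
    a = ybar - b * tbar
    return [yi - a - b * ti for ti, yi in zip(t, y)]

def bsp_residuals(y):
    """BSP residuals (y_t - y_1) - (t - 1)(y_T - y_1)/(T - 1) (differenced detrending)."""
    T = len(y)
    slope = (y[-1] - y[0]) / (T - 1)
    return [(y[t] - y[0]) - t * slope for t in range(T)]

y = [1.0, 2.5, 2.0, 4.0, 5.5]
e = bsp_residuals(y)
# By construction the first and last BSP residuals are exactly zero.
print(e[0], e[-1])  # 0.0 0.0
```

The zero endpoints of the BSP residuals are the property used in Appendix 3 of Chapter 2 (ũ_1 = ũ_T = 0).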
However, according to Schmidt (1992), the KPSS statistic using BSP residuals does not yield a satisfactory test. This is so for two reasons. First, its asymptotic distribution under the null of stationarity depends on the marginal distribution of ε. Second, the KPSS test based on BSP residuals is not consistent against unit root alternatives.

Another alternative, along the same lines as in the previous chapter, is to construct the KPSS test statistic with GLS residuals from (6), using an assumed value of θ, say θ*, against which maximal power is desired. Let θ₁ denote the actual value of θ in the model (6). Then King's (1980, 1988) most powerful invariant test of the null of θ = 1 against the alternative of a specific value, say θ*, involves GLS regressions with θ = θ* and θ = 1. The power of the POI test depends on θ* as well as θ₁. Since the theory of point optimal testing ensures that the POI test will be at least as powerful as any other invariant test against θ = θ*, we might expect that it is also more powerful against θ in a reasonable neighborhood of θ*.

In the following section, we will derive the GLS-based KPSS test statistic and the point optimal invariant test statistic. In section 3, the asymptotic distributions of these test statistics will be derived under the null and under the alternative hypothesis. In section 4, the finite sample size and power of the tests will be investigated using Monte Carlo simulation. Section 5 concludes.

2. STATIONARITY TESTS: GLS-BASED KPSS TEST AND POI TEST

In this section we provide two tests of the hypothesis of trend stationarity: the GLS-based KPSS test and King's POI test. We assume the DGP:

(9A) y_t = ψ + ξt + X_t,

(9B) X_t = X_{t-1} + ω_t − θω_{t-1}, t = 1,...,T,

where ψ is r_0 in (1) and ω_t are iid N(0, σ_ω²).
The null hypothesis of stationarity corresponds to θ = 1, so that X_t (= ω_t) is an iid process, and the alternative hypothesis of a unit root to be considered in this chapter corresponds to θ ∈ (0,1). Note that X_t can be expressed as a components representation of the form of equation (1): that is, X_t = r_t + ε_t. This components representation and the ARIMA representation in (9) are identical under the null hypothesis. In matrix form,

(9′) y = Zγ + x,

where Z is the T×2 matrix with tth row z_t′ = [1, t], γ′ = [ψ, ξ], and x is a T×1 vector of realizations of the error process. Based on this specification, our GLS-based test and King's POI test are invariant under the transformation y → a₀y + Za₁, where a₀ and a₁ are a scalar and a vector of real constants, respectively. In equation (9B), the initial value ω₀ is assumed to be fixed at zero, which implies that ΔX_t (or Δy_t) follows a (nonstationary) conditional MA(1) process under the alternative hypothesis. (If the initial value ω₀ were assumed to be a random variable having the same distribution as ω_t, ΔX_t would follow a stationary unconditional MA(1) process under the alternative hypothesis.) Thus we have x ~ N(0, σ_ω²Ω_N(θ)), where Ω_N(θ) is defined as follows:

(10) Ω_N(θ) = C(θ)C(θ)′, where x = C(θ)ω.

The POI statistic is

P_τ(θ*) = ẽ(θ*)′Ω_N^{-1}(θ*)ẽ(θ*) / ẽ(1)′Ω_N^{-1}(1)ẽ(1),

where ẽ(θ*) and ẽ(1) are the GLS residual vectors from (9) under the alternative θ = θ* and under the null θ = 1, respectively. Since, as discussed above, ẽ(1) is just the OLS residual vector ẽ and Ω_N(1) is an identity matrix, the denominator of P_τ(θ*) can be expressed simply as ẽ′ẽ. The numerator of P_τ(θ*) also can be expressed as the sum of squares of the OLS residual vector from the transformed regression equation; that is, as ẽ*(θ*)′ẽ*(θ*), where ẽ*(θ*) is the OLS residual vector from the following regression:

(17) ỹ* = Z̃*γ + x̃*,

where ỹ* ≡ C^{-1}(θ*)C(1)y, Z̃* ≡ C^{-1}(θ*)C(1)Z, and x̃* ≡ C^{-1}(θ*)C(1)x.
The OLS residual vector from (17), ẽ*(θ*), is related to the GLS residual vector ẽ(θ*) in the following way:

(18) ẽ(θ*) = C^{-1}(1)C(θ*) ẽ*(θ*).

Since the residuals ẽ_t*(θ*), t = 1,...,T, are asymptotically equivalent to an exponentially weighted average process of x (see equation (A2.8) in Appendix 2), θ* in the numerator can be seen as an optimal weight in the estimation of the permanent component r_t in (1). Also the denominator divided by T is simply an estimate of the variation of the transitory component, and the numerator divided by T is asymptotically equivalent to an estimate of the variation of the permanent component (Muth (1960)).

3. DISTRIBUTION THEORY

In this section we consider the asymptotic distributions, as T → ∞ with θ* fixed, of the GLS-based KPSS test and the POI test. Since they are based on GLS residuals and these residuals can be expressed as functions of the error process X_t, we can analyze the properties of the statistics under the alternative assumptions that X_t is stationary (under H₀) and that it contains a unit root (under H₁). Along the same lines as Schmidt (1992), we make the following simple alternative assumptions. In these assumptions and the rest of the chapter, ⇒ denotes weak convergence, [rT] denotes the integer part of rT, σ² is the long run variance, W(r) is the Wiener process on [0,1], and integrals like ∫₀¹W(r)dr and ∫₀¹rW(r)dr will sometimes be denoted simply as ∫W and ∫rW.

ASSUMPTION A (Stationarity): (i) Equation (9A) holds. (ii) For r ∈ [0,1], the X_t satisfy the invariance principle T^{-1/2} Σ_{j=1}^{[rT]} X_j ⇒ σW(r), with σ > 0. (iii) σ_X² = lim_{T→∞} T^{-1} Σ_{t=1}^T E(X_t²) exists.

ASSUMPTION B (Unit root): (i) Equation (9A) holds. (ii) For r ∈ [0,1], the X_t satisfy the invariance principle T^{-1/2} X_{[rT]} ⇒ σW(r), with σ > 0.

It is important to note that, in Assumptions A and B, X_t is simply the deviation of y_t from deterministic trend, as implied by equation (9A).
However, for the purposes of our asymptotic distribution theory we do not assume equation (9B). Thus the assumption that X_t followed an ARIMA(0,1,1) process was used to derive the test statistics, but we now consider the asymptotic distributions of these statistics under more general assumptions on X_t.

As a preliminary step, we examine the order of probability of two exponentially weighted moving average processes under these assumptions. Let θ_f*(L)X_T and θ_b*(L)X_1 be polynomials in the lag operator L, defined as follows:

(20) θ_f*(L)X_T = Σ_{j=0}^{T-1} θ*^j X_{T-j},

(21) θ_b*(L)X_1 = Σ_{j=0}^{T-1} θ*^{j+1} X_{1+j}.

These two polynomials produce absolutely summable series for any fixed value of θ* ∈ [0,1), and so we claim the following two propositions as T → ∞ with fixed θ*.

LEMMA 1. Under Assumption A (stationarity), (i) θ_f*(L)X_T = O_p(1) and (ii) θ_b*(L)X_1 = O_p(1).

Proof. The results are self-evident from the absolute summability of the series and the stationarity assumption.

LEMMA 2. Under Assumption B (unit root), (i) θ_f*(L)X_T = O_p(T^{1/2}) and (ii) θ_b*(L)X_1 = O_p(1).

Proof. See Appendix 1.

The asymptotic distribution of the GLS-based KPSS test is derived in Appendix 3. We summarize the main asymptotic results as Theorems 1 and 2. We deduce the important conclusions that the asymptotic distribution of the GLS-based KPSS test depends on the marginal distribution of X under the null hypothesis, and that it is not consistent against the alternative hypothesis of a unit root.

THEOREM 1. Denote the weak limit of X_T as T → ∞ by X_∞ and let Assumption A (stationarity) hold. Then for any given θ* ∈ [0,1), T^{-1}η̂_τ(θ*) converges to a limit which depends on the marginal distribution of X; in particular, the limit contains the term

(22) [(1 − θ*)²/3] { [θ_f*(L)X_∞]² + [θ_f*(L)X_∞][θ_b*(L)X_∞] + [θ_b*(L)X_∞]² }.

Thus η̂_τ(θ*) is O_p(T).

THEOREM 2. Let Assumption B (unit root) hold. Then for any given θ* ∈ [0,1),

(26) T^{-1} η̂_τ(θ*) ⇒ ∫₀¹ [∫₀^r V(s)ds]² dr / ∫₀¹ V(s)² ds,

where V(r) = W(r) − rW(1) is the Brownian bridge. Thus η̂_τ(θ*) is O_p(T).

Note that the polynomials θ_f*(L) and θ_b*(L) in equations (23) and below (also in Appendix 3) should be interpreted as Σ_{j=1}^∞ θ*^{j-1}L^{j-1} and Σ_{j=1}^∞ θ*^j L^{j-1}, respectively.
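The exponentially weighted sums in (20) and (21) can be written directly. The sketch below is our own illustration, with the indexing as we read those equations; it checks that both weighted sums stay bounded because the geometric weights are absolutely summable, which is the property used in Lemmas 1 and 2.

```python
# Sketch of the forward and backward exponentially weighted sums in (20)-(21).
# ewma_forward weights the series from the end (X_T, X_{T-1}, ...);
# ewma_backward weights it from the beginning (X_1, X_2, ...).

def ewma_forward(x, theta):
    """theta_f(L) X_T = sum_{j=0}^{T-1} theta^j X_{T-j}."""
    T = len(x)
    return sum(theta ** j * x[T - 1 - j] for j in range(T))

def ewma_backward(x, theta):
    """theta_b(L) X_1 = sum_{j=0}^{T-1} theta^{j+1} X_{1+j}."""
    return sum(theta ** (j + 1) * x[j] for j in range(len(x)))

# With x identically 1, the sums converge to geometric limits as T grows,
# illustrating the absolute summability behind Lemmas 1 and 2.
x = [1.0] * 1000
theta = 0.5
print(round(ewma_forward(x, theta), 6))   # ~ 1/(1 - theta) = 2.0
print(round(ewma_backward(x, theta), 6))  # ~ theta/(1 - theta) = 1.0
```

Under a unit root the forward sum is dominated by the O_p(T^{1/2}) level of the last observations, which is the content of Lemma 2(i).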
Theorems 1 and 2 apply to the GLS-based KPSS tests for the case that ℓ = 0, where ℓ is the number of covariance terms used in estimation of the long run variance. The analysis of the case that ℓ → ∞ but ℓ/T → 0 is more complicated. However, following the same lines as Schmidt (1992), it is possible to show that in this case η̂_τ(θ*) is O_p(T/ℓ) under both Assumption A (stationarity) and Assumption B (unit root). Thus the test is inconsistent in the case that ℓ → ∞, ℓ/T → 0, as well as in the case that ℓ = 0.

Theorem 1 implies that the asymptotic distribution of the GLS-based KPSS test depends on the marginal distribution of X as well as on θ* ∈ [0,1). The basic problem here is that, even though the X_t process is stationary and ergodic, ẽ_t(θ*) is non-ergodic, and the usual central limit theorems do not apply, because terms involving θ_f*(L)X_T and θ_b*(L)X_1 do not average away. Furthermore, while ẽ_t(θ*) = O_p(1) from Lemma 1, its cumulation is O_p(T) rather than O_p(T^{1/2}). These are strong arguments against statistics, like η̂_τ(θ*), that depend on ẽ_t(θ*): such statistics have a limiting distribution which depends on the distribution of the data, and they do not yield a consistent test.

Theorem 2 shows that under Assumption B, for any θ* less than unity, the GLS-based KPSS test η̂_τ(θ*) has the same asymptotic distribution as the KPSS test based on BSP residuals, that is, θ* = 0. Recall that for θ* = 1 and under Assumption B,

(33) T^{-1/2} ẽ_{[rT]}(1) ⇒ σW*(r),

where W*(r) = W(r) + (6r − 4)∫W + (6 − 12r)∫rW is the demeaned and detrended Wiener process as in KPSS (1992). So we find that there is a discontinuity in the asymptotic distribution at θ* = 1, as there was in the previous chapter at ρ* = 1. Finally, we note that we get the same asymptotic results as in Schmidt (1992) when θ* = 0 is used in Theorems 1 and 2.

The asymptotic distribution of the POI statistic is derived in Appendix 4, based on the limiting distribution of sample autocorrelations as T → ∞ with fixed θ*.
We summarize the main results under each assumption as Theorems 3 and 4.

THEOREM 3. Let $\rho_X(j)$ be the $j$th population autocorrelation coefficient of $X_t$. Then under Assumption A,

(34) $P \equiv \mathrm{plim}\,\hat P_T(\theta_*) = 2(1+\theta_*)^{-1}\bigl[1 - (1-\theta_*)\sum_{j=1}^{\infty}\theta_*^{j-1}\rho_X(j)\bigr]$,

(35) $T^{1/2}[\hat P_T(\theta_*) - P] \Rightarrow N(0, V)$,

where $V$ is given by

(36) $V \equiv [2(1-\theta_*)/(1+\theta_*)]^2 \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\theta_*^{i+j-2}\,W_{ij}$,

and $W_{ij}$ is given by (A4.13) in Appendix 4. Thus $\hat P_T(\theta_*) - P$ is $O_p(T^{-1/2})$.

Theorem 3 implies that the asymptotic null distribution of the POI test depends on $\theta_*$. If the $X_t$ are not iid, it also depends on their covariance structure. Unfortunately, the way in which the asymptotic distribution of the POI test depends on the correlation structure of $X_t$ is complicated, and does not suggest a simple Phillips-Perron type correction that would make the test robust to error autocorrelation. Recent papers by Saikkonen and Luukkonen (1992a, b) and Leybourne and McCabe (1992) suggest parametric corrections for autocorrelation. This would amount to assuming (9B) and also assuming an ARMA(p,q) model for $w_t$, so that $X_t$ is ARMA(p,q+1) with a unit moving average root under the null. The parametric model would be used to whiten $w_t$, and then the POI test would be applied to the whitened data. The finite sample properties of such corrected tests are an important topic for future research.

THEOREM 4. Let Assumption B hold, and let $\hat\gamma(j)$ and $\gamma(j)$ be the $j$th sample and population autocovariances of $\Delta X_t$, respectively. Then

(37) $T\,\hat P_T(\theta_*) \Rightarrow \dfrac{\gamma(0) + 2\sum_{j=1}^{\infty}\theta_*^j\,\gamma(j)}{\sigma^2(1-\theta_*^2)\int_0^1 W^*(r)^2\,dr}$.

Thus $\hat P_T(\theta_*)$ is $O_p(T^{-1})$. Hence, comparing Theorem 3 with Theorem 4 shows that the test is consistent.

Recent research by Saikkonen and Luukkonen (1992b) derives the asymptotic distribution of the POI test of level stationarity when $\theta_* = 1 - \bar\delta/T$ with $\bar\delta$ fixed. Hence their asymptotic distribution is quite different from ours, both because they fix $\bar\delta$ instead of $\theta_*$, and because their model is level stationary under the null while ours is trend stationary.
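Equation (34) can be checked numerically: the plim is the variance ratio produced by applying the filter $(1-L)/(1-\theta_* L)$ to $X_t$, and for AR(1) autocorrelations $\rho_X(j) = \phi^j$ the infinite sum in (34) collapses to $\phi/(1-\theta_*\phi)$. A small sketch of this check, with illustrative values of $\theta_*$ and $\phi$:

```python
import numpy as np

theta, phi, N = 0.73, 0.5, 400  # N truncates the geometrically decaying filter

# Coefficients of (1 - L)/(1 - theta*L): c_0 = 1, c_j = -(1 - theta) * theta^(j-1)
c = np.empty(N)
c[0] = 1.0
c[1:] = -(1.0 - theta) * theta ** np.arange(N - 1)

# Direct evaluation of sum_j sum_k c_j c_k rho(|j-k|), with AR(1) rho(h) = phi^h
idx = np.arange(N)
rho = phi ** np.abs(idx[:, None] - idx[None, :])
plim_direct = float(c @ rho @ c)

# Closed form of (34), using sum_j theta^(j-1) * phi^j = phi / (1 - theta*phi)
plim_formula = 2.0 / (1.0 + theta) * (1.0 - (1.0 - theta) * phi / (1.0 - theta * phi))
print(plim_direct, plim_formula)
```

With $\phi = 0$ (iid $X_t$) the formula reduces to $2/(1+\theta_*)$, which is the value quoted below for $\theta_* = 0.73$.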
Since the distributions of both the GLS-based KPSS and the POI test statistics depend only on the assumed value of $\theta$ (that is, $\theta_*$) and the sample size $T$ under the null hypothesis, the finite sample distributions can be tabulated by Monte Carlo simulation. We calculate the critical values of the tests through simulations using various values of these two parameters. For the purpose of comparison with the KPSS results, we consider the sample sizes T = 30, 50, 100, 200, and 500. We also consider the assumed values $\theta_*$ = 1.0, 0.99, 0.969, 0.905, 0.73, 0.382, 0.01, and 0.0001. These correspond to assumed values of the signal to noise ratio $\lambda_*$ = 0.0, 0.0001, 0.001, 0.01, 0.1, 1.0, 100, and 10000. The critical values are calculated by a direct simulation using 25,000 replications, and normal random numbers are generated by the routines GASDEV and RAN3 of Press, Flannery, Teukolsky and Vetterling (1986). These critical values are presented in Table 1.

The critical values in Table 1 reflect the analytical results given above. For our GLS-based KPSS test $\hat\eta_T(\theta_*)$, the critical values for each sample size and critical level are monotonically increasing as $\theta_*$ decreases from one to zero. Also, for a given value of $\theta_*$ and a given critical level, the critical values of the statistic increase in proportion to the sample size T. This reflects the fact that our GLS-based KPSS test $\hat\eta_T(\theta_*)$ is $O_p(T)$ under the null hypothesis, as shown in Theorem 1. For the original KPSS test, which corresponds to our GLS-based test at $\theta_* = 1$, we see a very stable distribution with respect to the sample size under the null hypothesis, as expected.

As for the POI test $\hat P_T(\theta_*)$, its critical values seem to depend on the values of $\theta_*$ and the sample size T in a very complicated way. However, we can see the convergence of the normalized POI test to the normal distribution in Table 2. Table 2 presents percentiles of the distributions of the POI tests at sample size T = 500.
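The correspondence between the listed values of $\theta_*$ and $\lambda_*$ comes from the MA(1) representation of the differenced components model. Assuming the usual invertible-root formula $\theta = [(\lambda+2) - \sqrt{\lambda^2+4\lambda}]/2$, the listed pairs can be reproduced:

```python
import math

def theta_from_lambda(lam):
    # Invertible MA(1) root implied by a signal-to-noise ratio lambda
    return ((lam + 2.0) - math.sqrt(lam * lam + 4.0 * lam)) / 2.0

for lam in (0.0, 0.0001, 0.001, 0.01, 0.1, 1.0, 100.0, 10000.0):
    print(lam, round(theta_from_lambda(lam), 4))
```

The two limiting cases line up with the text: $\lambda = 0$ gives $\theta = 1$ (the original KPSS case), and very large $\lambda$ gives $\theta$ near 0 (the BSP-residual case).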
If $\theta_*$ = .730, for example, we can find that the POI test $\hat P_T(0.73)$ has an approximately normal distribution around the 50% critical value 1.1604, which is the approximate value of plim $\hat P_T(0.73) = 2/1.73 = 1.156$. This corresponds to the result of Theorem 3.

4. SIMULATION RESULTS: SIZE AND POWER OF THE TESTS

In this section we present some limited evidence on the size and power of the $\hat\eta_T(\theta_*)$ and $\hat P_T(\theta_*)$ tests in finite samples. To do so, we perform Monte Carlo experiments which perform the 5% upper tail test for the GLS-based KPSS test with $\ell = 0$ and the 5% lower tail test for the POI test, using the critical values obtained in the above section. The results are generated using the same random number generator as in section 3 and using 25,000 replications in every experiment. Data are generated according to equations (9A) and (9B), with $w_0 = 0$.

We first consider the size of the tests in the presence of iid and AR(1) errors $w_t$. Under the null hypothesis that $\theta = 1$, the distributions of the POI test and the KPSS test do not depend on the nuisance parameters $\psi$, $\xi$ and $\sigma_x$, because the GLS residuals upon which the tests are based do not depend on $\psi$ and $\xi$, and the scale factor $\sigma_x$ appears in numerator and denominator and cancels. However, the null distribution of the GLS-based KPSS test does depend on the marginal distribution of $x_t$, as Theorem 1 shows. We assume that $\psi = \xi = 0$ and $\sigma_v^2 = 1$ in our experiments. As in KPSS (1992), we consider AR(1) errors $w_t = \phi w_{t-1} + v_t$, where $v_t$ are iid N(0,1) and $\phi$ = ±.8, ±.5, ±.2 and 0. Then the relevant parameters in this experiment are the sample size T, the chosen value $\theta_*$ used in detrending, and the AR(1) coefficient $\phi$. We consider $\theta_*$ = 1.0, 0.99, 0.969, 0.905, 0.73, 0.382, 0.01 and 0.0001, and sample sizes T = 30, 50, 100, 200, and 500. Tables 3-7 summarize the simulation results for the size of the tests in terms of T, $\phi$ and $\theta_*$.
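The size experiments just described can be set up as follows. This is an illustrative sketch of the data generating process under the null ($\theta = 1$, so $x_t = w_t$), with our own function names and with the AR(1) started at $w_0 = 0$:

```python
import numpy as np

def generate_null_data(T, phi, rng, psi=0.0, xi=0.0):
    # Null DGP: y_t = psi + xi*t + w_t, with w_t = phi*w_{t-1} + v_t,
    # v_t iid N(0,1), and w_0 = 0
    v = rng.standard_normal(T)
    w = np.empty(T)
    prev = 0.0
    for t in range(T):
        prev = phi * prev + v[t]
        w[t] = prev
    trend = np.arange(1, T + 1, dtype=float)
    return psi + xi * trend + w

y = generate_null_data(200, 0.8, np.random.default_rng(42))
```

Each size entry in Tables 3-7 is then the fraction of 25,000 such samples on which the relevant test rejects at the 5% level.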
The results for the cases of $\phi$ = -.5 and -.8 are not tabulated, because all the numbers are very close to zero, except for the GLS-based KPSS test with very small values of $\lambda_*$.

Under the null hypothesis of $\theta = 1$, the AR coefficient $\phi$ conveniently measures the distance of the null hypothesis from the alternative. When $\phi = 0$, so that the $x_t$ are iid errors, the tests have size equal to their nominal level of 5% (the first block of each Table). When $\phi$ = .8, an overrejection problem can be predicted, because $x_t$ approaches a pure random walk process as $\phi \to 1$. For the KPSS ($\theta_* = 1$) and POI tests, the results in the Tables correspond to our expectations. For a given T and $\theta_*$, we have severe overrejection as $\phi \to 1$ and underrejection as $\phi \to -1$. For a given $\theta_*$ and $\phi > 0$, we have more rejections as T increases, and for $\phi < 0$ we have fewer rejections as T increases. Given T and $\phi$, as $\theta_* \to 0$ the POI test shows more severe overrejections for positive $\phi$ and less severe underrejections for negative $\phi$ (but with very little difference for the negative values of $\phi$). As for the GLS-based KPSS test ($\theta_*$ less than unity), size depends upon T, $\phi$ and $\theta_*$ in a very complicated way. When $\theta_*$ is closer to unity, it suffers from more overrejection as T increases and $\phi \to 1$; when $\theta_*$ is closer to 0, it shows underrejection even for $\phi$ = .8, especially as T increases (see Table 7A, $\theta_*$ = 0.01 and 0.0001).

Next we consider the power of the tests in the presence of iid errors. The relevant parameters are the sample size T, the chosen value $\theta_*$ and the actual value $\theta_1$. The main point in this chapter is to compare the power of the POI test and the GLS-based KPSS test (including the KPSS test) with different possible values of $\theta_*$ under the alternative hypothesis of different values of $\theta_1$. (As before, $\theta_1$ represents the actual value of $\theta$ in the DGP, while $\theta_*$ is the value of $\theta$ chosen to construct the test.)
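Given a test statistic and a critical value, the tabulated sizes and powers are simply Monte Carlo rejection frequencies. A minimal helper (our own naming) illustrating the two rejection rules used here, upper tail for $\hat\eta_T(\theta_*)$ and lower tail for $\hat P_T(\theta_*)$:

```python
import numpy as np

def rejection_rate(stats, critical_value, tail="upper"):
    # Fraction of Monte Carlo replications on which the test rejects:
    # 'upper' for the KPSS-type tests, 'lower' for the POI test
    s = np.asarray(stats, dtype=float)
    if tail == "upper":
        return float(np.mean(s > critical_value))
    return float(np.mean(s < critical_value))
```

Under the null with iid errors the returned frequency should be close to the nominal 5% when the simulated critical values are used; under the alternative it estimates power.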
More specifically, we perform the experiments with the following values of the relevant parameters: $\theta_1$ = 0.99, 0.969, 0.905, 0.73, 0.382, 0.01, 0.0001; $\theta_*$ = 1.0, 0.99, 0.969, 0.905, 0.73, 0.382, 0.01, 0.0001; and sample sizes T = 30, 50, 100, 200, and 500. The simulation results are summarized in Tables 8-12.

As expected, power increases for the KPSS test $\hat\eta_T(1)$ and the POI tests as T increases and as $\theta_1$ decreases. We expect that the POI test $\hat P_T(\theta_*)$ should have the maximum power against a specific alternative hypothesis. Our simulation results support this expectation. We can see that the $\hat P_T(\theta_*)$ test with $\theta_* = \theta_1$ generally has higher power than any other tests within each experiment block (value of T and $\theta_1$), and this pattern is quite clear except for a few values of $\theta_1$ near the null. The gain to using a POI test can be substantial: for example, for T = 30 and $\theta_1$ = 0.382, compare the power of 0.837 for the POI test $\hat P_T(0.382)$ to 0.720 for the KPSS test $\hat\eta_T(1)$ in experiment 8E in Table 8. In addition, the gain from using the POI test is quite robust to the choice of the assumed value of $\theta$, that is, $\theta_*$. In particular, $\hat P_T(.730)$ generally dominates the KPSS test except when the power is very low.

As for the GLS-based KPSS tests $\hat\eta_T(\theta_*)$, the KPSS test $\hat\eta_T(1)$ dominates all of the GLS-based KPSS tests with $\theta_*$ less than unity, at all values of $\theta_1$, apart from small differences due to randomness. While the power of the tests with values of $\theta_*$ close to unity improves as $\theta_1$ decreases and as T increases, the power of the tests with small values of $\theta_*$ does not improve much as $\theta_1$ decreases or as T increases. These simulation results reflect the fact that $\hat\eta_T(\theta_*)$ is not consistent for $\theta_* \in [0,1)$.

5. CONCLUDING REMARKS

By analogy to the previous chapter, we have proposed GLS-based tests and POI tests in the context of testing the null of stationarity against the alternative of a unit root.
These tests are based on the residuals from a GLS regression of $y_t$ on [1, t], with the covariance matrix $\Omega_u(\theta_*)$ using a chosen value $\theta_* \in [0,1)$ of the moving average parameter against which maximal power is desired. Our GLS-based KPSS test statistic $\hat\eta_T(\theta_*)$ includes the KPSS test and the KPSS test based on BSP residuals as special cases, corresponding to $\theta_* = 1$ and $\theta_* = 0$. For $\theta_* \in (0,1)$, its asymptotic behavior resembles that of the latter rather than that of the former: its asymptotic distribution depends on the marginal distribution of $x$, and the test is not consistent, in the sense that it has the same order in probability under the null and under the alternative hypotheses. Our simulation results show that the GLS-based KPSS tests have low power. In sum, the GLS-based KPSS test seems to be a failure.

As for the POI test, we expect that it will be at least as powerful as any other invariant test against $\theta_1 = \theta_*$, and also might be more powerful against $\theta_1$ in a reasonable neighborhood of $\theta_*$. This expectation is also supported by our Monte Carlo experiments. However, the POI test depends on the assumption of iid errors and should not be used in the presence of more general stationary errors. So more research is needed to develop an autocorrelation-robust version of the POI test, either in a parametric fashion as in Saikkonen and Luukkonen (1992a, b) or in a nonparametric fashion as in Phillips and Perron (1988).

TABLE 1a. 90%, 95%, 97.5%, AND 99% CRITICAL VALUES OF $\hat\eta_T(\theta_*)$

T % $\theta_*$=1.0 .990
30 90 .122 .122
30 95 .148 .150
30 97.5 .174 .177
30 99 .209 .211
50 90 .121 .122
50 95 .148 .151
50 97.5 .174 .180
50 99 .210 .215
100 90 .119 .127
100 95 .149 .157
100 97.5 .178 .188
100 99 .213 .233
200 90 .118 .161
200 95 .147 .208
200 97.5 .176 .256
200 99 .218 .318
500 90 .119 .574
500 95 .147 .793
500 97.5 .176 1.020
500 99 .215 1.302

$\theta_*$ = .969: .130 .160 .189 .227 .145 .180 .217 .269 .258 .345 .432 .548 .853 .203 .548 .979 .188 .550 .935 .879
.905 .255 .329 .403 .495 .575 .771 .959 1.195 .725 .402 .042 .091 WWNH .115 .618 .108 .133 oumk 10.80 14.96 19.29 24.57 .730 wNNH M9UN 9. 12 12. 16. 19. 24. 30. 41. 51. 61. .562 .054 .529 .083 .904 .814 .585 .644 .102 .081 880 .00 29 07 58 23 83 28 08 88 .382 .196 .136 .861 .713 O‘U‘lUlL‘ 6.768 8.267 9.470 10.82 13.58 16.79 19.36 21.93 26.66 33.15 38.36 43.66 65.37 81.52 94.65 107.4 .010 6.437 7.439 8.191 8.875 10.45 12.09 13.31 14.48 20.16 23.65 26.14 28.60 40.22 47.05 52.25 56.90 99.05 116.1 128.8 141.1 more or 1140.) .0001 6.473 7.477 8.231 8.934 10.44 12.06 13.30 14.50 20.37 23.63 26.16 28.60 40.29 47.21 52.40 56.80 100.6 116.7 128.4 140.4 1%, 2.5%, 5%, T % fle-l 30 CU‘LDH 50 OUlUIH 100 1 2.5 10 200 1 2.5 10 500 1 2.5 10 P‘P‘F‘h‘ F‘h‘h‘h‘ P‘P‘P‘h‘ P‘F‘F‘P‘ P‘F‘P‘h‘ .990 .0094 .0095 .0096 .0097 .0090 .0091 .0093 .0094 .0079 .0083 .0086 .0089 .0060 .0067 .0072 .0078 .0026 .0037 .0046 .0055 P‘F‘F‘F‘ P‘h‘h‘h‘ P‘P‘P‘P‘ P‘P‘F‘F‘ P‘F‘P‘F‘ .969 .0256 .0266 .0274 .0282 .0216 .0234 .0247 .0259 .0136 .0167 .0191 .0213 .0058 .0098 .0127 .0158 .0045 .0074 .0100 .0125 80 IHUELE 11> P‘F‘F‘F‘ F‘P‘P‘P‘ P‘F‘P‘h‘ P‘h‘h‘h‘ .905 .0468 .0559 .0625 .0694 .0267 .0384 .0471 .0561 .0144 .0249 .0338 .0432 .0159 .0245 .0314 .0386 .0260 .0309 .0351 .0398 F‘P‘P‘P‘ P‘P‘P‘P‘ P‘F‘P‘P‘ F‘P‘F‘P‘ P‘P‘P‘F‘ AND 10% CRITICAL .730 .0314 .0700 .1001 .1321 .0321 .0620 .0890 .1175 .0536 .0787 .0974 .1170 .0813 .0965 .1095 .1236 .1086 .1176 .1253 .1336 ‘VAJJHBS P‘P‘P‘P‘ P‘h‘h‘h‘ P‘h‘h‘h‘ P‘P‘h‘h‘ P'P’P‘P‘ .382 .0938 .1607 .2245 .2949 .1477 .2101 .2598 .3126 .2275 .2690 .3033 .3414 .2892 .3197 .3427 .3692 .3468 .3647 .3786 .3950 P‘P‘F‘P‘ F‘P‘F‘P‘ P‘P‘P‘P‘ P‘P‘P‘F‘ P'P'P‘P‘ (I? 
1;(a.1 .010 .0001 .2601 1.2659 .3313 1.3795 .4344 1.4375 .6024 1.6123 .3914 1.4076 .4377 1.4979 .5672 1.5323 .6631 1.6349 .5504 1.5633 .6199 1.6332 .6791 1.6936 .7503 1.7630 .6725 1.6810 .7190 1.7340 .7642 1.7300 .3142 1.3311 .7733 1.7936 .3107 1.3232 .3377 1.3569 .3711 1.3336 PERCENTILES OP 0.: 1.0026 1.0037 1.0046 1.0055 1.0064 1.0070 1.0074 1.0077 1.0080 1.0082 1.0085 1.0088 1.0090 1.0091 1.0092 POINT 1.0045 1.0074 1.0100 1.0125 1.0152 1.0169 1.0182 1.0194 1.0204 1.0214 1.0224 1.0237 1.0246 1.0253 1.0260 81 TABLE 2 OPTIIIAL TESTS P,(0.)p T = 500 1.0260 1.0309 1.0351 1.0398 1.0451 1.0485 1.0515 1.0540 1.0563 1.0586 1.0614 1.0649 1.0674 1.0695 1.0717 .730 1.1086 1.1176 1.1253 1.1336 1.1431 1.1496 1.1552 1.1604 1.1653 1.1704 1.1764 1.1842 1.1905 1.1959 1.2018 .382 1.3468 1.3647 1.3786 1.3950 1.4148 1.4290 1.4411 1.4520 1.4626 1.4741 1.4873 1.5055 1.5207 1.5332 1.5474 .010 1.7788 1.8107 1.8377 1.8711 1.9109 1.9391 1.9626 1.9845 2.0066 2.0302 2.0580 2.0963 2.1284 2.1555 2.1854 .0001 1.7986 1.8282 1.8569 1.8886 1.9277 1.9565 1.9800 2.0024 2.0250 2.0495 2.0778 2.1180 2.1513 2.1791 2.2119 82 TABLE 3 SIZE 01' 7.140.) AND P40.) TESTS, T = 30 T 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 3O 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 4' 00000000 -.2 ”.2 -.2 -.2 -.2 -.2 -.2 Q .0 HHHHHHHH HHHHHHHH 14141-11414141—11—1 HHHHHHHH H14h9H14h'H14 0. 1.0 .990 .969 .905 .730 .382 .010 .0001 1.0 .990 .969 .905 .730 .382 .010 .0001 1.0 .990 .969 .905 .730 .382 .010 .0001 1.0 .990 .969 .905 .730 .382 .010 .0001 1.0 .990 .969 .905 .730 .382 .010 .0001 5.10.1 .051 .050 .051 .050 .050 .050 .050 .050 .769 .764 .745 .573 .403 .214 .093 .090 .419 .413 .394 .289 .200 .110 .051 .055 .139 .139 .131 .114 .096 .074 .051 .053 .013 .013 .013 .018 .022 .032 .043 .043 2,10.) 
.034 .047 .050 .050 .050 .050 .050 .713 .756 .792 .881 .954 .972 .970 .351 .402 .442 .552 .717 .770 .765 .105 .129 .143 .175 .219 .257 .247 .007 .011 .013 .009 .005 .005 .005 83 TABLE 4 SIZE OF 7.),(0J AND P,(0.) TESTS, T = 50 .55 #bbhfibfih T 9 a, e. 11,19.) 2,10.) 50 0 1 1.0 .050 - 50 0 1 .990 .051 .040 50 0 1 .969 .051 .050 50 0 1 .905 .050 .050 50 0 1 .730 .050 .050 50 0 1 .382 .050 .050 50 0 1 .010 .050 .050 50 0 1 .0001 .050 .050 50 .8 1 1.0 .879 - 50 .8 1 .990 .873 .857 50 .8 1 .969 .835 .883 50 .8 1 .905 .534 .927 50 .8 1 .730 .360 .988 50 .8 1 .382 .169 .999 50 .8 1 .010 .052 .999 50 .8 1 .0001 .059 .999 50 .5 1 1.0 .479 - 50 .5 1 .990 .473 .438 50 .5 1 .969 .426 .482 50 .5 1 .905 .262 .558 50 .5 1 .730 .189 .770 50 .5 1 .382 .103 .919 50 .5 1 .010 .044 .944 50 .5 1 .0001 .044 .943 50 .2 1 1.0 .149 - 50 .2 1 .990 .148 .125 50 .2 1 .969 .137 .147 50 .2 1 .905 .105 .166 50 .2 1 .730 .091 .235 50 .2 1 .382 .074 .330 50 .2 1 .010 .051 .368 50 .2 1 .0001 .052 .371 50 -.2 1 1.0 .012 - 50 -.2 1 .990 .012 .009 50 -.2 1 .969 .013 .011 50 -.2 1 .905 .019 .011 50 -.2 1 .730 .021 .005 50 -.2 1 .382 .032 .003 50 -.2 1 .010 .044 .002 50 -.2 1 .0001 .046 .002 84 TABLE 5 812: or 1.140.) m 240.) TESTS, T = 100 Exp. T 41 a, 0. 1140.) P40.) Ho. 
5 100 0 l 1.0 .051 - 5 100 0 l .990 .050 .046 5 100 0 1 .969 .050 .050 5 100 0 l .905 .050 .050 5 100 0 l .730 .050 .050 5 100 0 l .332 .050 .050 5 100 0 1 .010 .050 .050 5 100 0 1 .0001 .050 .050 5A 100 .3 1 1.0 .949 - 5A 100 .3 1 .990 .944 .951 5A 100 .3 1 .969 .760 .967 5A 100 .3 1 .905 .433 .996 5A 100 .3 1 .730 .305 1.00 5A 100 .3 l .332 .099 1.00 5A 100 .3 1 .010 .026 1.00 5A 100 .3 l .0001 .023 1.00 53 100 .5 l 1.0 .513 — 53 100 .5 1 .990 .512 .523 53 100 .5 1 .969 .331 .573 53 100 .5 1 .905 .244 .768 53 100 .5 l .730 .168 .964 53 100 .5 l .332 .033 .997 53 100 .5 1 .010 .033 .999 53 100 .5 1 .0001 .037 .999 sc 100 .2 l 1.0 .154 - 50 100 .2 1 .990 .153 .154 50 100 .2 l .969 .119 .163 50 100 .2 l .905 .105 .227 50 100 .2 l .730 .086 .373 5c 100 .2 1 .332 .062 .554 50 100 .2 1 .010 .049 .614 50 100 .2 l .0001 .053 .616 50 100 -.2 l 1.0 .011 - 50 100 -.2 1 .990 .012 .010 50 100 -.2 l .969 .015 .009 50 100 -.2 1 .905 .013 .006 50 100 -.2 1 .730 .021 .002 50 100 -.2 l .332 .032 .000 50 100 -.2 1 .010 .044 .000 50 100 -.2 l .0001 .043 .000 Exp. No. 010500010100 85 TABLE 6 an: or 11,111.) m 2,111.) TESTS, T = 200 T 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 9 00000000 -.2 -.2 -.2 -.2 -.2 -.2 -.2 -.2 Q d P‘HFHF‘HFHP'H F‘HFJP‘HFJF‘H hoH14haH14haH h‘Hb‘h‘Ht‘h’H HPJP‘HFHP*H14 0. 1.0 .990 .969 .905 .730 .382 .010 .0001 1.0 .990 .969 .905 .730 .382 .010 .0001 1.0 .990 .969 .905 .730 .382 .010 .0001 1.0 .990 .969 .905 .730 .382 .010 .0001 1.0 .990 .969 .905 .730 .382 .010 .0001 .050 .050 .050 .050 .050 .050 .050 .050 .976 .932 .579 .448 .259 .068 .011 .012 .563 .453 .264 .234 .169 .075 .032 .030 .167 .138 .108 .109 .093 .064 .045 .047 .010 .013 .014 .018 .022 .033 .044 .047 11,10.) 2,11.) 
.045 .049 .050 .050 .050 .050 .050 .978 .995 1.00 1.00 1.00 1.00 1.00 .561 .688 .947 1.00 1.00 1.00 1.00 .154 .192 .340 .606 .823 .874 .876 .009 .007 .002 .000 .000 .000 .000 I53 \IQQQQQQQ 86 TABLE 7 SIZE OP 7'),(0.) AND P,(0.) TESTS, T = 500 T 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 ‘2 00000000 91 1414141414141-11-0 1414141414141-014 1414141414141414 HHHHHHHH 1-41-4141-41414141-0 0. 1.0 .990 .969 .905 .730 .382 .010 .0001 1.0 .990 .969 .905 .730 .382 .010 .0001 1.0 .990 .969 .905 .730 .382 .010 .0001 1.0 .990 .969 .905 .730 .382 .010 .0001 1.0 .990 .969 .905 .730 .382 .010 .0001 13,10.) .050 .050 .050 .050 .050 .050 .050 .050 .987 .655 .509 .415 .218 .053 .007 .006 .578 .288 .248 .225 .152 .074 .029 .029 .169 .110 .103 .105 .085 .067 .048 .050 .010 .014 .016 .019 .021 .032 .047 .046 2,10.) .048 .050 .050 .050 .050 .050 .050 .997 1.00 1.00 1.00 1.00 1.00 1.00 .670 .933 1.00 1.00 1.00 1.00 1.00 .185 .311 .601 .927 .994 .997 .998 .007 .003 .000 .000 .000 .000 .000 87 TABLE 8 POWER or 13,10.) m 240.) TESTS, T = 30 Exp' T ¢ 01 0. "7‘00, P'(0.) No. 
8C 30 0 .905 1.0 .074 - 8C 30 0 .905 .990 .076 .053 8C 30 0 .905 .969 .071 .068 8C 30 0 .905 .905 .073 .074 8C 30 0 .905 .730 .068 .076 8C 30 0 .905 .382 .060 .068 80 30 0 .905 .010 .051 .065 8C 30 0 .905 .0001 .055 .064 8D 30 0 .730 1.0 .281 - SD 30 0 .730 .990 .275 .231 8D 30 0 .730 .969 .273 .269 8D 30 0 .730 .905 .244 .276 8D 30 0 .730 .730 .199 .287 8D 30 0 .730 .382 .125 .257 8D 30 0 .730 .010 .081 .210 8D 30 0 .730 .0001 .080 .207 BE 30 0 .382 1.0 .720 - 8E 30 0 .382 .990 .715 .668 8B 30 0 .382 .969 .701 .707 BE 30 0 .382 .905 .591 .730 8E 30 0 .382 .730 .480 .790 BE 30 0 .382 .382 .314 .837 BE 30 0 .382 .010 .163 .809 BE 30 0 .382 .0001 .163 .799 8F 30 0 .010 1.0 .881 - 8F 30 0 .010 .990 .877 .845 8F 30 0 .010 .969 .867 .874 8F 30 0 .010 .905 .737 .895 8F 30 0 .010 .730 .592 .948 8F 30 0 .010 .382 .410 .985 8F 30 0 .010 .010 .221 .989 8F 30 0 .010 .0001 .221 .989 8G 30 0 .0001 1.0 .884 - 86 30 0 .0001 .990 .876 .847 86 30 0 .0001 .969 .870 .879 86 30 0 .0001 .905 .741 .897 86 30 0 .0001 .730 .596 .952 86 30 0 .0001 .382 .407 .984 86 30 0 .0001 .010 .220 .990 86 30 0 .0001 .0001 .219 .990 Exp. No. 9C 9C 9C 9C 9C 9C 9C 9C 9D 9D 9D 9D 9D 9D 9D 9D 9E 9E 9E 9E 9E 9E 9E 9E 9F 9F 9F 9F 9F 9F 9F 9F 96 9G 96 9G 9G 9G 9G 9G 88 TABLE 9 PONER OP 540.) AND P40.) TESTS, T = 50 T 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 50 1? 00000000 00000000 00000000 00000000 00000000 91 .905 .905 .905 .905 .905 .905 .905 .905 .730 .730 .730 .730 .730 .730 .730 .730 .382 .382 .382 .382 .382 .382 .382 .382 .010 .010 .010 .010 .010 .010 .010 .010 .0001 .0001 .0001 .0001 .0001 .0001 .0001 .0001 9. 1.0 .990 .969 .905 .730 .382 .010 .0001 1.0 .990 .969 .905 .730 .382 .010 .0001 1.0 .990 .969 .905 .730 .382 .010 .0001 1.0 .990 .969 .905 .730 .382 .010 .0001 1.0 .990 .969 .905 .730 .382 .010 .0001 5,10.) 
.129 .126 .124 .110 .088 .069 .057 .058 .540 .538 .523 .418 .316 .175 .098 .099 .911 .908 .883 .703 .568 .368 .190 .192 .973 .970 .959 .775 .639 .434 .236 .234 .972 .971 .959 .782 .637 .439 .236 .238 2,10.) .108 .125 .126 .117 .095 .079 .083 .508 .547 .570 .590 .552 .440 .437 .897 .909 .936 .973 .983 .977 .979 .965 .973 .985 .998 1.00 1.00 1.00 .966 .974 .985 .999 1.00 1.00 1.00 89 TABLE 10 POIER, OP 7'),(0.) AND P'U.) TESTS, T = 100 up. 2 0 0, 0. 11,10.) 2,10.) NO. 10A 100 0 .990 1.0 .052 - 10A 100 O .990 .990 .053 .048 10A 100 0 .990 .969 .053 .054 10A 100 0 .990 .905 .050 .051 10A 100 0 .990 .730 .051 .052 10A 100 0 .990 .382 .048 .050 10A 100 0 .990 .010 .050 .049 10A 100 0 .990 .0001 .052 .048 103 100 0 .969 1.0 .077 - 103 100 0 .969 .990 .083 .076 103 100 0 .969 .969 .077 .080 103 100 0 .969 .905 .069 .082 103 100 O .969 .730 .059 .071 103 100 0 .969 .382 .051 .059 103 100 0 .969 .010 .051 .057 103 100 0 .969 .0001 .054 .058 10C 100 0 .905 1.0 .342 - 10C 100 0 .905 .990 .347 .339 10C 100 O .905 .969 .300 .351 10C 100 O .905 .905 .239 .366 10C 100 0 .905 .730 .140 .318 10C 100 0 .905 .382 .080 .231 10C 100 0 .905 .010 .064 .164 10C 100 0 .905 .0001 .069 .169 100 100 O .730 1.0 .877 - 10D 100 0 .730 .990 .872 .875 10D 100 O .730 .969 .771 .890 10D 100 0 .730 .905 .625 .930 100 100 0 .730 .730 .449 .950 100 100 0 .730 .382 .245 .927 10D 100 O .730 .010 .133 .872 10D 100 0 .730 .0001 .130 .865 10E 100 0 .382 1.0 .993 - 10E 100 0 .382 .990 .992 .993 10E 100 O .382 .969 .956 .994 103 100 0 .382 .905 .787 .999 103 100 0 .382 .730 .631 1.00 10E 100 0 .382 .382 .404 1.00 10E 100 0 .382 .010 .217 1.00 10E 100 0 .382 .0001 .220 1.00 Exp. No. 11A 11A 11A 11A 11A 11A 11A 11A 113 113 113 113 113 113 113 113 11C 11C 11C 11C 11C 11C 11C 11C 110 110 110 113 11D 11D 11D 11D 11E 11E 11E 11E llE 11E llE 11E 90 TABLE 11 PONER 01' :140.) AND P,(0.) 
TESTS, T = 200 T 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 ¢ 00000000 00000000 00000000 00000000 00000000 91 .990 .990 .990 .990 .990 .990 .990 .990 .969 .969 .969 .969 .969 .969 .969 .969 .905 .905 .905 .905 .905 .905 .905 .905 .730 .730 .730 .730 .730 .730 .730 .730 .382 .382 .382 .382 .382 .382 .382 .382 9. 1.0 .990 .969 .905 .730 .382 .010 .0001 1.0 .990 .969 .905 .730 .382 .010 .0001 1.0 .990 .969 .905 .730 .382 .010 .0001 1.0 .990 .969 .905 .730 .382 .010 .0001 1.0 .990 .969 .905 .730 .382 .010 .0001 11.10.) .066 .060 .059 .055 .055 .049 .046 .048 .194 .178 .147 .107 .073 .056 .051 .055 .735 .683 .534 .418 .240 .112 .073 .075 .990 .981 .872 .742 .553 .318 .165 .165 1.00 1.00 .955 .838 .665 .429 .238 .233 2,10.) .055 .057 ".061 .057 .054 .053 .053 .181 .183 .168 .123 .096 .076 .079 .722 .759 .798 .738 .602 .456 .456 .991 .996 .999 1.00 1.00 .998 .998 1.00 1.00 1.00 1.00 1.00 1.00 1.00 Exp. No. 12A 12A 12A 12A 12A 12A 12A 12A 123 123 123 123 123 123 123 12B 12C 12C 12C 12C 12C 12C 12C 12C 12D 12D 12D 120 12D 12D 12D 12D 12E 12E 12E 12E 12E 12E 123 12E 91 TABLE 12 POWER or 13,10.) AND 2,10.) TESTS, T = 500 T 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 500 41 00000000 00000000 00000000 00000000 00000000 91 .990 .990 .990 .990 .990 .990 .990 .990 .969 .969 .969 .969 .969 .969 .969 .969 .905 .905 .905 .905 .905 .905 .905 .905 .730 .730 .730 .730 .730 .730 .730 .730 .382 .382 .382 .382 .382 .382 .382 .382 0. 1.0 .990 .969 .905 .730 .382 .010 .0001 1.0 .990 .969 .905 .730 .382 .010 .0001 1.0 .990 .969 .905 .730 .382 .010 .0001 1.0 .990 .969 .905 .730 .382 .010 .0001 1.0 .990 .969 .905 .730 .382 .010 .0001 11,10.) 2,10.) 
.141 .116 .093 .070 .051 .051 .052 .052 .615 .465 .369 .221 .107 .068 .058 .058 .983 .889 .772 .614 .375 .191 .103 .102 1.00 .995 .947 .824 .624 .391 .215 .203 1.00 .999 .972 .869 .671 .451 .253 .252 .133 .127 .102 .076 .062 .054 .058 .626 .669 .603 .454 .289 .190 .192 .989 .997 .999 .997 .986 .953 .953 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00

APPENDIX 1

LEMMA 2. Under Assumption B (unit root), i) $\theta_A^*(L)X_T = O_p(T^{1/2})$ and ii) $\theta_B^*(L)X_T = O_p(1)$.

Proof. The polynomial $\theta_A^*(L)X_T$ can be expressed equivalently as (A1.2), through a well known polynomial decomposition which decomposes a linear filter into long run and transitory elements:

(A1.1) $\theta_A^*(L) = \theta_A^*(1) - (1-L)\tilde\theta_A^*(L)$,

where $\tilde\theta_A^*(L) = \sum_{j=0}^{T-2} c_j L^j$ and $c_j = \sum_{i=j+1}^{T-1}\theta_*^i = (\theta_*^{j+1} - \theta_*^T)/(1-\theta_*)$. Thus we have

(A1.2) $\theta_A^*(L)X_T = \theta_A^*(1)X_T - \tilde\theta_A^*(L)\Delta X_T$.

The first term equals $[(1-\theta_*^T)/(1-\theta_*)]X_T$, because $\theta_A^*(1) = \sum_{i=0}^{T-1}\theta_*^i = (1-\theta_*^T)/(1-\theta_*)$. For the second term, a little algebra shows that

(A1.3) $\tilde\theta_A^*(L)\Delta X_T = [\theta_*/(1-\theta_*)][\Delta X_T + \theta_*\Delta X_{T-1} + \cdots + \theta_*^{T-2}\Delta X_2 + \theta_*^{T-1}X_1] - [\theta_*^T/(1-\theta_*)]X_T = [\theta_*/(1-\theta_*)]\theta_A^*(L)\Delta X_T - [\theta_*^T/(1-\theta_*)]X_T$,

where by convention $\Delta X_1 = X_1$. Substituting $\theta_A^*(1)X_T$ and (A1.3) into (A1.2) yields

(A1.4) $\theta_A^*(L)X_T = [1/(1-\theta_*)]X_T - [\theta_*/(1-\theta_*)]\theta_A^*(L)\Delta X_T$.

Under Assumption B, that is, when $X_t$ has a unit root, $X_T$ is $O_p(T^{1/2})$, while $\Delta X_t$ and its absolutely summable series $\theta_A^*(L)\Delta X_T$ are $O_p(1)$, so the result follows.

Similarly, $\theta_B^*(L)X_T$ can be decomposed as follows:

(A1.5) $\theta_B^*(L)X_T = \theta_B^*(1)X_T - \tilde\theta_B^*(L)\Delta X_T$,

where $\tilde\theta_B^*(L) = \sum_{j=0}^{T-2} d_j L^j$ and $d_j = \sum_{i=0}^{T-2-j}\theta_*^i = (1-\theta_*^{T-1-j})/(1-\theta_*)$. Since $\theta_B^*(1) = \theta_A^*(1) = \sum_{i=0}^{T-1}\theta_*^i = (1-\theta_*^T)/(1-\theta_*)$, the first term equals $[(1-\theta_*^T)/(1-\theta_*)]X_T$. The second term can be written as

(A1.6) $\tilde\theta_B^*(L)\Delta X_T = [1/(1-\theta_*)]\{X_T - [\theta_*^{T-1}\Delta X_T + \cdots + \theta_*\Delta X_2 + X_1]\} = [1/(1-\theta_*)]X_T - [1/(1-\theta_*)]\theta_B^*(L)\Delta X_T$.

Substituting $\theta_B^*(1)X_T$ and (A1.6) into (A1.5) gives

(A1.7) $\theta_B^*(L)X_T = -[\theta_*^T/(1-\theta_*)]X_T + [1/(1-\theta_*)]\theta_B^*(L)\Delta X_T$.

Since $\lim_{T\to\infty}\theta_*^T = 0$, the first term is $o_p(1)$, and $\Delta X_t$ and its absolutely summable series $\theta_B^*(L)\Delta X_T$ are $O_p(1)$ under Assumption B.
Hence, the result follows.

APPENDIX 2

In this Appendix, we derive the GLS estimates and residuals which will be used for constructing the GLS-based KPSS test, $\hat\eta_T(\theta_*)$, and the POI test, $\hat P_T(\theta_*)$. We show that $\hat\eta_T(1)$ is the KPSS test statistic based on the OLS residuals and $\hat\eta_T(0)$ is the KPSS test based on BSP residuals. Let $\gamma' = [\psi, \xi]$ as in equation (9') of the main text, and define $\hat\gamma$ and $\tilde\gamma$ to be the OLS and BSP estimates, respectively. Let $\hat\gamma(\theta_*)$ be the GLS estimates using the covariance matrix $\Omega_u(\theta_*)$, so that $\tilde e_t(\theta_*) = y_t - z_t'\hat\gamma(\theta_*)$. Then it is sufficient to show that $\hat\gamma(1) = \hat\gamma$ and $\hat\gamma(0) = \tilde\gamma$.

We start with the derivation of the GLS estimates. Let $Z^* = C^{-1}(\theta_*)C(1)Z$ and $y^* = C^{-1}(\theta_*)C(1)y$ be the transformed variables as in equation (17) of the main text. They have the following form:

(A2.1) $Z^{*\prime} = \begin{bmatrix} 1 & \theta_* & \theta_*^2 & \cdots & \theta_*^{T-1} \\ 1 & 1+\theta_* & 1+\theta_*+\theta_*^2 & \cdots & 1+\theta_*+\cdots+\theta_*^{T-1} \end{bmatrix}$

(A2.2) $y^{*\prime} = [\,y_1,\ \Delta y_2+\theta_* y_1,\ \Delta y_3+\theta_*\Delta y_2+\theta_*^2 y_1,\ \ldots,\ \Delta y_T+\theta_*\Delta y_{T-1}+\cdots+\theta_*^{T-1}y_1\,] = [\,\theta_A^*(L)\Delta y_1,\ \theta_A^*(L)\Delta y_2,\ \ldots,\ \theta_A^*(L)\Delta y_T\,]$,

where by convention $\Delta y_1 = y_1$. Since GLS estimation from (9') is identical to OLS estimation from (17), the GLS estimates $\hat\gamma(\theta_*)$ are defined as in (A2.3) below. From (A2.1) and (A2.2), $Z^* = Z$ and $y^* = y$ for $\theta_* = 1$, so that $\hat\gamma(1) = \hat\gamma$; and for $\theta_* = 0$, $Z^* = \Delta Z$ and $y^* = \Delta y$, so that $\hat\gamma(0) = \tilde\gamma$. For any $\theta_* \in (0,1)$,

(A2.3) $\hat\gamma(\theta_*) = [Z'\Omega_u^{-1}(\theta_*)Z]^{-1}Z'\Omega_u^{-1}(\theta_*)y = [Z^{*\prime}Z^*]^{-1}Z^{*\prime}y^*$,

where the (symmetric) matrix is

$Z^{*\prime}Z^* = \begin{bmatrix} \sum_{t=1}^T \theta_*^{2(t-1)} & \sum_{t=1}^T \theta_*^{t-1}(1-\theta_*^t)/(1-\theta_*) \\ \cdot & \sum_{t=1}^T [(1-\theta_*^t)/(1-\theta_*)]^2 \end{bmatrix} = \begin{bmatrix} \dfrac{1-\theta_*^{2T}}{1-\theta_*^2} & \dfrac{(1-\theta_*^T)(1-\theta_*^{T+1})}{(1-\theta_*)(1-\theta_*^2)} \\ \cdot & \dfrac{T(1-\theta_*^2) - 2\theta_*(1-\theta_*^T)(1+\theta_*) + \theta_*^2(1-\theta_*^{2T})}{(1-\theta_*)^2(1-\theta_*^2)} \end{bmatrix}$,

and the vector is

$Z^{*\prime}y^* = \begin{bmatrix} \sum_{t=1}^T \theta_*^{t-1}\,\theta_A^*(L)\Delta y_t \\ \sum_{t=1}^T [(1-\theta_*^t)/(1-\theta_*)]\,\theta_A^*(L)\Delta y_t \end{bmatrix} = \begin{bmatrix} (1+\theta_*)^{-1}\theta_B^*(L)y_T + (1+\theta_*)^{-1}\theta_*^T\,\theta_A^*(L)y_T \\ -(1-\theta_*^2)^{-1}\theta_*\,\theta_B^*(L)y_T + (1-\theta_*)^{-1}[1 - \theta_*^{T+1}/(1+\theta_*)]\,\theta_A^*(L)y_T \end{bmatrix}$,

with $\theta_A^*(L)$ and $\theta_B^*(L)$ applied to $y$ defined as in (20) and (21). Similarly,

(A2.4) $\hat\gamma(\theta_*) - \gamma = [Z'\Omega_u^{-1}(\theta_*)Z]^{-1}Z'\Omega_u^{-1}(\theta_*)x = [Z^{*\prime}Z^*]^{-1}Z^{*\prime}x^*$,

which has the same form as (A2.3), with the matrix $[Z^{*\prime}Z^*]$ unchanged and with $X$ replacing $y$ throughout the vector.
Simplifying the elements of (A2.4) using $\lim_{T\to\infty}\theta_*^T = 0$ for any given $\theta_* \in (0,1)$, we have

(A2.5) $[\hat\gamma(\theta_*) - \gamma] = \begin{bmatrix} \dfrac{1}{1-\theta_*^2} & \dfrac{1}{(1-\theta_*)(1-\theta_*^2)} \\ \dfrac{1}{(1-\theta_*)(1-\theta_*^2)} & \dfrac{T(1-\theta_*^2) - 2\theta_* - \theta_*^2}{(1-\theta_*)^2(1-\theta_*^2)} \end{bmatrix}^{-1} \begin{bmatrix} (1+\theta_*)^{-1}\theta_B^*(L)X_T \\ -(1-\theta_*^2)^{-1}\theta_*\,\theta_B^*(L)X_T + (1-\theta_*)^{-1}\theta_A^*(L)X_T \end{bmatrix} + o_p(1)$.

Note that this expression does not apply for $\theta_* = 1$.

For our asymptotic analysis we have to consider the properties of the GLS residuals under our alternative assumptions, because they show quite different behavior under these alternative assumptions. Under Assumption A, $X_T$, $\theta_A^*(L)X_T$, and $\theta_B^*(L)X_T$ are $O_p(1)$, so we have (for t = 1,...,T):

(A2.6) $\tilde e_t(\theta_*) = y_t - z_t'\hat\gamma(\theta_*) = X_t - z_t'[\hat\gamma(\theta_*) - \gamma] = X_t - (1-\theta_*)\theta_B^*(L)X_T - (t/T)(1-\theta_*)[\theta_A^*(L) - \theta_B^*(L)]X_T + o_p(1)$.

When $\theta_* = 0$, we can see that $\tilde e_t(0) = X_t - X_1 - (t/T)(X_T - X_1)$, which is asymptotically equivalent to the BSP residuals. For $\theta_* = 1$, (A2.5) and (A2.6) do not apply, but we have simply

(A2.7) $\tilde e_t = \tilde e_t(1) = $ OLS residuals from (9').

For the construction of the POI test, it is more convenient to use the OLS residuals $\tilde e_t^*(\theta_*)$ from (17), which are related to $\tilde e_t(\theta_*)$ by (18) in the main text. Specifically,

(A2.8) $\tilde e_t^*(\theta_*) = y_t^* - z_t^{*\prime}\hat\gamma(\theta_*) = x_t^* - z_t^{*\prime}[\hat\gamma(\theta_*) - \gamma] = \theta_A^*(L)\Delta X_t - (1-\theta_*)\theta_*^{t-1}\theta_B^*(L)X_T + o_p(1)$, t = 1,...,T.

Note that these residuals $\tilde e_t^*(\theta_*)$ are $O_p(1)$ under the null and under the alternative hypotheses. This is so because the residuals take the form of an exponentially weighted series of overdifferenced processes under the null.

Under Assumption B (unit root), we consider the normalized GLS residual series $T^{-1/2}\tilde e(\theta_*)$. Define D as the matrix

(A2.9) $D = \begin{bmatrix} 1 & 0 \\ 0 & T \end{bmatrix}$, so that $D^{-1/2} = \begin{bmatrix} 1 & 0 \\ 0 & T^{-1/2} \end{bmatrix}$.

Then

(A2.10) $T^{-1/2}\tilde e_t(\theta_*) = T^{-1/2}X_t - T^{-1/2}z_t'D^{-1/2}D^{1/2}[\hat\gamma(\theta_*) - \gamma]$
$= T^{-1/2}X_t - z_t'(T^{-1/2}D^{-1/2})\,[D^{-1/2}Z'\Omega_u^{-1}(\theta_*)ZD^{-1/2}]^{-1}\,D^{-1/2}Z'\Omega_u^{-1}(\theta_*)x$.

Now consider the terms on the right hand side of equation (A2.10). We have

(A2.11) $z_t'(T^{-1/2}D^{-1/2}) = [\,T^{-1/2}\ \ t/T\,]$.

For the term $[D^{-1/2}Z'\Omega_u^{-1}(\theta_*)ZD^{-1/2}]^{-1}$, note that $Z'\Omega_u^{-1}(\theta_*)Z$ is as given in the first matrix on the right hand side of equation (A2.4). For any given $\theta_* \in (0,1)$, pre- and post-multiplying $Z'\Omega_u^{-1}(\theta_*)Z$ by $D^{-1/2}$ and taking probability limits of the elements (using the fact that $\theta_*^T \to 0$) yields

(A2.12) plim $[D^{-1/2}Z'\Omega_u^{-1}(\theta_*)ZD^{-1/2}]^{-1} = \begin{bmatrix} 1-\theta_*^2 & 0 \\ 0 & (1-\theta_*)^2 \end{bmatrix}$.

For the term $D^{-1/2}Z'\Omega_u^{-1}(\theta_*)x$, note that $Z'\Omega_u^{-1}(\theta_*)x$ is the same as the second matrix on the right hand side of equation (A2.4). Premultiplying it by $D^{-1/2}$ and taking probability limits of the elements yields (using Lemmas 1 and 2)

(A2.13) $D^{-1/2}Z'\Omega_u^{-1}(\theta_*)x = \begin{bmatrix} (1+\theta_*)^{-1}\theta_B^*(L)X_T \\ T^{-1/2}(1-\theta_*)^{-1}\theta_A^*(L)X_T \end{bmatrix} + o_p(1) = \begin{bmatrix} (1+\theta_*)^{-1}\theta_B^*(L)X_T \\ T^{-1/2}(1-\theta_*)^{-2}X_T \end{bmatrix} + o_p(1)$.

Note that the second equality follows from equation (A1.4). Now substituting (A2.11), (A2.12) and (A2.13) into (A2.10) and doing some algebra yields

(A2.14) $T^{-1/2}\tilde e_t(\theta_*) = T^{-1/2}X_t - (t/T)\,T^{-1/2}X_T + o_p(1)$, t = 1,...,T.

Note that these are asymptotically equal to the BSP residuals; asymptotically, they do not depend on the value of $\theta_*$.

APPENDIX 3

In this Appendix we derive the asymptotic distribution of the new GLS-based KPSS test under our alternative assumptions. We show that its asymptotic distribution depends on the marginal distribution of $x$, and that the test is not consistent, because it has the same order in probability under the null and alternative hypotheses. We consider the test with $\theta_* \in (0,1)$, because the asymptotics for $\theta_* = 1$ and $\theta_* = 0$ are given by KPSS (1992) and by Schmidt (1992), respectively. For the long run variance estimator, we consider only the case $\ell = 0$. For the case $\ell \neq 0$, the same results as in Schmidt (1992) can be derived without difficulty, just by applying the results of this Appendix and Schmidt (1992).

We first prove Theorem 1 under Assumption A.
From Appendix 2, the GLS residuals are given as

(A3.1) ẽ_t(θ*) = X_t − (1−θ*) θ_B*(L) X_T − (t/T)(1−θ*)[θ_A*(L) − θ_B*(L)] X_T + o_p(1).

Denote the weak limit of X_T as T → ∞ by X∞. Then under Assumption A (stationarity), for any θ* ∈ [0,1),

(A3.2) T^{-1} S̃_{[rT]}(θ*) => −(1−θ*){ r θ_B*(L) X∞ + (r^2/2)[θ_A*(L) − θ_B*(L)] X∞ }.

Proof. T^{-1} S̃_{[rT]}(θ*) = T^{-1} Σ_{t=1}^{[rT]} ẽ_t(θ*)

= T^{-1} Σ_{t=1}^{[rT]} X_t − T^{-1}[rT](1−θ*) θ_B*(L) X_T − T^{-2}(1−θ*)[θ_A*(L) − θ_B*(L)] X_T Σ_{t=1}^{[rT]} t + o_p(1).

The first term converges in probability to zero and the second term converges to r(1−θ*) θ_B*(L) X∞. For the third term, we use the fact that Σ_{j=1}^T j = T(T+1)/2; replacing T with [rT] yields the result.

(A3.3) T^{-3} Σ_{t=1}^T S̃_t(θ*)^2 => ((1−θ*)^2/60){ 8[θ_B*(L) X∞]^2 + 9[θ_B*(L) X∞][θ_A*(L) X∞] + 3[θ_A*(L) X∞]^2 }.

Proof. From (A3.2), T^{-3} Σ_{t=1}^T S̃_t(θ*)^2 = T^{-1} Σ_{t=1}^T [S̃_t(θ*)/T]^2 => (1−θ*)^2 ∫_0^1 { r θ_B*(L) X∞ + (r^2/2)[θ_A*(L) − θ_B*(L)] X∞ }^2 dr. Evaluating the integral yields the result.

(A3.4) σ̂^2(0) => σ_X^2 + ((1−θ*)^2/3){ [θ_B*(L) X∞]^2 + [θ_B*(L) X∞][θ_A*(L) X∞] + [θ_A*(L) X∞]^2 }.

Proof.

(A3.5) σ̂^2(0) = T^{-1} Σ_{t=1}^T ẽ_t(θ*)^2

= T^{-1} Σ_{t=1}^T { X_t − (1−θ*) θ_B*(L) X_T − (t/T)(1−θ*)[θ_A*(L) − θ_B*(L)] X_T }^2

= T^{-1} Σ_{t=1}^T { X_t − (1−θ*) θ_B*(L) X_T }^2 + (1−θ*)^2 { [θ_A*(L) − θ_B*(L)] X_T }^2 T^{-3} Σ_{t=1}^T t^2 − 2(1−θ*){ [θ_A*(L) − θ_B*(L)] X_T } T^{-2} Σ_{t=1}^T t { X_t − (1−θ*) θ_B*(L) X_T }.

Since T^{-1} Σ_{t=1}^T X_t = o_p(1) and θ_B*(L) X_T = O_p(1), the first term on the right hand side equals T^{-1} Σ_{t=1}^T X_t^2 + (1−θ*)^2 [θ_B*(L) X_T]^2 + o_p(1) and converges to σ_X^2 + (1−θ*)^2 [θ_B*(L) X∞]^2. The second term converges to (1−θ*)^2 [θ_A*(L) X∞ − θ_B*(L) X∞]^2 / 3 (using Σ_{t=1}^T t^2 = T(T+1)(2T+1)/6, so that T^{-3} Σ_{t=1}^T t^2 has limit 1/3). In the third term, T^{-2} Σ_{t=1}^T t X_t = o_p(1) and [θ_A*(L) X_T − θ_B*(L) X_T] = O_p(1), so that the third term has the same asymptotic distribution as 2(1−θ*)^2 [θ_A*(L) X_T − θ_B*(L) X_T][θ_B*(L) X_T] T^{-2} Σ_{t=1}^T t, which converges to (1−θ*)^2 [θ_A*(L) X∞ − θ_B*(L) X∞][θ_B*(L) X∞]. Thus, collecting terms yields the result.

(A3.6) T^{-1} η̂(θ*) = T^{-3} Σ_{t=1}^T S̃_t(θ*)^2 / σ̂^2(0)

=> (1−θ*)^2 { 8[θ_B*(L) X∞]^2 + 9[θ_B*(L) X∞][θ_A*(L) X∞] + 3[θ_A*(L) X∞]^2 } / ( 60 σ_X^2 + 20(1−θ*)^2 { [θ_B*(L) X∞]^2 + [θ_B*(L) X∞][θ_A*(L) X∞] + [θ_A*(L) X∞]^2 } )

and η̂(θ*)
= O_p(T).

Proof. Simple substitution of (A3.3) and (A3.4) into the formula for the KPSS statistic yields the result.

Now we derive the asymptotic distribution of the GLS-based KPSS test under Assumption B. Under the nonstationarity assumption the test is regarded as a function of the normalized residuals

(A3.7) T^{-1/2} ẽ_t(θ*) = T^{-1/2} X_t − (t/T) T^{-1/2} X_T + o_p(1), t = 1,...,T.

Under Assumption B,

(A3.8) T^{-1/2} ẽ_{[rT]}(θ*) => σ B(r), where B(r) = W(r) − rW(1) is the Brownian bridge.

Proof. See Schmidt and Phillips (1992, Appendix 3).

Once we have the result (A3.8), exactly the same steps as in Schmidt (1992) apply, so we just state the main results.

(A3.9) T^{-3/2} S̃_{[rT]}(θ*) => σ ∫_0^r B(s) ds.

(A3.10) T^{-4} Σ_{t=1}^T S̃_t(θ*)^2 => σ^2 ∫_0^1 [∫_0^r B(s) ds]^2 dr.

(A3.11) T^{-1} σ̂^2(0) = T^{-2} Σ_{t=1}^T ẽ_t(θ*)^2 => σ^2 ∫_0^1 B(s)^2 ds.

(A3.12) T^{-1} η̂(θ*) = T^{-4} Σ_{t=1}^T S̃_t(θ*)^2 / [T^{-1} σ̂^2(0)] => ∫_0^1 [∫_0^r B(s) ds]^2 dr / ∫_0^1 B(s)^2 ds, and η̂(θ*) = O_p(T).

Therefore, comparing (A3.6) and (A3.12) shows that the η̂(θ*) test is not consistent, because the statistic is O_p(T) under both the null and alternative hypotheses.

APPENDIX 4

In this Appendix we derive the asymptotic distribution of the POI statistic under our alternative assumptions. We start with Assumption A (stationarity). First consider the denominator of the statistic. As we discussed in Appendix 2, for t = 1,...,T, the ẽ_t(1) are identical to the OLS residuals ê_t and Ω_μ(1) becomes the identity matrix, so we have

(A4.1) ẽ(1)'Ω_μ^{-1}(1) ẽ(1) = ê'ê = Σ_{t=1}^T ê_t^2

and

(A4.2) T^{-1} Σ_{t=1}^T ê_t^2 => σ_X^2 = γ_X(0).

Next consider the numerator of the statistic. From (A2.8), the residuals are given as

(A4.3) e*_t(θ*) = θ_A*(L) ΔX_t − (1−θ*) θ*^{t−1} θ_B*(L) X_T + o_p(1).

From (19) in the main text, we have

(A4.4) ẽ(θ*)'Ω_μ^{-1}(θ*) ẽ(θ*) = e*(θ*)'e*(θ*) = Σ_{t=1}^T e*_t(θ*)^2.

(A4.5) T^{-1} Σ_{t=1}^T e*_t(θ*)^2 = T^{-1} Σ_{t=1}^T [θ_A*(L) ΔX_t − (1−θ*) θ*^{t−1} θ_B*(L) X_T]^2

= T^{-1} Σ_{t=1}^T [θ_A*(L) ΔX_t]^2 + (1−θ*)^2 [θ_B*(L) X_T]^2 T^{-1} Σ_{t=1}^T θ*^{2(t−1)} − 2(1−θ*)[θ_B*(L) X_T] T^{-1} Σ_{t=1}^T θ*^{t−1} [θ_A*(L) ΔX_t].
The second term converges to 0, because Σ_{t=1}^T θ*^{2(t−1)} converges to 1/(1−θ*^2) (so that T^{-1} times the sum vanishes) and θ_B*(L) X_T is O_p(1) from Lemmas 1 and 2. After some algebra (using lim_{T→∞} θ*^T = 0) we can show that the third term asymptotically equals 2(1−θ*)(1+θ*)^{-1} T^{-1} [θ_B*(L) X_T]^2, so the third term also converges to 0 under both assumptions. (Recall that θ_B*(L) X_T is O_p(1) under both the stationarity and unit root assumptions.) This implies that for our asymptotic analysis only the first term in (A4.5) matters under both assumptions about the errors.

Now we consider the first term in (A4.5) under Assumption A (stationarity). Since θ_A*(L) ΔX_t = X_t − (1−θ*) Σ_{j=1}^{t−1} θ*^{j−1} X_{t−j}, we have

(A4.6) T^{-1} Σ_{t=1}^T [θ_A*(L) ΔX_t]^2 = T^{-1} Σ_{t=1}^T [X_t − (1−θ*) Σ_{j=1}^{t−1} θ*^{j−1} X_{t−j}]^2

= T^{-1} Σ_{t=1}^T X_t^2 − 2(1−θ*) T^{-1} Σ_{t=1}^T [X_t Σ_{j=1}^{t−1} θ*^{j−1} X_{t−j}] + (1−θ*)^2 T^{-1} Σ_{t=1}^T [Σ_{j=1}^{t−1} θ*^{j−1} X_{t−j}]^2.

Let γ̂_X(j) = T^{-1} Σ_{t=j+1}^T X_t X_{t−j} and γ_X(j) be the jth sample and population autocovariance of X_t, and let ρ̂_X(j) and ρ_X(j) be the jth sample and population autocorrelation coefficient of X_t, respectively. Then after a little algebra, we have

(A4.7) T^{-1} Σ_{t=1}^T X_t^2 = γ̂_X(0),

(A4.8) T^{-1} Σ_{t=1}^T [X_t Σ_{j=1}^{t−1} θ*^{j−1} X_{t−j}] = Σ_{j=1}^{T−1} θ*^{j−1} γ̂_X(j)

and

(A4.9) T^{-1} Σ_{t=1}^T [Σ_{j=1}^{t−1} θ*^{j−1} X_{t−j}]^2 = (1−θ*^2)^{-1} [γ̂_X(0) + 2 Σ_{j=1}^{T−1} θ*^j γ̂_X(j)].

Substituting (A4.7), (A4.8) and (A4.9) into (A4.6) and collecting terms yields

(A4.10) T^{-1} Σ_{t=1}^T [θ_A*(L) ΔX_t]^2 = 2(1+θ*)^{-1} [γ̂_X(0) − (1−θ*) Σ_{j=1}^{T−1} θ*^{j−1} γ̂_X(j)] => 2(1+θ*)^{-1} [γ_X(0) − (1−θ*) Σ_{j=1}^∞ θ*^{j−1} γ_X(j)].

Since plim P_T(θ*) = plim T^{-1} Σ_{t=1}^T e*_t(θ*)^2 / plim T^{-1} Σ_{t=1}^T ê_t^2, from (A4.2) and (A4.10):

(A4.11) P ≡ plim P_T(θ*) = 2(1+θ*)^{-1} [1 − (1−θ*) Σ_{j=1}^∞ θ*^{j−1} ρ_X(j)].

We know that under regularity conditions on the error process X_t, the joint distribution of T^{1/2}[ρ̂_X(i) − ρ_X(i)], 1 ≤ i ≤ p, converges to the p-variate multivariate normal distribution with zero mean vector and covariance matrix W = (w_ij); that is,

(A4.12) T^{1/2}[ρ̂_X(1) − ρ_X(1), ..., ρ̂_X(p) − ρ_X(p)]' => N(0, W).
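The sample moments γ̂_X(j) and ρ̂_X(j) used in (A4.7)–(A4.12) are the usual zero-mean estimators; no mean is subtracted, since the X_t are assumed to have mean zero. A minimal Python sketch (our own illustration; the function names are invented):

```python
import numpy as np

def sample_autocov(x, j):
    """gamma_hat(j) = T^{-1} * sum_{t=j+1}^{T} x_t * x_{t-j},
    the zero-mean sample autocovariance used in (A4.7)-(A4.9)."""
    x = np.asarray(x, dtype=float)
    T = x.shape[0]
    return np.dot(x[j:], x[: T - j]) / T

def sample_autocorr(x, j):
    """rho_hat(j) = gamma_hat(j) / gamma_hat(0)."""
    return sample_autocov(x, j) / sample_autocov(x, 0)
```

Note that the divisor is T rather than T − j, matching the definition in the text.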
(A4.13) w_ij = Σ_{k=1}^∞ {ρ_X(k+i) + ρ_X(k−i) − 2ρ_X(i)ρ_X(k)} × {ρ_X(k+j) + ρ_X(k−j) − 2ρ_X(j)ρ_X(k)}

(Brockwell and Davis (1991), chapter 7). Note that when the X_t are iid, w_ij = 1 for i = j and w_ij = 0 for i ≠ j, because ρ_X(0) = 1 and ρ_X(j) = 0 for j ≥ 1. Directly applying (A4.12) and (A4.13) to (A4.10) and (A4.11) gives the following result:

(A4.14) T^{1/2}[P_T(θ*) − P] => N(0, V),

where V is defined as

(A4.15) V = [2(1−θ*)/(1+θ*)]^2 Σ_{i=1}^∞ Σ_{j=1}^∞ θ*^{i+j−2} w_ij.

Hence P_T(θ*) = P + O_p(T^{-1/2}) under Assumption A.

Next we derive the limiting distribution of the POI statistic under Assumption B (unit root):

(A4.16) T^{-1/2} X_{[rT]} => σ W(r).

We know that the normalized OLS residuals converge to a function of the demeaned and detrended Wiener process W*(r) (KPSS (1992), equation (26)); i.e.,

(A4.17) T^{-1/2} ê_{[rT]} => σ W*(r).

Hence, we have

(A4.18) T^{-2} Σ_{t=1}^T ê_t^2 => σ^2 ∫_0^1 W*(r)^2 dr.

Next consider the numerator. Assumption B implies that ΔX_t is a general stationary process. Let γ̂(j) and γ(j) be its jth sample and population autocovariance, respectively. Only the first term in the expression for the GLS residuals e*_t(θ*) in (A4.3) matters (see the discussion following equation (A4.5)). So we have

(A4.19) T^{-1} Σ_{t=1}^T e*_t(θ*)^2 = T^{-1} Σ_{t=1}^T [θ_A*(L) ΔX_t]^2 + o_p(1) = (1−θ*^2)^{-1} [γ̂(0) + 2 Σ_{j=1}^{T−1} θ*^j γ̂(j)] + o_p(1) => (1−θ*^2)^{-1} [γ(0) + 2 Σ_{j=1}^∞ θ*^j γ(j)].

From (A4.18) and (A4.19),

(A4.20) T P_T(θ*) = [T^{-1} Σ_{t=1}^T e*_t(θ*)^2] / [T^{-2} Σ_{t=1}^T ê_t^2] => [γ(0) + 2 Σ_{j=1}^∞ θ*^j γ(j)] / [σ^2 (1−θ*^2) ∫_0^1 W*(r)^2 dr],

and P_T(θ*) is O_p(T^{-1}) under Assumption B. Hence, comparing (A4.20) with (A4.15) shows that the P_T(θ*) test is consistent.

CHAPTER 4

CONCLUDING REMARKS

In this thesis, we have applied the theory of point optimal testing to the problem of testing whether a time series is trend stationary or whether it contains a unit root. We have considered the point optimal invariant (POI) tests of the unit root hypothesis and of the hypothesis of trend stationarity.
Furthermore, we have stressed the connection of the POI tests to the detrending of the series by generalized least squares (GLS), based on an empirically plausible value of the relevant parameter under the alternative hypothesis. Our most important finding is that, compared to other standard tests, POI tests offer large enough gains in power over a wide enough range of the parameter space to make them potentially attractive.

For the unit root testing problem, our results are fairly complete. The POI test is very similar to a test of Dickey-Fuller type, but based on GLS detrending instead of OLS detrending. The asymptotic properties of these tests are straightforward, and they lead naturally to asymptotically valid corrections for error autocorrelation. The main question yet to be addressed is how well these autocorrelation-corrected tests work in finite samples.

In particular, it is important to observe that, if ρ* is the value of the autoregressive root assumed in the construction of the POI test (and used in GLS detrending), we have considered the asymptotic properties of our tests as T → ∞ with ρ* fixed. Elliott, Rothenberg and Stock (1992) have considered the asymptotic properties of the same statistics as T → ∞, assuming that ρ* = 1 − c*/T with c* fixed, so that ρ* → 1 as T → ∞. This results in very different asymptotics than ours, and it also results in different forms of corrections for error autocorrelation than we have. Which form of asymptotic analysis is more useful is basically a matter of which leads to autocorrelation-corrected statistics with better small sample properties; that is, with smaller size distortions and higher size-adjusted power. This is an important issue yet to be settled.

For the stationarity testing problem, our results are less complete.
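As a computational aside, the GLS detrending discussed above amounts to quasi-differencing both the series and the trend regressors at the assumed root ρ* and applying OLS to the transformed data. The following Python sketch is our own illustration, not code from this thesis; in particular, keeping the first observation in levels is only one common convention for handling the initial condition, and the function name is invented:

```python
import numpy as np

def gls_detrend(y, rho_star):
    """Detrend y on (1, t) by GLS at an assumed AR root rho_star:
    quasi-difference y_t - rho* * y_{t-1} and the regressors,
    estimate the trend coefficients by OLS on the transformed data,
    and return y minus the fitted (levels) trend."""
    y = np.asarray(y, dtype=float)
    T = y.shape[0]
    Z = np.column_stack([np.ones(T), np.arange(1.0, T + 1)])   # (1, t)
    # quasi-differenced data; first observation kept in levels
    yq = np.concatenate(([y[0]], y[1:] - rho_star * y[:-1]))
    Zq = np.vstack((Z[:1], Z[1:] - rho_star * Z[:-1]))
    beta, *_ = np.linalg.lstsq(Zq, yq, rcond=None)
    return y - Z @ beta    # GLS residuals (detrended series)
```

Under the fixed-parameter asymptotics of this thesis, ρ* is held constant as T grows; Elliott, Rothenberg and Stock (1992) instead set ρ* = 1 − c*/T.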
The POI test does offer a substantial gain in power relative to the KPSS test, which is an important and optimistic result. However, while the POI test depends on GLS residuals (that is, on the series detrended by GLS) and is consistent, the KPSS statistic based on GLS residuals does not yield a consistent test. More thought is needed to understand the reason for this result, and to see what forms of statistics based on GLS residuals lead to consistent tests. Furthermore, although we have derived the asymptotic distribution of the POI statistic under general forms of error autocorrelation, the asymptotic distribution depends on the covariance structure of the errors in a complicated way that does not lead to simple asymptotically valid corrections for autocorrelation. The practical usefulness of the POI test is small unless a version that is asymptotically valid under autocorrelation is available. This is another important topic for further research.

LIST OF REFERENCES

Bhargava, A. (1986), "On the Theory of Testing for Unit Roots in Observed Time Series," Review of Economic Studies, 53, 369-384.

Brockwell, P.J. and R.A. Davis (1991), Time Series: Theory and Methods, New York: Springer-Verlag.

Campbell, J.Y. and N.G. Mankiw (1987), "Are Output Fluctuations Transitory?," Quarterly Journal of Economics, 102, 857-880.

Campbell, J.Y. and N.G. Mankiw (1989), "International Evidence on the Persistence of Economic Fluctuations," Journal of Monetary Economics, 23, 319-333.

Dickey, D.A. (1984), "Powers of Unit Root Tests," Proceedings of the American Statistical Association, Business and Economic Statistics Section, 489-493.

Dickey, D.A. and W.A. Fuller (1979), "Distribution of the Estimators for Autoregressive Time Series with a Unit Root," Journal of the American Statistical Association, 74, 427-431.

Dickey, D.A. and W.A. Fuller (1981), "Likelihood Ratio Statistics for Autoregressive Time Series with a Unit Root," Econometrica, 49, 1057-1072.

Dufour, J.-M. and M.L.
King (1991), "Optimal Invariant Tests for the Autocorrelation Coefficient in Linear Regressions with Stationary and Nonstationary AR(1) Errors," Journal of Econometrics, 47, 115-143.

Elliott, G., T.J. Rothenberg and J.H. Stock (1992), "Efficient Tests for an Autoregressive Unit Root," unpublished paper, Harvard University.

Fuller, W.A. (1976), Introduction to Statistical Time Series, New York: Wiley.

Granger, C.W.J. and P. Newbold (1974), "Spurious Regressions in Econometrics," Journal of Econometrics, 2, 111-120.

King, M.L. (1980), "Robust Tests for Spherical Symmetry and Their Application to Least Squares Regression," Annals of Statistics, 8, 1265-1271.

King, M.L. (1988), "Towards a Theory of Point Optimal Testing," Econometric Reviews, 6, 169-218.

King, M.L. and G.H. Hillier (1985), "Locally Best Invariant Tests of the Error Covariance Matrix of the Linear Regression Model," Journal of the Royal Statistical Society, Series B, 47, 98-102.

Kwiatkowski, D., P.C.B. Phillips, P. Schmidt and Y. Shin (1992), "Testing the Null Hypothesis of Stationarity against the Alternative of a Unit Root: How Sure Are We That Economic Time Series Have a Unit Root?," Journal of Econometrics, 54, 159-178.

Leybourne, S.J. and B.P.M. McCabe (1992), "An Alternative Test for a Unit Root," unpublished paper.

McCallum, B. (1992), "Unit Roots in Macroeconomic Time Series: A Critical Overview," unpublished paper, Carnegie Mellon University.

Muth, J.F. (1960), "Optimal Properties of Exponentially Weighted Forecasts," Journal of the American Statistical Association, 55, 299-306.

Nelson, C.R. and C.I. Plosser (1982), "Trends and Random Walks in Macroeconomic Time Series: Some Evidence and Implications," Journal of Monetary Economics, 10, 139-162.

Nyblom, J. (1986), "Testing for Deterministic Linear Trend in Time Series," Journal of the American Statistical Association, 81, 545-549.

Park, J.Y. and P.C.B. Phillips (1988), "Statistical Inference in Regressions with Integrated Processes: Part 1," Econometric Theory, 4, 468-497.

Phillips, P.C.B.
(1986), "Understanding Spurious Regressions in Econometrics," Journal of Econometrics, 33, 311-340.

Phillips, P.C.B. (1987), "Time Series Regression with a Unit Root," Econometrica, 55, 277-301.

Phillips, P.C.B. and S. Ouliaris (1990), "Asymptotic Properties of Residual Based Tests for Cointegration," Econometrica, 58, 165-193.

Phillips, P.C.B. and P. Perron (1988), "Testing for a Unit Root in Time Series Regression," Biometrika, 75, 335-346.

Press, W.H., B.P. Flannery, S.A. Teukolsky and W.T. Vetterling (1986), Numerical Recipes: The Art of Scientific Computing, Cambridge: Cambridge University Press.

Said, S.E. and D.A. Dickey (1984), "Testing for Unit Roots in Autoregressive-Moving Average Models of Unknown Order," Biometrika, 71, 599-607.

Saikkonen, P. and R. Luukkonen (1992a), "Testing for a Moving Average Unit Root in Autoregressive Integrated Moving Average Models," manuscript, University of Helsinki.

Saikkonen, P. and R. Luukkonen (1992b), "Point Optimal Tests for the Moving Average Unit Root Hypothesis," manuscript, University of Helsinki.

Sargan, J.D. and A. Bhargava (1983), "Testing Residuals from Least Squares Regression for Being Generated by the Gaussian Random Walk," Econometrica, 51, 153-174.

Schmidt, P. and J. Lee (1991), "A Modification of the Schmidt-Phillips Unit Root Test," Economics Letters, 36, 285-289.

Schmidt, P. and P.C.B. Phillips (1992), "LM Tests for a Unit Root in the Presence of Deterministic Trends," Oxford Bulletin of Economics and Statistics, 54, 257-287.

Schmidt, P. (1992), "Some Results on Testing for Stationarity Using Data Detrended in Differences," Economics Letters, forthcoming.

Shephard, N.G. and A.C. Harvey (1990), "On the Probability of Estimating a Deterministic Component in the Local Level Model," Journal of Time Series Analysis, 11, 339-347.

Shively, T.S. (1988), "An Exact Test for a Stochastic Coefficient in a Time Series Regression Model," Journal of Time Series Analysis, 9, 81-88.

Tanaka, K.
(1990), "Testing for a Moving Average Unit Root," Econometric Theory, 6, 433-444.

Tanaka, K. and S.E. Satchell (1989), "Asymptotic Properties of the Maximum Likelihood and Nonlinear Least Squares Estimators for Noninvertible Moving Average Models," Econometric Theory, 5, 333-353.

White, H. (1984), Asymptotic Theory for Econometricians, New York: Academic Press.