5....“ «wwwlhah... a. .. . ‘ . . . . 4a xv . .. ,. A . . . . . . . 9|: ..‘...nb......ar,.ufl, ‘ . . . .. _ . . . ‘ , : ‘ Lu? 3...? Ruhr. . . . , . . .. . . { I 0- {7.4:}. fiF1...’:t -.v. we; a . . r . . V . A ‘ . . . . . . . . , . Olin V .‘o .v . zwnduél. .2: .s .T . . . , . .. ‘ . .. .. . ‘ , . V 3:11.4vlufikl . . . ., . . . . . , . . .. . .. 95.. €3.32... . . (kin. .. £5.22. . . . . . . ‘ . .. . . . . .3514... .5. a . . Liv a: . cl 1..., V . . . . . v0.1.9. OfV'OII . _ : 9.va ..?lc.;..i.. . ., A .. . f..- z; . ’DII-orfi'lftrv .L’l“! . . . u n u . . ¢ bpf.h\l\(“f: . , . ‘ , ‘ ‘ , .lt...’..-.:I . ‘ . . . . . . . . . A . . u 1 ,3 E . , ‘ ‘ . , Ewanfarth . . . . . ,. . . . 2.31;. .H-ufl.’ 3.. . . ‘ . u..l- l....h'uoc»_ P. . r ‘ u t“, o. . m......§. adv-315'?» 53.}: 0-3.1. . .afvffiu!ule.h“€£.g.;cbb I A043. .113" g gulf-«V: ... 5.7!... . ‘ .. u.)9vfl....!!!..l. 2-. PE; 9 $10 JFK.» zwnt‘l’ . .0" "I. .|5..I. vlul .I. vliollrlvnfiu Tryfl» \K 60' I t a L a n: . e. . {ll rlf¢ioivibuf .:.'v|7nc.L. If . , z oil-lull... n r n v“? t. 2" y . 3 0‘...'Ob”t-.A . .. , gall flunk-.rp‘lF'.‘ I!!! v If]... . . .. . 29.5. 3“!!!“ ~ III. .0 I. . 1“. .zhw :1 dz. I? n In!“ « . a . u 0: . . III. 35”.“? 1.1... .. Ll..- . 1 .. .0. Qufir OI. ulu 2. rt ‘96. i vyrfiifi I DIR! i...- . A§p|ft (£15 a: 1.3:... t .ic.l.|1¥.....¢(lrrrl§0!k E3f1§l>riinionl ’2'!!! WI; '2. iii-33-v. .I. . b .vvorslvll.v.!,gfl.nwu r . It... I. .. .lrllr‘ III! p ctr-Iii? ‘ .. 5‘15: 2. 3 $1. gablrlu IYQ‘I: ., Q‘hb’u. I! A l. . 7‘ . [FL .1 :22}. - it e ).1Ilo':vfiali\lnb.lv.lfc Ital! A ...».....9! 5.21:1.3zhi 7‘:vl.-.'l‘l.t01 .1”: Ii: - I: i'iggtth‘c o Eur... .91...» .9 (a. 3 z : e .1... . . 5-: 4293.6... .. :5:.!,...:.J «FEE ‘ {a 7 s‘ '5‘ g .D 557.3"..63 9.8L vent-Jrcvlt I #1er . . .t, . llfihole.‘ .ID-ul so .er. . u. Era-CID! a!) 1 .3 i|1¥|¢1 ‘ .rli 8 ‘Y r. (v I l ‘ 5 . ,3 . 5'. ' ) r! In. 0 . It s? ‘1!!! ‘1'. t3 . {5'- "Ifikl! '.b{l !:x rtxorf- 11 ‘5' . . 41:6 ‘.o|lt..l. . L! it’lv’é.‘5fibiiz -.'ii .u .11..!...I¢'vn In \-I‘ A v v il. . li-Ilo. . .t‘" » ' lllllHHllHillllllllllllllltlllllmliIIMIHHI ' é“ ‘1 '1 U “0 ““9 3 1293 005920 _ « i», 5%.“! MERARY Michigan State L University This is to certify that the dissertation entitled TESTS OF TREND VERSUS RANDOM WALK IN MACROECONOMIC TIME SERIES presented by Denis Eugene Kwiatkowski has been accepted towards fulfillment of the requirements for Pb . D . degree in Economics Major professor Date 7ULY 311‘,qu .MSU i: an Affirmative Action/Equal Opportunity Ithl'tution 0» 12771 PLACE IN RETURN BOX to remove this checkout from your record. TO AVOID FINES mum on or Mon date duo. DATE DUE DATE DUE DATE DUE 'r‘n ' L‘: ’ s fit \‘flA’ 0’ {>5i‘r’ =# '29:: t. , PCB 0 5 1997 2 4 ‘3‘; JAN 1 2 2009 ‘ L*_JL_ JJ______ MSU Is An Affirmdivo AetionlEqual Opportunity Institution TESTS OF TREND VERSUS RANDOM WALK IN MACROECONOMIC TIME SERIES BY Denis Eugene Kwiatkowski A DISSERTATION Submitted to Michigan State University in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY Department of Economics 1990 ABSTRACT TESTS OF TREND VERSUS RANDOM WALK IN MACROECONOMIC TIME SERIES BY Denis Eugene Kwiatkowski This dissertation investigates unit-root tests in macroeconomic time series. If a time series is regressed on itself, lagged one period, a constant and a time trend, it is said to have a unit root if the coefficient on the lagged variable is equal to one. This is of interest because a shock to the series will persist over time, instead of dissipating. The first chapter is a general introduction to the subject. The second chapter provides extensive tabulations of two statistics proposed by Dickey and Fuller to test the null hypothesis of a unit root when a time trend is present. The critical values for these statistics depend on sample size and the standardized coefficient of time trend. These tabulations differ from those of Dickey and Fuller because they assumed the standardized coefficient of time trend was equal to zero in the data-generating process. The third chapter uses these corrected critical values to test the data set of Nelson and Plosser for unit roots. When an unrestricted estimate of the standardized coefficient of time trend is used, the Dickey-Fuller statistics reject the null hypothesis of a unit root more often than when restricted or unbiased estimates of the coefficient are used. Unit-root tests proposed by Ouliaris, Park and Phillips and by Schmidt and Phillips fail to reject the unit-root hypothesis for most series. The fourth chapter derives two new test statistics and their asymptotic distributions. These take "level stationarity" or "trend stationarity" as their null hypotheses. For almost all of the Nelson-Plosser data set level stationarity can be rejected, but for many series trend stationarity cannot. This suggests that for many series the existence of a unit root is in doubt. A final chapter summarizes findings and suggests some avenues for future research. Dla Danusi -- Nareszcie moZemy Zyé iv ACKNOWLEDGMENTS While it is a commonplace that the writer of any dissertation owes debts both too numerous and too great to be ever repaid, it is also true. Accordingly, I would like to take this opportunity at least to acknowledge the debts that I have incurred, realizing that this in no way constitutes a repayment. My greatest debt is owed to my chairman, Peter J. Schmidt. His wonderfully clear lectures in econometrics first helped arouse my interest in the subject. And it has been his patience, his availability and his encouragement to get the thing over with that has allowed this task to come to fruition. While that debt I owe him cannot be repaid, I can try to follow his good example in dealing with my students. I would also like to thank the other members of my committee, Richard T. Baillie, Robert H. Rasche and Christine Amsler, for their suggestions. I would also like to acknowledge the support of my wife, Danuta, without whose love and encouragement this dissertation would not have been completed. She bore much of the burden of the sacrifices we had to make, when trips were not taken, concerts were not attended and holidays cut short so that I could continue work on the dissertation. V I know how difficult it was for her, especially in the final months of work on the dissertation. While completing the dissertation I was a member of the faculty of the Department of Economics at Central Michigan University, and I would like to acknowledge the generous support I received in being able to use the department's computer facilities. I would like to thank department chairman Richard Clemmer, and department members Najla Bathish, Jeffrey Barbour, Gregory Falls, James Richard Hill and Paul Natke in particular, and the members of the department generally, for making CMU a good place in which to teach and complete my research. I would also like to thank Barbara Sharp and Mary Ellen Ruark of the department's office staff for prompt and accurate typing of some parts of the dissertation. Finally, I would also like to recognize the support of our families. Thanks are due to Danuta's parents, Stanislaw and Aniela Motylinscy, for helping provide me with a quiet place to work while we visited last summer. Thanks are also due to my parents, Eugene and Jane, who always wanted us to do our best -- although I don't think they foresaw this outcome. At last, Dad, it's OK to call me "Professor." vi TABLE OF CONTENTS LIST OF TABLES Chapter 1. UNIT ROOTS: THEORY, TESTS AND RESULTS 1. 5. Introduction The Theory of Unit Roots Detecting Unit Roots Unit-Root Tests in Practice Conclusion 2. DICKEY-FULLER TESTS WITH TREND 1. 2. 3. 4. Introduction Tabulation of Critical Values for the t- Statistic Tabulation of Critical Values of T(§ - 1) Conclusion 3. TESTING THE NELSON-PLOSSER DATA SET 1. 2. 3. 4. Endnotes Introduction Testing for Unit Roots with Polynomial Time Trends A New Test for a Unit Root in the Presence of a Time Trend Conclusions vii ix 14 26 34 36 40 52 62 65 67 95 109 111 Chapter 4. A TEST OF THE NULL HYPOTHESIS OF STATIONARITY AGAINST A UNIT-ROOT ALTERNATIVE 1. Appendix A. Introduction The LM Statistic for the Stationarity Hypothesis Asymptotic Theory Level-Stationary Hypothesis Trend-Stationary Hypothesis Application to the Nelson-Plosser Data Conclusion Derivation of the LM Statistic 5. SUMMARY AND SOME SUGGESTIONS FOR FUTURE RESEARCH 1. 2. Summary Some Suggestions for Future Research LIST OF REFERENCES viii 113 117 119 121 123 124 129 131 133 140 142 Table 1. 8. 9. 10. 11. 12. 13. 14. 15. 16. 17. 18. 19. 20. One percent critical values for the t-statistic 2.5 percent critical values for the t-statistic Five percent critical values for the t-statistic 10 percent critical values for the t-statistic 90 percent critical values for the t-statistic 95 percent critical values for the t-statistic 97.5 percent critical values for the t-statistic 99 percent critical values for the t-statistic One percent critical values for T(§ - 1) 2.5 percent critical values for T(§ - 1) Five percent critical values for T(§ - 1) 10 percent critical values for T(§ - 1) 90 percent critical values for T(§ - 1) 95 percent critical values for T(§ - 1) 97.5 percent critical values for T(§ - 1) 99 percent critical values for T(§ - 1) Critical values for the Kp(§) and Sp(§) statistics Unit-root tests with linear time trend Critical values for the 3, and ;, statistics Unit-root tests with a second-order polynomial in time LIST OF TABLES ix 41 42 44 45 46 47 48 53 54 56 57 58 82 87 91 Table 21. 22. 23. 24. 25. 26. 27. 28. Unit-root tests with a third-order polynomial in time Unit-root tests with a fourth-order polynomial in time Selected critical values for the p and 1 statistics Unit root tests using the p and 1 statistics Unit root tests with corrected p and 1 statistics Upper tail critical values for a” and a, Stationarity tests applied to the Nelson-Plosser data (I) Stationarity tests applied to the Nelson-Plosser data (II) 93 96 101 103 106 123 126 127 CHAPTER 1 UNIT ROOTS: THEORY, TESTS AND RESULTS This chapter serves as a general introduction to unit roots in econometrics and as a survey of some of the literature on unit root tests. 1. INTRODUCTION There has been considerable interest recently in econometrics in a statistical property of economic time series which is called a "unit root." To try to briefly convey what a unit root is, consider the case of a time series regressed on a series made up of its own values, but lagged one period. If the coefficient on the lagged value of the time series is equal to one, then the series is said to have a unit root. While testing for the presence of unit roots presents a number of problems that are of interest to econometricians, perhaps the greater general interest in the topic stems from the light that might be shed on "real business cycle" theories of the macroeconomy by the presence or absence of unit roots in macroeconomic time series. Advocates of real business cycle theories hold that, because economic downturns or expansions are due to fundamental changes in technology, the labor force, or other 1 2 factors, these fluctuations in the economy will persist over time. In contrast, more traditional theories of the business cycle emphasize the role of monetary policy in creating recessions and expansions. But these monetary factors are thought to be transitory. However, these differences over the persistence of changes in the economy are mirrored in the statistical properties of time series that either have or do not have unit roots. It is the case that a time series with a unit root will show "persistence," that is, a one-time change in the level of the series will cause it to evolve from that level, while a series with root less than one will evolve with no particular tendency with regarding a change in level. Because of this feature, testing for unit roots has been seen as a means to help determine which of these competing theories better describes the economy. See Plosser (1989) for a recent discussion of real business cycle theory and its implications. This dissertation is concerned with various aspects of testing for unit roots, and it is hoped that it makes a contribution to the correct use of these tests. The plan of the dissertation is as follows: the rest of this chapter discusses the theory of unit roots, some of the tests that have been proposed for detecting unit roots and some of the findings of previous investigators. Chapter 2 discusses the statistics proposed by Dickey and Fuller (1976, 1979) and offers corrected critical values for two of their statistics, 3, and $,. These statistics are based on the 3 model yt = a + pyf_1-+ 6t + at, and we consider the distribution of these statistics when 6 is not equal to zero. The importance of using Dickey-Fuller tests in studying time series has been emphasized by Nelson and Kang (1984) and more recently by Stock and Watson (1988). Chapter 3 is an empirical investigation of a number of tests for unit roots, using the data set first compiled by Nelson and Plosser (1982). Also considered in that chapter are higher-order time trends. Chapter 4 proposes and investigates two new tests, not for unit roots per se, but for testing whether or not a time series is stationary. This is also an implication of the presence or absence of unit roots. In that chapter we assume that a time series can be decomposed into a deterministic trend, a random walk and a stationary error. The model then is yt = it + rt + at, with rt = rf_1 + ut. With the ut distributed as iid (0, ouz), the stationarity hypothesis is that ch? = 0. Chapter 5 presents conclusions and suggests some directions for future research. 2. THE THEORY OF UNIT ROOTS In this section we consider how unit roots arise, their meaning and the problems unit roots cause in estimation and in econometric testing. The following section will review and discuss some of the tests that have been proposed for detecting unit roots. To begin our discussion, let us consider the simplest 4 case under which unit roots may arise, the case when a time series is merely regressed on its own past values. This sort of regression - often elaborated — has been a frequent feature of recent empirical work exploring the issue of the presence of unit roots, because the original question motivating much of this work was the econometric nature of the time series themselves. The presence of unit roots may also indicate what sort of statistical corrections might have to be applied to the time series before using them in more usual regressions. Suppose that we have the following regression: Yt=3Yt-1+€tr t=1l “’IT (1) with initial value Y0 given and error 6t assumed to be independently and identically distributed, without specifying the distribution. We can solve this by making a series of substitutions: Y1 = 5Y0 + 51 then y2 = fiyl + £2 and y3 = fiyz + 63. But then substituting, we note that Y2 = mm + e1\,+ .2, = fizyo + ael + e2 ,1 and 5 3 2 Y3 = 3Y2 + 63 = 5(32Yo + fi€1 + 52) = 5 Yo + 5 £1 + fi€2 + 63, Continuing in this fashion until T, the terminal period of the data set, gives: T-l - yT = fiTyo + (,6 £1 + if 2e2 + + fieT_1 + at). (2) Now consider two cases: first, suppose that 3 = 1, or that the time series has a unit root. If B = 1, then at each realization of the time series the initial value yo has the same weight, or the same importance, regardless of how "far away" in time the current realization of the time series is from the origin of the time series. Also, the weights on each of the errors are not only equal, but they also do not decline in value as the time series evolves; the initial error is as important to the current realized value of the time series as is the current error. Consider the case of lfi i < 1 now. In this situation it is clear that as time passes, the influence of the initial value yo on the present realization of the time series will lessen continually. In other words the influence of Y0 on y2, say, will be much greater than the influence of Y0 on Y20- But also a similar situation applies to the errors. As time passes, the errors most distant in time have less effect on the current value of the time series, while the present error has the most effect. The implications of a unit root can already be seen, even in this simple model. A unit root, by giving equal 6 values to the weights on the errors and on the initial value of the series, implies "persistence," that is, that changes in the series will not vanish quickly, but instead will remain and be propagated over time. A root less than one in absolute value, on the other hand, implies that innovations in the series will not have lasting effects. As time passes, their influence on the future evolution of the series becomes less and less. Note that the situation we used to illustrate some of the implications of having a unit root present in a time series was a very simple autoregressive process. The terminology "unit root" stems from these sorts of processes. Two other terms that are frequently encountered in the literature on unit-root tests are "stationarity" and "integration." Both are used to describe particular types of error structures. It may be advantageous at this point to briefly define these terms. "Stationarity" is divided into two main types: strict stationarity and weak stationarity. "Strict stationarity" exists if the joint probability distribution of the random variables X1, ... , XI is the same as the joint probability distribution of X1”, X2”, .. . , XTH, for all t = 1, ... , T and 1. So shifting the probability distribution in time does not affect the distribution: its mean, variance and any higher-order moments that may exist are all identical, regardless its position in time (Johnston 1984, p. 373; Priestley 1987, p. 104). 7 This requirement is rather severe. There are weaker forms of stationarity that approximate strong-sense stationarity. We will look at one specific type of weak stationarity, second-order (covariance) stationarity. For a given time series, Xt, t = 1, ... , T, the series is covariance stationary if: E(Xt) = 0 for all t var(Xt) = 02 for all t (3) cov(xt,xt_s) = y for all t. 8 Thus the mean and variance of the time series are constant, finite and independent of time. The covariances, while they depend on s, are the same for all t and are also finite. More generally, a time series is said to be stationary up to order m, say, if for the time series Xt, all the joint moments up to order m of X1, ... , XT exist for any t = 1, ... , T and r and equal the corresponding joint moments up to order m of X1”, X2", .. . , XTH. Note that this doesn't require that the probability distributions be the same, but only that the moments up to a certain order be equal and that the joint moments exist (Johnston 1984, p. 373; Priestley 1987, p. 105). Series that are not stationary are called "nonstationary." Another term that is more and more frequently encountered in the unit-root literature is "integration." Integration also exists in varying orders, but generally 8 refers to whether or not a time series can be made stationary by differencing it a sufficient number of times. One familiar definition of integration is the following: a series with no deterministic components and which has a stationary, invertible autoregressive-moving average (ARMA) representation after differencing d times is said to be "integrated of order d." This would be designated by writing that the series X: is I(d) (Engle and Granger 1987, p. 252). This, of course, provides another, complementary, way of characterizing a series. A time series that is I(O) is a stationary series, since the order of differencing required is zero. And as Engle and Granger (1987) point out, a time series that is 1(0) and has zero mean has a finite variance and innovations have only temporary effects on the future evolution of the series. They also note that in the case of an I(l) series with initial value equal to zero, the variance now goes to infinity as time goes to infinity, and note that an innovation now has a permanent affect on the series's evolution, because each xt is the sum of all previous th. That said, let us turn to the source of the term "unit root." Suppose that we have the following linear homogeneous difference equation: Yt ' an-1 = 0. (4) This can be rewritten as Yt = aYt-l and, by repeated substitution, can be seen to have the solution yt = «3%. (5) where yo is some initial condition. Given yo, it is clear that the value of "a" will guide the evolution of this series. If la | < 1, this series will tend to zero; if a = 1, this series will be constant at yo; if a = -1, this series will alternate in value, between Y0 and -yo: and if la I > 1, the series will diverge away from yo, to either -w or +m. The value "a" is a root of this equation, and note the similarity between equation (4) and the first-order autoregressive model: In higher-order cases, the conditions on these roots become more complicated. Suppose we look at the second-order autoregressive process Xt = ¢1Xt-1'+ ¢2xt-2 + 5t° In lag-operator notation, where the lag operator "L" just means that th = xt,1, this can be rewritten as: X - LX + 13X + t ‘ ¢1 t ¢2 t 6t or 10 2 _ Xt ‘ ¢1th ' ¢2L Xt - Et 2 Designating (l - ¢1L - ¢2L2) by ¢(L), the last line above is just O(L)Xt = 6t. Now 6(L) can be factored as 9(L) = (1 - clL)(l - ch), ‘with (Cl).1 and (c2)'1{as the roots of the polynomial. Now' from the above, -1 Xt = §(L) 6t = [(1 - clL)(l - cZL)]'1et. If we take d = Cl/(Cl - c2), then the rational expression above can be written as: x.- = (1 - clm'lde. + (1 - <=2L>'1(1 ' an. = d(€t + Cl‘t-1 + clzet,2 + ) + (1 " ‘1) (6t + C2'5t:-1 + 2 C2 6t-2+ooo )0 So Xt has constant, finite variance only if |c1| and |c2| are both less than one. Equivalently, in terms of the original parameters, we must have that ¢2 + ¢1 < 1 (7) ¢2 ‘ ¢1 < 1 (Johnston 1984, p. 374). 11 More generally, if X has the AR(m) representation 6(L)Xt = et, (8) we can factor 6(L) as 0(L) = (1 - clL)(1 - c2L) ... (1 - cmL), where the ch'are the roots of §(c) = 0. Then X has a unit root if one of the c's (say c1) equals unity, in which case (1 - L)xt = [(1 - 02L) (1 - can'let and so Xt = (1)X84.+'V£v where the stationary error vts vt = et/[(1 - c2L) ... (1 - cmL)]. (9) Thus, differencing X once makes it stationary. Let us now turn to some of the reasons why it is important to be able to detect unit roots in time series. These reasons have to do with the effects of a unit root on the estimation and statistical testing of time series. It should be pointed out that most of the very recent work on unit roots has not centered on the implications for estimation and testing, but instead solely on the detection aspect. Most empirical work assumes that the errors are stationary. However, when a series is nonstationary, this implicitly violates some of the "usual" assumptions used in empirical work and so must be dealt with. One possible method of dealing with the nonstationarity is to difference 12 the series, usually once. This is part of the idea behind designating certain series as integrated, because differencing these types of series yields stationary series. An alternative is to detrend the series, often by including time as a regressor. This is perhaps suggested by the fact that nonstationary series often look as though they contained a time trend. What are the consequences of inappropriately detrending, of inappropriately differencing, or of treating a nonstationary series as if it were stationary? Granger and Newbold (1974) initially suggested taking first differences of series that appeared highly autocorrelated. They were troubled by the appearance of high R25 and low values of the Durbin-Watson statistic simultaneously in much empirical work. The consequences of autocorrelation, they noted, can be serious: inefficient estimates of regression coefficients, suboptimal forecasts if based on the regression equation and invalid significance tests on the regression coefficients. In a simulation experiment, they created regressions between independent random walks and variables that should have had little or no explanatory power, yet found "evidence" of relationships indicated by R2. They called these "spurious regressions," and warned that the time series properties of data could not be ignored. Phillips (1986) explained why these situations exist. He noted that in the case of regressions of 13 independent random walks, the usual t-statistic significance tests do not possess limiting distributions, but instead diverge as sample size approaches infinity. Thus, the bias toward the rejection of ng_r§l§rign§hip_rises with sample size. Further, he determined that the Durbin—Watson statistic converges in probability to 0, while R? has a nondegenerate limiting distribution as sample size grows. Thus, if nothing is done when differencing is called for, the consequences can be spurious regressions that appear to show statistically-significant relationships when a nonstationary series is regressed on time and on other, indepedent nonstationary series. In this situation, linear detrending is not very helpful. Nelson and Kang (1984) find that regressing a random walk on time using ordinary least squares tends to result in R2 of about 0.44 regardless of sample size, when in fact the variable has no dependence on time. If the random walk contains drift, the R? will be higher and will rise with sample size. Further, they report that the residuals of a random walk regressed on time will have a variance that is only about 14 percent of their true stochastic variance. If these residuals are thought to be "detrended," their variance will understate the true variance of the series. They also note that regressing one random walk on another, with time included in order to account for trend, is subject strongly to the spurious regression phenomenon. However, if a series is differenced when this 14 operation is unnecessary, the consequences can be shown to be much less serious. Now parameter estimates are inefficient, although they are still unbiased and consistent (Dickey, Bell and Miller 1986, p. 13). Plosser and Schwert (1978) also argued that the case of "overdifferencing," or taking differences when this is unnecessary, is much less troublesome than the case of "underdifferencing." Consider the case of a data series whose errors are serially uncorrelated in first differences. In this case, the errors ig_;§yg1§ will follow a nonstationary random walk and there may be problems with estimation under OLS. This results in the spurious regression situation, with possibly inconsistent coefficient estimates. Plosser and Schwert argue that if a linear specification is correct, it makes little difference in estimation whether levels, first differences or second differences are used, as long as the autocorrelation properties of the series are correctly taken into account. 3. DETECTING UNIT ROOTS Given that there can be serious consequences if an investigator uses a time series which contains a unit root as if it were a stationary series, econometricians have devised a number of tests to detect the presence of unit roots. The earliest tests are those of David A. Dickey and Wayne A. Fuller, known as "Dickey-Fuller tests." These tests were presented by Fuller (1976) and Dickey and Fuller 15 (1979). These authors proposed two types of tests, one a kind of "t statistic," and one based on the difference between the estimated coefficient on the lagged value of the time series and one (because the hypothesis of interest is that there is a unit root), normalized by sample size. They proposed three sets of these statistics, for a total of six statistics. To see where these statistics originate, consider the following regressions: Yt = th-l'+ 6t (10) Yt = u + PYt-l + 5t (11) Y: = I‘ + ”:4 + ’3‘: + 6t (12) Then for (10) the tests that Dickey and Fuller formulated are 3 = T(p* - 1) and ;, where T = sample size, 3* = ordinary least squares estimate of p, and ; = usual t statistic for the null hypothesis p = 1. Under the null hypothesis of a unit root (p = 1), these statistics have nonstandard distributions (for example, ; is not Student's t) and so Fuller (1976) provides tables of empirical cumulative distributions for 2(3 - 1) and ; in his text (p. 371, 373). Dickey and Fuller extended these statistics to two particular cases: when an intercept is present (equation 11) and when a time trend and an intercept are both present (equation 12). Both of these cases require modification of 16 the original critical values, and these are included in Fuller (1976). For (11) above the tests are designated as A 3” and r“, and for (12) they are designated as 6; and 4;. When an intercept is present, it is often said that the series includes "drift," while when time is included as an explanatory variable the situation is referred to as a time series with "trend." The distributional properties of these tests in the presence of drift and trend were also studied by Evans and Savin (1981, 1984), Nankervis and Savin (1985, 1987) and Schmidt (1990). However, there are some potentially serious problems with these tests -- namely, the presence of autocorrelation or conditional heteroskedasticity in the error terms of the time series. There have been two types of responses to these problems: one has been to use "augmented" Dickey- Fuller statistics, the second to try to correct the test statistics using error-covariance corrections. We turn to the earlier of these two approaches, that of the augmented Dickey-Fuller statistics. Said and Dickey (1984) present a version of an augmented Dickey-Fuller statistic in order to test the unit— root hypothesis in ARMA(p,q) time-series models. The problem they set out to address is that the tests then in use required either estimation of or knowledge of the parameters p and q; p is the number of autoregresssive parameters in the model and q is the number of moving- average parameters. They propose an autoregressive 17 approximation to the ARMA model that would allow testing of the unit root wirhgur knowledge of p or q. Their suggestion is as follows: estimate the coefficients of the ARMA model by regressing the first difference of the dependent variable (say yt - y£_1) on yf,1, Ayt_1, Ayt_2, , AYt-kv where A indicates first difference and where k is a "suitably chosen integer." Consistent estimation of these coefficients requires that k be a 1/3k function of sample size n, so they assume that n' converges to zero. These results allow testing for a unit root using the Dickey-Fuller statistics and critical values. Since the asymptotic theory doesn't indicate the value of k for any given n, Said and Dickey suggest using a variety of values for k, then using the standard regression F test to determine which coefficients are simultaneously zero. A different sort of generalization was provided by Phillips (1987) and his co-workers. Phillips sought to develop unit-root tests that did not require the assumptions of either independence or homoskedasticity, because, he argued, these were strong assumptions to make about the errors for most time series used in empirical work in economics. Phillips considers initially the time series generated by Yt = ayt-l + utl t = 1' 2’ .00, T (13) and makes the following assumptions about the innovations 18 E(ut) = O; supt El ut | B < no, for some 8 > 2; (14) {uthm is strong mixing, and 02 = lim mar-13%) 1“. For the latter condition, 02 must exist and be greater than zero, while for the third condition the mixing coefficients am must satisfy Eat?” < co 1 Also, ST is a "partial sum," defined as 1 ST: $11.1 Phillips notes that these conditions allow for a wide variety of possible error-generating mechanisms, and include all Gaussian and other stationary finite-order ARMA models. The second condition establishes the existence of moments and controls the allowable heterogeneity in the process by ruling out unbounded growth in the 8th absolute moments of ut, and Phillips calls this the "weakening moment condition." The third condition, the mixing condition, controls the extent to which innovations are time-dependent. As an example, it might allow for fairly strong dependence among events that are close in time, while events that are farther away in time might be nearly independent. The 19 summability condition also controls the "mixing decay rate" with respect to the probability of outliers. Under the second condition, as B approaches two the probability of outliers increases; but the summability condition requires that the effects of the outliers wear off more quickly, because the mixing rate increases. "Mixing conditions" are conditions on the dependence of a sequence of errors: "strong mixing" is also called "alpha mixing," and the quantity a(m) indicates how much dependence exists between events separated by at least m periods (White 1984, p. 45). The fourth condition is a convergence condition on the average variance of the partial sum ST, and 02 is required to be finite in order to avoid degenerate results. Using these assumptions (which were also used in Phillips (1986) to study spurious regressions), Phillips derives asymptotic distributions for statistics he names T(3 - 1) and to, but these are just the 3 and ; statistics of Dickey and Fuller. However, Phillips's derivation requires 2 2 estimation of two unknown parameters, Gu and a . These parameters are defined as: T 2 ' - 2 = l:m:T1§:1? a2 = lim Eur-13;) 1"" Phillips goes on to show how these variances can be consistently estimated, and these estimates are used in re- defined statistics whose asymptotic distributions are 20 independent of these parameters. With su2 and sT ,2 as the consistent estimators of an2 and 02, Phillips gives as new statistics A 2 2 «2 Za = T(a - l) - (1/2) (sou - su )/(T S) (16) and 1/2 -1 zt = 51/2“? - l)/su - (1/2)(:sm2 - su2)[su(T'28) 1 (17) Here, we define S as: S = 21:3’2-1 He notes that Za is a transformation of the 3 statistic, and that Zt is a transformation of the ; statistic. The limiting distributions of these statistics are invariant within a "very wide class" of weakly dependent and possibly heterogeneously distributed innovation sequences, Phillips 2 = 02, the limiting says. Also, in the case when au distributions of Zn and of zt are identical to those of T(3 - 1) and ta, respectively, meaning that the Dickey- Fuller tables could be used immediately with the tests that Phillips proposes. As he concludes, " ... much of the work of these authors on the distribution of the OLS estimator 8 and the regression t statistic under iid innovations remains relevant for a very much larger class of models. In fact, our results show that their tabulations appear to be relevant in almost any time series with a unit root." (Phillips 1987, p. 288) He goes on to say that one need 21 only compute Za or Zt given above and one can then turn to the relevant critical values given in Fuller (1976) or in Evans and Savin (1981). This approach was extended by Phillips and Perron (1988) to the case of models with drift and/or time trend. This paper builds on the approach of the 1987 paper by Phillips, making the same assumptions on the error process. However, Phillips and Perron now define transformations of the 20 and 2t statistics to allow testing in the following situations: yt = u + ayf_1 + ut (18) yt = p + fi(t - (l/2)T) + ayt_1 + ut. (19) These are the cases with drift and with drift and a time trend, respectively. For the case of drift they define the statistics 2(3), mg) and Z(t;), and for the case with drift and a time trend present they are 2(6), Z(t;), Z(t;) and Z(t§). Again, the problem is the dependence of the limiting distributions on the unknown variances on2 and 02, and again consistent estimators are available and these enter into the corrections. Also once again, the limiting distributions of these statistics are the same as those of the untransformed statistics when ah? = 02. "Thus, the critical values derived in the studies of Dickey and Fuller under the assumption of independently and identically distributed errors (ut) may be used with the new tests 22 proposed here, which are valid under much more general conditions" (Phillips and Perron 1988, p. 341). Another recent development of the basic tools Phillips introduced in the 1987 paper is found in Ouliaris, Park and Phillips (1988). This paper will be dicussed in more depth in Chapter 3, where the tests proposed are applied to the Nelson-Plosser data set. Ouliaris, Park and Phillips note that previous work by Dickey and Fuller (1979) and Phillips and Perron (1988) had used a unit root with drift but no trend as the null hypothesis; their paper explicitly allows for a deterministic time trend under the null hypothesis. Also, their new tests are invariant to the presence of drift and polynomial time trend in the trgg data-generating process. Again, more detailed discussion of the test procedures that Ouliaris, Park and Phillips propose will be found later in this study. Before turning to some of the empirical results from the unit-root literature, it might be worthwhile to review briefly some other approaches to unit-root testing, approaches that have relied on other implications of a unit root. Park and Choi (1988) propose a pair of tests that are based on the spuriousness of regressions that involve integrated processes. This paper is also discussed in greater detail in Chapter 3, so the discussion at this point will be brief. Their tests involve testing for a unit root with a time trend present, and the statistics they propose are transformations of the F statistic for the regression 23 q (20) yt = 20: akt" + 2,, which contains superfluous time-polynomial terms. The authors argue that if the errors in the regression are stationary, then the Wald test must show that the true coefficients are zero. But if the errors are integrated, then the regression is spurious and so the Wald test would ngt indicate the superfluous nature of the time-polynomial terms added to the above regression. Their J(p,q) statistic takes intggrgtign as the null hypothesis, while their G(p,q) statistic assumes stgtionarity as the null. They also propose a test for the order of the time polynomial involved, noting that overfitting the regression results in "substantial loss in power for a given finite data," while underfitting would also distort test results, because any deterministic, growing trends which were not accounted for would lend support to the unit-root hypothesis. Park and Choi argue that pretesting for order of time trend is necessary before attempting to test for the unit root itself. Accordingly, they propose another modified Wald statistic, which they call GA(p,q), and propose that this statistic be used to test stepwise for the significance of additional polynomial time terms. Another approach to testing for the presence of a unit root depends on the effects of a unit root on the variance of a time series. As we have already seen, a unit 24 root means that the variance is not constant, but instead grows over time. This has led to the development of some nonparametric tests for the unit-root hypothesis. Workers in this area have been Campbell and Mankiw (1987a, 1987b) and Cochrane (1986, 1987, 1988). Cochrane presents a test for unit root based on what he calls the "variance of long differences." As he notes, the variance of ever-lengthening differences (in time) of a random walk grows linearly with the differences; in contrast, if the series is stationary, the variance of these long differences approaches a constant, namely, twice the unconditional variance of the series. His suggestion is to use the variance ratio aAzz/aAyz, where the numerator is the innovation variance of the random-walk component of the time series and the denominator is the variance of first- differences of the series (Cochrane 1986, 1988). In that paper, Cochrane also showed that the innovation variance of the random-walk component is equal to the spectral density of Ayt at frequency zero. His 1987 paper is devoted to developing the use of the spectral density at frequency zero of a time series as a means of trying to capture the long-run mean reversion. Campbell and Mankiw (1987a, 1987b) also build on this approach. They note that Cochrane's variance ratio test can also be written as: (21) 1 var (Yt4k41 - Yc) k+1 va1:(ycu - yc) k = 1 + 2; (1 - (j/(k+1)))pj .1 25 where p) is the jth autocorrelation of Ayt and k is some period in the future. Campbell and Mankiw note that for a random walk, this ratio is equal to one because the variance of the (k + 1) lagged differences is (k + 1) times the variance of the once-lagged differences. For stationary series the ratio will approach zero as k grows. This is because the variance of the (k + 1) lagged differences approaches twice the variance of the series. Campbell and Mankiw name this ratio "9k," and note that this can be estimated consistently by using the sample auoocorrelations 33 in place of the population autocorrelations pj, as long as k increases with sample size. They also propose a different, though related, test in their papers. This test is based on the infinite sum of mgyigg;§ggrgg§ coefficients from a moving-average representation of a time series. They motivate this test, which they call A(1), as follows. Suppose that the change in the logarithm of a time series is a §t§tign§ry_prgtg§§ and has moving-average representation Ayt = A(L) 6t! with 6t as a white-noise error process and A(L) as an infinite polynomial in the lag operator. Then the impact of a shock on the lgggl of the time series in period t + k is 1 + A1 + A2 + ... + Ak. A(1) is defined by Campbell and Mankiw as the infinite sum of these moving-average coefficients, and will equal one for a random walk: for a 26 series that is stationary about a deterministic trend, it will equal zero. If R2 is defined as R? = 1 - var(e)/var(Ay), then A(1) can be expressed as: A(1) = (Iv/(1 - R2) In estimation, A(1) can be computed by: (22) flu) = (Wk/(1 - 6%) using the square of the first autocorrelation, 312, as an estimate of R2. However, except for the case of an AR(l) process, they note that this is an underestimate of R2. As can be seen by the above brief survey, the question of whether or not unit roots exist in economic time series has led to a wide variety of attempts to find an answer. We turn now to some of the answers that investigators have offered. 4. UNIT-ROOT TESTS IN PRACTICE One of the first applications of the Dickey-Fuller tests was a 1982 article by Charles Nelson and Charles Plosser in the J9nrngl_gt_flgngtgry_fitgngmig§. This article, if not still influential, is certainly much-cited and seems to have opened the way for Dickey-Fuller tests to be of interest to a wider audience. In this paper, titled "Trends and Random Walks in Macroeconomic Time Series -- Some Evidence and Implications," Nelson and Plosser considered a data set of 14 individual data series. The data series are 27 as follows: real gross national product (GNP), nominal GNP, real per-capita GNP, industrial production, employment, the unemployment rate, the GNP deflator, the Consumer Price Index, wages, real wages, the narrow money stock, velocity, bond yield and common stock prices. Except for bond yields, all data series were transformed to natural logarithms prior to further operations. These data series vary in length, because while they all have a common endpoint (namely 1970), the shortest series begin in 1909 (those involving GNP -- real, nominal and real per-capita), while the longest begin in 1860 (industrial production and the CPI). So the shortest series have 62 annual observations while the longest have 111 annual observations. Nelson and Plosser conclude, on the basis of the Dickey-Fuller tests and other statistical evidence, that the majority of these series behave in a way that is consistent with the presence of a unit root. These authors distinguish between series being DS, or "difference stationary," and series that are T8, or "trend stationary." They also devote several pages to a statistical decomposition of the real GNP data series; that is, they assume that the series can be viewed as the sum of a secular or growth component and a cyclical component. If this latter is assumed stationary, or transitory, then any sort of nonstationarity that exists must be due to the secular component, which they believe to be difference-stationary. In one of the concluding sections of their paper, 28 Nelson and Plosser draw some conclusions from their investigation: These inferences have potentially important implcations for business cycle research. For example, most of the recent developments in business cycle theory stress the importance of monetary disturbances as a source of output fluctuations. However, the disturbances are generally assumed to have only transitory impact (i.e., monetary disturbances have no permanent real effects). Therefore, the inference that the innovations in the non-stationary component have a larger variance than the innovations in a transitory component implies that real (non-monetary) disturbances are likely to be a much more important source of output fluctuations than monetary disturbances. This conclusion is further strengthened if monetary disturbances are viewed as only one of several sources of cyclical disturbances. (Nelson and PlOsser 1982, p. 159) And while they focus most of their attention on the analysis of real GNP, Nelson and Plosser note that since other data series in their data set showed similar characteristics, this seems to provide further evidence for their vieWpoint. They also note that they empirically cannot prove that the cyclical fluctuations are stationary; while the data gag reject some unobserved components models, it cannot reveal its true structure by itself. Perron (1986, 1988), using the statistics developed by Phillips (1987), reports that applying these statistics to the Nelson-Plosser data set "strongly support the conclusions reached by Nelson and Plosser" (Perron 1986, p. 24). Further, he concludes that The conclusions with regard to the Nelson-Plosser series are as follows: the unemployment rate and industrial production series are stationary around a linear trend (albeit a zero trend for the former). The following series are characterized by the presence of a unit root without a drift: real per-capita GNP, consumer prices, 29 velocity, interest rate and common stock prices. The ”remaining series have a unit root with a drift. (Perron 1986, p. 25) Perron notes, however, that these test statistics "may have quite low power against relevant alternatives which are close to unity, even for quite large sample sizes" (Perron 1986, p.28). Interestingly enough, Perron modified these conclusions somewhat later. In a more recent paper Perron sought to take into account the effects on time series of one-time changes in the level or in the slope of the trend function. In the case of the Nelson-Plosser data set, which concludes in 1970, the shift in the level or in the slope was taken by Perron to come from the 1929 collapse of the New York Stock Exchange. Perron derives test statistics that allow him to distinguish the DS hypothesis from the TS hypothesis when a break is present, and he applies these to the Nelson-Plosser data set, as well as to a series made up of quarterly observations on real GNP for the period from the first quarter of 1947 to the third quarter of 1986. This quarterly series includes the oil embargo of 1973-1974 and the subsequent oil-price "shock." Perron concludes that "most macroeconomic time series are not characterized by a presence of a unit root and that fluctuations are indeed transitory." The only events that have had permanent effects were the Great Crash and the oil embargo and price hike. The problem, Perron shows in his paper, is that the "usual tests" are not able to reject the unit-root 30 hypothesis if the deterministic trend of the series has a single break in either slope or intercept (Perron 1989). In the paper by Park and Choi (1988), which also included testing on the Nelson-Plosser data set, unit roots were again found to be pervasive. Park and Choi tested the series in both the level and logarithmic form. Recall that the procedure Park and Choi propose involves an initial test for grggr of the time trend, then a test for the unit root using either stationarity (i.e., root less than one) or integration (i.e., unit root) as the null hypothesis. Park and Choi reported that with the data in lgggl figrm they detected quadratic time trends in the series real GNP, GNP deflator, consumer prices, velocity and common stock prices. Third-order polynomial time trends were detected for the nominal GNP, industrial production, wages and money stock series. Linear trends were found for real per-capita GNP, employment and real wages, while the unemployment rate and bond yield series did not disclose any trend. When the data were tested in lgggrithmig_figrm, however, they report that most of the trends disappeared. In logged form, the highest order trend was just linear, and such trends were found in the real GNP, nominal GNP, industrial production, employment, GNP deflator, wages, real wages and money stock series. The remaining six series, they report, did not indicate the presence of a time trend. On the question of the presence of unit roots Park 31 and Choi found that, with the exception of the unemployment rate series, all the series fail to reject the unit root hypothesis, based on the use of their J(p,q) statistic. This holds regardless of whether the series is in logged or level form. They note that there were some conflicts between their J(p,q) and G(p,q) statistics (recall that the latter takes stgtigngrity as its null hypothesis). Specifically, the G(p,q) test was unable to reject stationarity for real GNP in levels or for the money stock in log form. Note also that Park and Choi conduct the tests here using the time trends that the first part of their procedure tells them are present. These papers have all explicitly considered the Nelson-Plosser data set, which is also analyzed later in this study. We now turn to some work that has been done on somewhat different data sets, with the idea that these might shed some light -- if indirect -- on the issue of integration in macroeconomic series. Campbell and Mankiw (1987a, 1987b) analyzed both U.S. and international data in two separate papers, in order to get some "feel" for the amount of "persistence" versus ”trend reversion" in macroeconomic data. In their paper in the Amgriggg_§ggngmig Egyigg they look at the log of quarterly real GNP for the period from the second quarter of 1952 to the third quarter of 1984, and also use the unemployment rate in an effort to separate the trend and the cyclical components in real GNP. They report finding more persistence for real GNP than is 32 implied by a stationary AR(2) process they use as a sort of "paradigm" of previous thinking about business cycles. Later in their paper they attempt a decomposition of output fluctuations into a cyclical component, associated with the business cycle, and a persistent component, not so related. To identify their bivariate model, they assume that the cyclical and trend components of real GNP are uncorrelated, but they also assume that the cyclical component is the part of GNP correlated with unemployment at leads and lags, while the trend component is that part of GNP that is uncorrelated with unemployment. They then regress the change in log real GNP on eight leads, eight lags and the contemporaneous, detrended unemployment rate. The fitted value of the regression is taken as a measure of the change in real GNP's cyclical component; the residual represents the change in the trend component. They report little difference in the magnitude of persistence of these two components. It appears that the cyclical component is somewhat more persistent, perhaps, but Campbell and Mankiw note that this can be manipulated, by eliminating leads and lags of the unemployment rate, or by including the unemployment rate only in differenced form. These sort of manipulations make the cyclical component appear less persistent than the transitory component. "In contrast to what many economists have assumed, fluctuations associated with the business cycle are not obviously more trend-reverting than other fluctuations in output," they conclude. (Campbell and 33 Mankiw 1987b, p. 776) In their working paper Campbell and Mankiw use the same methodology as in the paper discussed above, but now apply it to data for seven industrialized nations: Canada, the Federal Republic of Germany, France, Italy, Japan, the United Kingdom and the United States. The data is quarterly, and is on either real GNP or real gross domestic product (GDP), and begins in 1957 for four countries, in 1960 for two nations and in 1965 for one. Generally, the data end in 1986. They conclude that the data from other nations shows still more persistence than does the 0.8. data -- with the exception of the data from Britain, which shows somewhat less persistence. (Campbell and Mankiw 1987a) Cochrane (1986, 1988), using his variance-ratio approach on real per-capita GNP for annual periods from 1869 to 1985, finds that "annual growth rates of GNP contain a large mean reverting component." (1988, p. 894) He notes that postwar data shows the same behavior, so that this result is not due to the presence of prewar data. Also, he says that real GNP alone, unadjusted for population growth, shows the same sort of development, qualitatively, for both time periods. On the basis of his work, Cochrane argues that "an AR(2) about a deterministic trend or a difference stationary ARMA process with a very small random walk component is a good in-sample characterization of the behavior of GNP, contrary to the results of previous research" (Cochrane 1988, p. 916). 34 He argues that the reason other workers have found more persistence is because the random-walk component is a property of all autocorrelations together. Other workers have concentrated on the first few autocorrelations, driven by a desire to capture short-run dynamics parsimoneously. He argues that the Nelson-Plosser approach "cannot match the short-run dynamics and the small random walk component in the long-run dynamics at the same time (Cochrane 1988, p. 915). Instead, they capture the short-run dynamics and these incorrectly imply large random-walk components. But when used to estimate the size of the random-walk component, this imposes identifying restrictions across the frequency range, in order to infer long-run propoerties from short-run dynamics. Cochrane's other (1987) paper looks again at the variance ratio, but also at spectral-density estimation of real GNP, the unemployment rate, the log of real durable consumption, the log of real nondurable consumption, the log of real gross private domestic investment and the log of industrial production, and he finds that only nondurable consumption behaves "very much" like a random walk. He concludes by saying that the "unit root components are quite small," according to the evidence he has presented in his work. 5. CONCLUSION This chapter has sought to serve as a general introduction to the unit-root question in econometrics and 35 in economics. As we have seen, there are econometric reasons for wanting to know whether a unit root exists in a given time series, as well as economic reasons for desiring this information. Econometrically, the presence of a unit root indicates what sort of means should be used to render a nonstationary series into a stationary one; in economics, the presence of a unit root has been thought to shed light on the issue of which of two competing views of the business cycle is the one closest to reality. This chapter has also reviewed the various tests for a unit root that have been proposed. Some of the tests involve estimating and testing the unit root itself, while others involve using some implications of the presence of a unit root in order to make inferences about it. As we have seen, much of the early evidence was in favor of the unit- root hypothesis, but there have also been papers that have questioned strongly whether unit roots are as pervasive as some investigators have claimed. CHAPTER 2 DICREY-FULLER TESTS WITH TREND This chapter discusses Dickey-Fuller tests in the presence of a time trend, and offers corrections to the critical values found in Fuller (1976). 1. INTRODUCTION Following the original work by Nelson and Plosser (1982), there has been a great deal of interest in discriminating between trend-stationary and difference- stationary models for economic time series. Usually such attempts to discriminate between these models take the form of a statistical test of the difference-stationary (unit root) hypothesis. The most commonly-used tests have been the Dickey-Fuller 3, and ;, tests, based on regression of the variable in question on an intercept, time trend and its one-period lagged value: that is, a regression of the form yt = a0 + fiyt_1-+ alt + ut, t = l, ... , T. (1) (The 3, statistic equals T(§ - 1), where 5 is the least squares estimate, and the ;, statistic is the usual t statistic for the hypothesis 8 = 1.) In this chapter, we assume that the model (1) is correct, in the sense that the 36 37 data-generating process is of this form, with the errors ut independently and identically distributed as N(0, auz). The distributions of these statistics are non- standard, and Dickey (1976) and Fuller (1976) provide commonly-used critical values for a selection of sample sizes. However, the distributions of the test statistics 3, and ;, depend on the value of the coefficient of the time trend, and it is important to realize that the Dickey-Fuller tabluations assume that, in the data-generating process, the value of this coefficient is zero. The use of the Dickey- Fuller tables is not justified unless the coefficient of the time trend is zero, or so close to zero as to make no difference. The distributions of the 3, and ;, statistics when the coefficient of the time trend is not equal to zero (in the data-generating process) have been considered by Nankervis and Savin (1987) and by DeJong et al. (1988). They write the model (1) as: yt* = 60 + fiyt_1* + 61t + ut*, (2) where ya = (yt - yo)/au. so = [a0 + yaw - 111/0... 61 = 021/0u and ut* = uo/ou. Under the null hypothesis 8 = 1, the distributions of the statistics 3, and ;, depend only on sample size (T) and on the standardized coefficient of trend (61), and can therefore be tabulated. However, these papers provide only very limited tabulations of these 38 distributions. In this chapter we provide tabulations that are detailed enough to allow accurate interpolation. It is intuitively reasonable to estimate 61 = al/ou from the regression (1), and then to test the unit—root hypothesis using the appropriate critical value found by interpolation in these tables. DeJong et al. (1988) refer to this as an "interpolation test," and they show how to construct an estimate of 61 that is unbiased under the null hypothesis (B = 1) from the regression (1) with the unit root imposed. The main function of these tables is to make such an interpolation test practical. In the following chapter these tests are carried out, with various estimates of 61 used. This chapter can also be regarded as an extension to a different model of the analysis of Schmidt (1990). That paper considered the simpler data—generating process yt = a + fiyt'l + ut’ t = 1, 000 , T, (3) which clearly corresponds to a1 = 0 in (1) above. Running the regression (3) generates the Dickey-Fuller statistics 3” and $fi, whose distributions under the null hypothesis depend on sample size (T) and on standardized drift (a/au). Schmidt (1990) therefore provides tabulations of the A distributions of the 3” and 7p sample size and of standardized drift. (These tables extend statistics as a function of earlier but limited tabulations made by Evans and Savin (1984) and Nankervis and Savin (1985).) Guilkey and Schmidt 39 (1989) provide evidence that the interpolation test using these tables is reasonably accurate under the null hypothesis, and that it is more powerful than the 3, and ;, tests (whose distributions do not depend on the value of the drift parameter). Of course, the Schmidt (1990) and Guilkey and Schmidt (1989) results are relevant only when the data- generating process does not include a nonzero coefficient of the time trend. When the coefficient of the trend variable is nonzero in the data-generating process, the tests based on 3” and ;p (1986) and West (1987) have pointed out that these tests are are not appropriate. For example, Perron inconsistent against trend-stationary alternatives. Since trend-stationary alternatives are clearly relevant, this constitutes a strong argument for using the 3, and ;, tests and for allowing nonzero al in (1). An interpolation test using the tables of this chapter is one obvious path to follow. 'An alternative path is to find a test statistic whose distribution does not depend on 61. One such test has recently been proposed by Ouliaris, Park and Phillips (1988), based on a regression that includes squared time as well as time in the regression. A comparison of the results obtained from the interpolation test and the test of Ouliaris, Park and Phillips also will be undertaken in the next chapter. 40 2. TABULATION OP CRITICAL VALUES FOR THE t-STATISTIC This section considers the tabulation of the distribution of the t-statistic for the hypothesis 8 = 1 in the regression (1), which is the Dickey-Fuller ;, statistic. Without loss of generality we take yo = 0, on = 1 and a0 = 0. The distribution of the t-statistic under the null hypothesis depends on only two parameters: sample size, T; and the standardized coefficient of the trend variable, 61. (Note that the standardized trend variable 61 = al/au just equals a1 because we take au = 1.) We used 10 values of T, ranging from 10 to 2000, and 30 values of 61, ranging from zero to three. The entries in the tables were calculated by a Monte Carlo simulation, using 50,000 replications. The calculations were performed in double-precision FORTRAN using the Lahey FORTRAN compiler F77L. Random normal deviates were created using the FORTRAN subroutines GASDEV and RAN3 of Press et al. (1986); some checks on the adequacy of this random-number generator are described in Guilkey and Schmidt (1989). Tables 1 through 8 give the critical values for the t-statistic at the critical levels one percent, 2.5 percent, five percent, 10 percent, 90 percent, 95 percent, 97.5 percent and 99 percent, respectively. Two different types of asymptotic behavior can be discerned in these tables. First, for any nonzero value of 61, the distribution of the t-statistic converges to standard normal as T approaches ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.003 0.01 0.015 0.02 0.025 0.03 0.05 “NHHHHHOOOOOO C OOGO‘hNOQO‘bNi-‘g ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.003 0.01 0.015 0.02 0.025 0.03 10 -5.32 -5.32 -5.32 -5.32 -5.32 -5.32 -5.33 -5.33 -5.35 -5.33 -5.33 -5.30 -5.29 -5.30 -5.29 -5.32 -5.28 -5.24 -5.01 -4.30 -3.86 -3.62 -3.50 -3.42 -3.36 -3.29 -3.29 -3.26 -3.17 -3.00 1000 -3.98 -3.76 -3.32 -2.99 -2.81 -2.72 -2.52 -2.42 -2.39 -2.37 -2.36 -2.34 -2.33 25 -4.38 -4.38 -4.38 -4.38 -4.37 -4.38 -4.39 -4.38 -4.37 -4.37 -4.36 -4.35 -4.32 -4.30 -4.28 -4.13 -3.92 -3.55 -3.02 -2.78 -2.69 -2.64 -2.60 -2.58 -2.57 -2.56 -2.55 -2.55 -2.49 2000 -3.94 -3.02 -2.68 -2.57 -2.51 -2.47 -2.40 -2.36 -2.35 -2.34 -2.34 -2.33 -2.33 50 -4.14 -4.14 -4.14 -4.14 -4.14 -4.13 -4.15 -4.13 -4.12 -4.10 -4.06 -3.96 -3.83 -3.70 -3.56 -3.12 -2.92 -2.76 -2.58 -2.50 -2.47 -2.45 -2.44 -2.43 -2.43 -2.43 -2.42 -2.42 -2041 -3.96 -2.33 -2.33 -2.33 -2.33 -2.33 -2.33 -2.33 -2.33 -2.33 -2.33 -2.33 -2.33 41 TABLE 1 100 -4.04 -4.05 -4.04 -4.05 -4.04 -4.04 -4.05 -3.97 -3.87 -3.74 -3.58 -3.23 -3.02 -2.88 -2.79 -2.61 -2.53 -2.48 -2.41 -2.38 -2.37 -2.36 -2.36 -2.35 -2.35 -2.35 -2.37 0000 150 -4.04 -4.03 -4.01 -4.01 -4.00 -4.01 -3.95 -3.74 -3.46 -3.21 -3.05 -2.83 -2.71 -2.64 -2.59 -2.51 -2.47 -2.44 -2.41 -2.39 -2.38 -2.38 -2.38 -2.38 -2035 NHOO" \JUI 200 -4.02 -4.02 -4.02 -4.01 -3.99 -3.98 -3.84 -3.42 -3.09 -2.90 -2.80 -2.65 -2.57 -2.53 -2.50 -2.44 -2.41 -2.40 -2.37 -2.36 -2.36 -2.35 -2.35 1000 -2.32 -2.32 -2.33 250 -4.01 -4.02 -4.01 -3.99 -3.94 -3.91 -3.67 -3.15 -2.89 -2.72 -2.68 -2.57 -2.52 -2.49 -2.46 -2.42 -2.40 -2.39 -2.37 ONE PERCENT CRITICAL VALUES FOR THE t-STATISTIC 500 -4.00 -3.94 -3.86 -3.73 -3.60 -3.45 -2.92 -2.64 -2.54 -2.49 -2.46 -2.42 -2.39 -2.38 -2.37 -2.36 -2.36 -2.35 -2.34 2000 '2.33 -2033 -2.33 -2.33 -2.33 -2.33 -2.33 ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 UNHHHHHOOOOO o 0 o o o o o o o o o oommbwommhup 8 ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 0.03 10 -4.55 -4.55 -4.55 -4.55 -4.55 -4.55 -4.56 -4.55 -4.55 -4.55 -4.54 -4.54 -4.54 -4.55 -4.55 -4.53 -4.50 -4.47 -4.21 -3.49 -3.08 -2.91 -2.81 -2.74 -2.69 -2.63 -2.63 -2.60 -2.54 -2.37 1000 -3.66 -3.45 -2.96 -2.64 -2.48 -2.38 -2.17 -2.07 -2.03 -2.02 -2.01 -1099 -1098 25 -3.94 -3.94 -3.94 -3.94 -3.94 -3.94 -3.94 -3.94 -3.94 -3.94 -3.93 -3.92 -3.89 -3.88 -3.84 -3.68 -3.45 -3.11 -2.59 -2.33 -2.25 -2.21 -2.18 -2.17 -2.15 -2.14 -2.14 -2.13 -2.07 2000 -3.66 -2.67 -2.32 -2.21 -2.15 -2.11 -2.04 -2.00 -1.99 -1.98 -1.98 -1.97 -1.97 50 -3.80 -3.79 -3.79 -3.79 -3.79 -3.79 -3.78 -3.78 -3.75 -3.72 -3.70 -3.58 -3.45 -3.31 -3.16 -2.72 -2.52 -2.37 -2.19 -2.10 -2.06 -2.05 -2.04 -2.03 -2.02 -2.02 -2.02 -2.02 -2.01 -3.66 -1.96 -1.96 -1.96 -1.96 -1.96 -1.96 -1.96 -1.96 -1.96 -1.96 -1.96 -1.96 42 TABLE 2 100 -3.71 -3.72 -3.71 -3.71 -3.72 -3.72 -3.72 -3.64 -3.52 -3.37 -3.20 -2.83 -2.62 -2.48 -2.40 -2.23 -2.16 -2.10 -2.04 -2.00 -1.99 -1.99 -1.98 -1.98 '1.98 -1.98 -1.98 150 -3.71 -3.71 -3.70 -3.70 -3.70 -3.70 -3.64 -3.39 -3.11 -2.83 -2.66 -2.44 -2.32 -2.26 -2.21 -2.12 -2.08 -2.05 -2.01 -1.99 -1.99 -1.98 -1.98 -1.98 200 -3.71 -3.71 -3.70 -3.68 -3.66 -3.64 -3.51 -3.07 -2.72 -2.52 -2.42 -2.28 -2.21 -2.16 -2.13 -2.07 -2.04 -2.02 -2.00 -1.99 -1.98 -1.98 -1.97 1000 -1.98 -1.98 -1.96 250 '3.70 -3.69 -3.67 -3.65 -3.63 -3.60 -3.33 -2.78 -2.51 -2.35 -2.29 -2.19 -2.14 -2.10 -2.08 -2.04 -2.02 -2.01 -1.99 -1.97 2000 -1.97 -1.97 -1.96 2.5 PERCENT CRITICAL VALUES FOR THE t STATISTIC 500 -3.69 -3.65 -3.55 -3.42 -3.26 -3.09 -2.54 -2.26 -2.17 -2.12 -2.09 -2.05 -2.03 -2.02 -2.01 -2.00 -1.99 -1.99 -1.97 -1.96 -1.96 ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 0.03 0.05 0.07 UNHHHHHOOOOO 0 8 O I I O O O O O C O O OOWO‘hMOGO‘hNH ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 0.03 10 -4.00 -4.00 -4.00 -4.00 -4.00 -4.00 -4.00 -4.00 -4.00 -4.00 -4.00 -3.99 -4.00 -3.99 -3.99 -3.97 -3.95 -3.90 -3.62 -2.91 -2.57 -2.40 -2.30 -2.24 -2.19 -2.13 -2.13 -2.11 -2.04 -1.90 1000 -3.41 -3.18 -2.66 -2.33 -2.17 -2.06 -1.87 -1.76 -1.73 -1.71 -1.70 -1.68 -1.68 25 -3.60 -3.60 -3.60 -3.60 -3.60 -3.60 -3.60 -3.60 -3.60 -3.60 -3.60 -3.58 -3.56 -3.52 -3.49 -3.32 -3.08 -2.74 -2.22 -1.97 -1.88 -1.84 -1.82 -1.80 -1.79 -1.78 -1.77 -1.76 -1.72 2000 -3.41 -2.37 -2.02 -1.90 -1.84 -1.80 -1.73 -1.69 -1.68 -1.67 -1.67 -1.66 -1.66 50 -3.51 -3.51 -3.51 -3.51 -3.51 -3.51 -3.50 -3.48 -3.45 -3.44 -3.40 -3.29 -3.14 '2.98 -2.83 -2.38 -2.18 -2.03 -1.85 -1.76 -1.73 -1.72 -1.71 -1.70 -1.69 -1.69 -1.69 -1.69 -1.68 a -3.41 -1.65 -1.65 -1.65 -1.65 ~1.65 -1.65 -l.65 -1.65 -1.65 -1.65 -1.65 -1.65 43 TABLE 3 100 -3.45 -3.45 -3.46 -3.46 -3.46 -3.46 -3.44 -3.37 -3.23 -3.06 -2.89 -2.51 -2.30 -2.17 -2.10 -1.93 -1.85 -1.80 -1.74 -1.70 -1.69 -1.69 -1.68 -1.68 -1.68 -1.68 -1.66 150 -3.45 -3.45 -3.44 -3.45 -3.44 -3.43 -3.38 -3.11 -2.79 -2.53 -2.35 -2.12 -2.01 -1.94 -1.89 -1.80 -1.76 -1.73 -1.69 -1.67 -1.67 -1.66 -1.66 -1.66 200 -3.44 -3.44 -3.43 -3.41 -3.40 -3.37 -3.23 -2.75 -2.40 -2.22 -2.11 -1.96 -1.89 -1.84 -1.81 -1.74 -1.72 -1.70 -1.67 -1.66 -1.66 -1.66 -1.65 1000 -1.67 -1.67 -1.65 250 -3.44 -3.43 -3.42 -3.40 -3.37 -3.33 -3.04 -2.46 -2.20 -2.05 -1.99 -1.88 -1.82 -1.79 -1.77 -1.72 -1.70 -1.69 -1.67 -1.65 2000 -1.66 -1.66 -1.65 FIVE PERCENT CRITICAL VALUES FOR THE t STATISTIC 500 -3.44 -3.40 -3.28 -3.15 -2.97 -2.78 -2.24 -1.96 -1.86 -1.81 -1.78 -1.74 -1.72 -1.71 -1.70 -1.68 -1.68 -1.67 -1.65 -1.65 -1.65 ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 0.03 0.05 UNHHHHHOOOOOO O OOflGbNOQGbND—‘S ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 0.03 10 -3.45 -3.45 -3.45 -3.45 '3.45 -3.45 -3.45 -3.44 -3.45 -3.45 -3.45 -3.45 -3.44 -3.45 -3.44 -3.42 -3.39 -3.34 -3.05 -2.36 -2.02 -1.87 -1.78 -1.72 -1.67 -1.62 -1.62 -1.60 -1.54 -1.42 1000 -3.12 -2.86 -2.30 -1.97 -1.80 -1.70 -1.50 -1.40 -1.36 -1.34 -1.33 -1.31 -1.30 25 -3.25 -3.25 -3.25 -3.25 -3.25 -3.25 -3.24 -3.24 -3.24 -3.24 -3.23 -3.21 -3.19 -3.16 -3.11 -2.93 -2.66 -2.31 -1.81 -1.57 -1.48 -1.44 -1.42 -1.40 -1.39 -1.38 -1.38 -1.37 -1.32 2000 -3.12 -2.00 -1.64 -1.52 -1.46 -1.42 -1.35 -1.31 -1.30 -1.29 -1.29 -1028 -1.28 50 -3.19 -3.19 -3.19 -3.18 -3.18 -3.19 -3.18 -3.16 -3.14 -3.12 -3.08 -2.96 -2.80 -2.62 -2.44 -1.99 -1.80 -1.65 -1.48 -1.39 -1.36 -1.35 -1.34 -1.33 -1.33 -1.32 -1.32 -1.32 -1.30 a -3.12 -1.28 -1.28 -1.28 -1.28 -1.28 -1.28 -1.28 -1.28 -1.28 -1.28 -1.28 -1028 44 TABLE 4 100 -3.16 -3.15 -3.15 -3.15 -3.15 '3.14 -3.13 -3.06 -2.91 -2.73 -2.52 -2.15 -1.94 -1.81 -1.73 -1.57 -1.49 -1.44 -1.37 -1.34 -1.33 -1.32 -1.32 -1.32 -1.31 -1.31 -1.29 10 PERCENT CRITICAL VALUES FOR 150 -3.15 -3.15 -3.15 -3.14 -3.14 -3.14 -3.06 -2.77 -2.43 -2.16 -1.99 -1.76 -1.65 -1.58 -1.53 -1.43 -1.39 -1.36 -1.33 -1.31 -1.30 -1.30 -1.30 -1.30 THE t 200 -3.14 -3.14 -3.13 -3.12 -3.10 -3.08 -2.90 -2.39 -2.03 -1.85 -1.74 -1.59 -1.51 -1.47 -1.44 -1.38 -1.35 -1.33 -1.31 -1.29 -1.29 -1.29 -1.29 1000 -1.30 -1.30 -1.28 STATISTIC 250 500 -3.14 -3.14 -3.14 -3.10 -3.12 -2.99 -3.10 -2.83 -3.07 -2.63 -3.03 -2.43 -2.70 -1.88 -2.09 -1.59 -1.82 -1.50 -1.69 -1.45 -1.61 -1.42 -1.50 -1.38 -1.45 -1.36 -1.41 -1.35 -1.39 -1.34 -1.29 -1.28 2000 n -1.28 -1.28 -1.27 -1.28 -1.28 ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 UNHHHHHOOOOO 0 80000000000. OOQO‘bNOQO‘hNH ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 0.03 10 -0.96 -0.96 -0.96 -0.96 -0.96 -0.96 -0.96 -0.96 -0.96 -0.96 -0.96 -0.95 -0.95 -0.94 -0.93 -0.88 -0.80 -0.65 -0.15 0.48 0.78 0.95 1.04 1.10 1.15 1.18 1.20 1.22 1.28 1.42 1000 -1.25 -0.50 0.18 0.54 0.73 0.84 1.07 1.18 1.21 1.23 1.24 1.26 1.27 25 -1.15 -1.15 -1.15 -1.15 -1.15 -1.15 -1.15 -1.14 -1.14 -1.12 -1.10 -1.04 -0.95 -0.85 -0.75 -0.37 -0.05 0.29 0.80 1.06 1.14 1.19 1.21 1.23 1.24 1.25 1.26 1.26 1.32 2000 -1.25 0.49 0.89 1.03 1.09 1.13 1.21 1.25 1.26 1.27 1.27 1.28 1.28 50 '1.20 -1.20 '1.20 -1.20 -1.20 -1.19 -1.18 -1.13 -1.06 -0.96 -0.85 -0.56 -0.31 -0.10 0.08 0.54 0.76 0.93 1.12 1.21 1.24 1.25 1.26 1.27 1.27 1.28 1.28 1.28 1.30 a -1.25 1.28 1.28 1.28 1.28 1.28 1.28 1.28 1.28 1.28 1.28 1.28 1.28 45 TABLE 5 100 -1.23 -1.23 -1.23 -1.22 -1.22 -1.20 -1.12 -0.83 -0.51 -0.25 -0.02 0.36 0.59 0.73 0.82 1.01 1.09 1.15 1.21 1.25 1.26 1.26 1.27 1.27 1.27 1.27 1.29 0000 8 O O O O 90 PERCENT CRITICAL VALUES FOR 150 -1.23 -1.23 -1.22 -1.19 -1.17 -1.13 -0.88 -0.34 0.06 0.33 0.52 0.77 0.90 0.97 1.02 1.12 1.17 1.20 1.23 1.25 1.26 1.26 1.26 1.27 1.29 ”HOOP ‘10! THE t 200 -1.24 -1.24 -1.21 -1.16 -1.10 -1.02 -0.58 0.07 0.45 0.65 0.78 0.95 1.03 1.08 1.11 1.17 1.20 1.22 1.24 1.25 1.26 1.26 1.29 1000 1.27 1.28 STATISTIC 250 500 -1.24 -1.25 -1.22 -1.11 -1.17 -0.78 -1.07 -0.44 -0.97 -0.16 -0.85 0.07 -0.28 0.65 0.40 0.98 0.69 1.08 0.84 1.13 0.93 1.16 1.04 1.20 1.10 1.22 1.14 1.23 1.16 1.24 1.29 1.28 2000 a 1.28 1.28 1.28 1.28 1.28 ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 uNI-‘Hl-‘HI-‘OOOOO O OOQObNOQO-fiNH ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 0.03 10 -0.61 -0.61 -0.61 -0.61 -0.61 -0.61 -0.61 -0.61 -0.61 -0.61 -0.60 -0.60 -0.59 -0.58 -0.57 -0.50 -0.41 -0.26 0.24 0.89 1.21 1.38 1.48 1.55 1.59 1.63 1.65 1.67 1.74 1.90 1000 -0.95 -0.15 0.53 0.89 1.08 1.20 1.43 1.54 1.57 1.59 1.60 1.62 1.63 25 -0.83 -0.83 -0.83 -0.83 -0.83 -0.83 -0.82 -0.81 -0.80 -0.79 -0.76 -0.69 -0.59 -0.49 -0.38 0.01 0.31 0.65 1.18 1.45 1.54 1.59 1.61 1.63 1.64 1.65 1.66 1.66 1.72 2000 -0.94 0.84 1.25 1.39 1.45 1.49 1.57 1.61 1.62 1.62 1.63 1.64 1.64 50 -0.88 -0.88 -0.89 -0.88 -0.88 -0.88 -0.86 -0.80 -0.71 -0.60 -0.49 -0.21 0.04 0.26 0.45 0.90 1.12 1.29 1.48 1.58 1.61 1.62 1.63 1.64 1.64 1.65 1.65 1.65 1.68 a -0.94 1.65 1.65 1.65 1.65 1.65 1.65 1.65 1.65 1.65 1.65 1.65 1.65 46 TABLE 6 100 -0.91 -0.91 -0.91 -0.91 -0.90 -0.89 -0.78 -0.47 -0.16 0.10 0.32 0.71 0.95 1.09 1.18 1.37 1.45 1.51 1.58 1.62 1.63 1.63 1.63 1.64 1.64 1.64 95 PERCENT CRITICAL VALUES FOR 150 -0.93 -0.92 -0.90 -0.88 -0.84 -0.80 -0.53 0.01 0.40 0.68 0.87 1.13 1.26 1.34 1.39 1.49 1.54 1.56 1.60 1.62 1.62 1.63 1.63 1.63 00000! \IU'I 80 0 0 0H NHOO THE t 200 -0.93 -0.92 -0.89 -0.83 -0.76 -0.67 -0.24 0.41 0.79 1.01 1.14 1.31 1.40 1.44 1.48 1.54 1.57 1.59 1.62 1.63 1.63 1.64 1000 1.63 1.64 STATISTIC 250 500 -0.93 -0.93 -0.91 -0.77 -0.84 -0.42 -0.74 -0.09 -0.62 0.19 -0.49 0.42 0.08 1.01 0.74 1.34 1.05 1.44 1.21 1.49 1.30 1.52 1.42 1.56 1.48 1.58 1.51 1.59 1.54 1.60 1.58 1.62 1.60 1.62 1.62 1.63 1.63 1.65 1.65 2000 a 1.64 1.65 1.64 1.65 1.65 ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 UNHHHHHOOOOO O oommbuommbww ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 0.03 10 -0.28 -0.28 -0.28 -0.28 -0.28 -0.28 -0.28 -0.28 -0.27 -0.27 -0.27 -0.28 -0.25 -0.24 -0.22 -0.16 -0.07 0.08 0.58 1.26 1.61 1.83 1.94 2.01 2.06 2.10 2.14 2.16 2.24 2.37 1000 -0.68 0.15 0.82 1.18 1.38 1.51 1.73 1.85 1.88 1.90 1.91 1.93 1.94 25 -0.53 '0.53 -0.53 -0.53 -0.53 -0.53 -0.53 -0.52 -0.51 -0.49 -0.46 -0.38 -0.28 -0.19 -0.08 0.31 0.63 0.98 1.53 1.82 1.91 1.95 1.98 1.99 2.01 2.02 2.03 2.03 2.07 2000 0.65 1.14 1.56 1.70 1.77 1.81 1.88 1.92 1.94 1.94 1.95 1.95 1.96 50 -0.61 -0.61 -0.61 -0.61 -0.60 -0.58 -0.51 -0.41 -0.29 -0.17 0.11 0.35 0.56 0.73 1.20 1.44 1.61 1.81 1.90 1.93 1.94 1.95 1.96 1.96 1.97 1.97 1.97 47 TABLE 7 100 -0.63 -0.63 -0.63 -0.62 -0.60 -0.59 -0.48 -0.16 0.14 0.40 0.62 1.01 1.25 1.40 1.50 1.69 1.77 1.83 1.89 1.93 1.94 1.95 1.95 1.95 1.96 1.96 0000 150 -0.65 -0.65 -0.63 -0.61 -0.57 -0.52 -0.23 0.31 0.71 0.98 1.18 1.45 1.58 1.66 1.71 1.81 1.86 1.89 1.93 1.95 1.96 1.96 1.96 1.96 NHOOH \IUI 200 -0.64 -0.64 -0.59 -0.54 -0.46 *0.37 0.07 0.73 1.11 1.33 1.45 1.63 1.71 1.76 1.79 1.86 1.88 1.91 1.93 1.94 1.94 1.94 1.97 1000 1.95 1.95 250 -0.65 -0.62 -0.55 -0.43 -0.31 -0.18 0.37 1.05 1.35 1.51 1.60 1.73 1.79 1.82 1.85 1.89 1.91 1.93 1.95 2000 1.96 1.96 97.5 PERCENT CRITICAL VALUES FOR THE t STATISTIC 500 -0.65 -0.47 -0.11 0.21 0.48 0.71 1.31 1.63 1.74 1.79 1.82 1.87 1.89 1.90 1.91 1.92 1.93 1.93 ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 0.03 0.05 O \l UNHHHHHOOOOOO O OOGfibNOGGbNO—J ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 0.03 10 0.13 0.13 0.13 0.13 0.13 0.13 0.13 0.13 0.13 0.13 0.14 0.15 0.15 0.16 0.17 0.23 0.33 0.48 1.01 1.77 2.17 2.39 2.51 2.59 2.66 2.70 2.73 2.77 2.84 3.00 1000 -0.33 0.49 1.16 1.53 1.74 1.86 2.10 2.22 2.25 2.27 2.28 2.30 2.30 25 -0.19 -0.19 -0.19 -0.19 -0.19 -0.19 -0.19 -0.17 -0.15 -0.13 -0.10 -0.01 0.08 0.19 0.29 0.69 1.01 1.38 1.96 2.26 2.36 2.40 2.43 2.45 2.46 2.47 2.48 2.48 2.49 2000 -0.32 1.49 1.91 2.05 2.11 2.15 2.23 2.27 2.29 2.29 2.30 2.31 2.31 50 -0.28 -0.27 -0.27 -0.26 -0.26 -0.26 -0.22 -0.15 -0.04 0.08 0.19 0.47 0.72 0.92 1.09 1.58 1.82 1.98 2.19 2.29 2.31 2.33 2.43 2.35 2.35 2.35 2.36 2.36 48 TABLE 8 100 -0.28 -0.28 -0.28 -0.27 -0.25 -0.23 -0.10 0.21 0.50 0.75 0.96 1.35 1.59 1.75 1.85 2.06 2.14 2.20 2.27 2.31 2.32 2.33 2.33 2.33 2.33 2.33 99 PERCENT CRITICAL VALUES FOR 150 -0.31 -0.30 -0.28 -0.24 -0.19 -0.15 0.15 0.68 1.07 1.35 1.55 1.82 1.95 2.03 2.09 2.18 2.23 2.26 2.30 2.31 2.32 2.32 2.32 2.32 THE t 200 -0.31 -0.28 -0.24 -0.16 -0.08 0.01 0.45 1.08 1.45 1.68 1.81 1.99 2.08 2.13 2.16 2.22 2.25 2.27 2.29 2.30 2.31 2.31 1000 2.31 2.31 STATISTIC 250 500 -0.33 -0.32 -0.31 -0.12 -0.21 0.25 -0.10 0.56 0.02 0.82 0.15 1.03 0.73 1.66 1.38 2.01 1.69 2.12 1.85 2.17 1.95 2.20 2.07 2.25 2.13 2.27 2.17 2.28 2.19 2.29 2.23 2.31 2.25 2.32 2.26 2.32 2.28 2.34 2.33 2000 a 2.31 2.33 2.31 2.33 2.33 49 infinity, as was pointed out by Dickey and Fuller (1979, p. 429). However, when 61 is equal to zero there is little change in the critical values past T = 100. Secondly, for any fixed value of T, the distribution of the t-statistic converges to Student's t with (T - 3) degrees of freedom, as noted by Nankervis and Savin (1987). Basically this means that as either 61 or T increases, the distribution of the t- statistic shifts to the right: the lower tail critical values increase (decrease in absolute value) and the upper tail critical values also increase. The changes in critical values as 61 and T change are not always monotonic in Tables 1 through 8, however. First, in the upper tail (Tables 5 through 8), the critical values for 61 = 0 decrease as T increases, and therefore for small values of 61 the critical values decrease at first as T increases, before eventually increasing as T gets sufficiently large. For example, for 61 = 0.0006, the 95 percent critical value decreases as T increases from 10 to 25, 50 and 100, but it increases for further increases in T above T = 100. Second, for very large values of 61 (e.g., 61 > 1) and small values of T, the 97.5 percent and 99 percent critical values decrease as T increases. Third, there are some minor failures of monotonicity just because of the randomness of the simulation. Most noticeably, for small values of T the critical values often do not increase monotonically in 61. This suggests that some "smoothing" of 50 the tables would be in order. (It may be noted that Schmidt (1990) also found some smoothing to be necessary.) It is obvious from the tables that the critical values tabulated in Fuller (1976) are appropriate only when the coefficient of the time trend (61) is zero, or when both 61 and sample size (T) are small. Also, there is something of a tradeoff: the smaller 61 is, the larger the value of T that it takes to move away from the Dickey-Fuller distribution, and conversely. For example, for 61 = 0.0006 and T = 500, the five percent lower-tail critical value is -3.15, and this is considerably closer to the Dickey-Fuller value of -3.44 than it is to the asymptotic value of -1.645. On the other hand, for 61 = 0.05 we are reasonably close to the asymptotic value even for T = 200. It may be of interest to ask how large the standardized coefficient of the time trend must be before the actual critical value is closer to that given by the t- distribution than it is to that given by the Dickey-Fuller distribution. For the five-percent critical level, a partial answer to this question is as follows: for T = 25, 61 = 0.11; for T - 50, 61 - 0.04; for T = 100, 61 = 0.014: for T = 250, 61 = 0.004; and for T = 500, 61 = 0.0015. To put this into perspective it is important to have some feel for how large 61 is likely to be in empirical applications. See the following chapter for some estimates of this parameter. For the 14 data series used by Nelson 51 and Plosser (1982), all measured in logarithms with the exception of bond yield, DeJong et al. (1988) report estimates of 61 very close to zero, using their unbiased estimator. The largest value is 0.007, which is not large enough to make an appreciable difference in critical values for the sample sizes in this data set (between 62 and 111). However, for the same data measured in lgyglg the estimates of 61 are considerably higher, ranging from essentially zero to a maximum of 0.058 (for the series nominal GNP), and this would cause a non-trivial difference in the critical values for many of these series. Similarly, when an unrestricted estimator of 61 is used, estimates range from close to zero up to a maximum of 0.076 for logged data. It should be noted that DeJong et al. (1988) use a restricted estimate of 61 (based on the regression with the unit root imposed); this estimate is unbiased under the null hypothesis but has a serious downward bias under the alternative (8 < 1). This is important because it is suspected that if an estimator of 61 could be found that would have less bias when 8 < 1, yet would still be precise enough in the case when 8 = 1 to give a test of the right size, then the interpolation tests using 3, and ?, may show more power than competing tests. If it is possible to find an estimate of 61 that is precise enough to allow an accurate interpolation test, but less biased than the restricted estimate when p < 1, estimated values of 61 large enough to make a difference in 52 empirical work would be found more often. In any case, not many empirical studies have paid attention to the value of 61, and perhaps the strongest message of this chapter is simply that the ;, test should not be performed without checking the value of 61. If 61 is not close to zero, the tables just presented should be used instead of the Dickey- Fuller tables. 3. TABULATION OF THE CRITICAL VALUES OP T(3 - 1) We now turn to the Dickey-Fuller statistic designated 3,, which is just T(§ - 1). This statistic is based on the deviation of 3 from unity, not scaled by any estimated standard error. Dickey and Fuller (1979, 1981) present some limited Monte Carlo evidence that indicates that such a test may be more powerful than a test based on a t-statistic. The usual Fuller tabulations assume that 61 = 0. Here we provide tabulations of critical values for the same values of 61 and T as were considered for the t-statistic in the last section. Specifically, Tables 9 through 16 provide the one percent, 2.5 percent, five percent, 10 percent, 90 percent, 95 percent, 97.5 percent and 99 percent critical values, respectively. The distribution of 3 when 61 is nonzero is much less skewed and less dispersed than the distribution when 61 = 0. Therefore the critical values of T(§ - 1) are generally much smaller in absolute value than the ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 UNHHHHHOOOOO oommhuommawp ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 0.03 0.05 ONE PERCENT 10 -15.66 -15.67 -15.66 -15.66 -15.67 -15.67 -15.67 -15.65 -15.64 -15.64 -15.64 -15.62 -15.62 -15.61 -15.62 -15.55 -15.47 -15.37 -14.14 -8.12 -4.20 -2.86 -2.20 -1.79 -1.51 -1.32 -1.16 -1.04 -0.69 500 -29.45 -28.37 -25.74 -22.84 -18.65 -14.00 -4.03 -1.63 -1.03 -0.75 -0.59 -0.39 -0.29 -0.23 -0.19 -0.12 25 -22.00 -22.00 -21.99 -22.01 -22.01 -22.01 -21.95 “21.90 -21.85 -21.85 -21.89 -21.70 -21.46 -21.06 -20.69 -19.00 -15.54 -9.99 -3.19 -1.37 -0.89 -0.66 -0.52 -0.43 -0.37 -0.32 -0.29 -0.26 1000 -29.43 -23.47 “11.39 -5.30 -3.36 -2.48 -1.08 -0.52 -0.34 -0.25 -0.20 -0.10 -0.07 -0.04 53 TABLE 9 CRITICAL VALUES FOR T(§ - 1) 50 -25.13 -25.16 -25.09 -25.10 -25.11 -25.08 -25.00 -24.82 -24.66 -24.36 -23.93 -21.97 -19.64 -16.84 -13.71 -5.69 -3.32 -2.09 -0.95 -0.46 -0.31 -0.23 -0.18 -0.15 -0.13 -0.12 -0.10 -0.09 2000 -28.92 -5.68 -2.10 -1.31 -0.95 -0.75 -0.36 -0.18 -0.12 -0.09 -0.08 -0.04 -0.03 -0.02 100 -27.07 -27.02 -27.09 -26.90 -26.95 -26.93 -26.78 -25.24 -22.76 -19.43 -15.62 -7.97 -4.77 -3.38 -2.63 -1.41 -0.97 -0.67 -0.33 -0.16 -0.11 -0.08 -0.07 -0.06 -0.05 -0.04 00 150 -28.14 -27.92 -27.64 -27.63 -27.58 -27.38 -26.12 -20.65 -14.07 -8.44 -5.52 -2.95 -2.04 -1.55 -1.25 -0.73 -0.51 -0.35 -0.18 -0.09 -0.06 -0.05 -0.04 -0.03 500 -0.09 -0.06 200 -28.92 -28.28 -28.06 -27.85 -27.51 -27.29 -24.06 -13.52 -6.37 -3.90 -2.85 -1.72 -1.23 -0.97 -0.79 -0.46 -0.33 -0.23 -0.12 -0.06 -0.04 -0.03 1000 -0.03 250 -28.92 -28.60 -28.31 -27.96 -27.21 -26.30 -20.23 -7.42 -3.67 -2.47 '1.87 -1.17 '0.86 -0.68 -0.56 -0.33 -0.23 -0.16 -0.08 2000 -0.01 ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 0.03 0.05 “NHHHHHOOOOOO oommbNOQGQ-NHS ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 0.03 0.05 2.5'PERCENT 10 “14.35 “14.36 “14.35 “14.35 “14.35 “14.35 “14.34 “14.34 “14.34 “14.33 “14.33 “14.30 “14.31 “14.28 “14.28 “14.20 “14.13 “13.90 “12.38 “6.51 “3.53 “2.43 “1.87 “1.53 “1.30 “1.13 “1.00 “0.89 “0.59 500 “24.94 “24.09 “21.79 “18.65 “14.85 “10.92 “3.38 “1.38 “0.87 “0.64 “0.50 “0.33 “0.25 “0.20 “0.16 “0.10 25 “19.44 “19.44 “19.44 “19.44 “19.46 “19.45 “19.47 “19.43 “19.38 “19.34 “19.14 “18.86 “18.49 “18.15 “16.05 “12.78 “7.96 “2.67 “1.17 “0.75 “0.55 “0.44 “0.37 “0.31 “0.27 “0.24 “0.22 1000 “24.88 “19.59 “8.98 “4.37 “2.82 “2.10 “0.93 “0.44 “0.29 “0.22 “0.17 “0.09 “0.06 “0.04 54 TABLE 10 CRITICAL VALUES FOR T(§ - 1) 50 “21.94 “21.89 “21.95 “21.93 “21.94 “21.95 “21.98 “21.68 “21.37 “20.95 “20.50 “18.64 “16.30 “13.57 “10.86 “4.61 “2.79 “1.77 “0.81 “0.39 “0.26 “0.19 “0.16 “0.13 “0.11 “0.10 “0.09 “0.08 2000 “24.75 “4.80 “1.80 “1.12 “0.81 “0.64 “0.31 “0.15 “0.10 “0.08 “0.06 “0.03 -0002 “0.02 100 “23.37 “23.31 “23.29 “23.30 “23.32 “23.32 “22.96 “21.41 “19.01 “15.85 “12.42 “6.38 “3.94 “2.83 “2.22 “1.20 “0.83 “0.56 “0.28 “0.14 “0.09 “0.07 “0.06 “0.05 “0.04 “0.04 00 150 -23.93 -23.93 -23.92 -23.78 -23.57 -23.54 -22.30 -17.20 -11.02 -6.67 -4.52 -2.48 -1.72 -1.32 -1.07 -0.62 -0.43 -0.30 -0.15 -0.08 -0.05 -0.04 -0.03 -0.03 500 “0.07 “0.05 200 “24.29 “24.32 “24.05 “23.73 “23.41 “23.03 “20.10 “10.55 “5.11 “3.25 “2.39 “1.45 “1.05 “0.82 “0.67 “0.39 “0.28 “0.20 “0.10 “0.05 “0.04 “0.03 1000 “0.03 250 “24.42 “24.33 “24.17 “23.69 “23.14 “22.38 “16.42 “5.88 “3.08 “2.09 “1.60 “1.00 “0.73 “0.57 “0.47 “0.28 “0.20 “0.14 “0.07 2000 “0.01 ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 0.03 0.05 UNHHHHHOOOOOO oommbwommpwwg ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 0.03 0.05 55 TABLE 11 FIVE PERCENT CRITICAL VALUES FOR T(6 “ 1) 10 “13.21 “13.21 “13.21 “13.21 “13.21 “13.21 “13.21 “13.20 “13.20 “13.19 “13.18 “13.17 “13.14 “13.13 “13.13 “13.05 “12.91 “12.67 “10.99 “5.36 “2.99 “2.09 -1061 “1.31 “1.11 “0.96 “0.85 “0.76 “0.50 500 “21.56 “20.80 “18.50 “15.61 “11.98 “8.81 “2.85 “1.19 “0.75 “0.55 “0.43 “0.28 “0.21 “0.17 “0.14 “0.09 25 “17.40 “17.40 “17.40 “17.40 “17.41 “17.41 “17.42 “17.38 “17.35 “17.29 “17.24 “17.04 “16.79 “16.45 “16.05 “13.74 “10.65 “6.44 “2.29 “1.01 “0.65 “0.48 “0.38 “0.31 “0.27 “0.23 “0.21 “0.19 1000 “21.42 “16.44 “7.20 “3.65 “2.42 “1.81 “0.80 “0.38 “0.25 “0.19 “0.15 “0.08 “0.05 “0.03 50 “19.44 “19.42 “19.41 “19.41 “19.38 “19.36 “19.28 “19.06 “18.78 “18.38 “17.86 “16.12 “13.68 “11.15 “8.74 “3.82 “2.37 “1.51 “0.69 “0.33 “0.22 “0.16 “0.13 “0.11 “0.09 “0.08 “0.07 “0.07 2000 “21.52 “3.96 “1.54 “0.96 “0.69 “0.55 “0.26 “0.13 “0.09 “0.07 “0.05 “0.03 “0.02 “0.01 100 “20.48 “20.44 “20.44 “20.42 “20.39 “20.37 “19.94 “18.58 “16.16 “13.18 “10.14 “5.23 “3.31 “2.41 “1.90 “1.03 “0.71 “0.48 “0.23 “0.12 “0.08 “0.06 “0.05 “0.04 “0.04 “0.03 CD 00 150 “20.89 “20.87 “20.85 “20.75 “20.67 “20.45 “19.19 “14.36 “8.90 “5.44 “3.76 “2.13 “1.48 “1.14 “0.92 “0.53 “0.37 “0.26 “0.13 “0.07 “0.05 “0.04 “0.03 “0.03 500 “0.06 “0.04 200 “21.12 “20.91 “20.84 “20.65 “20.31 “19.97 “16.93 “8.45 “4.26 “2.75 “2.05 “1.24 “0.89 “0.70 “0.57 “0.33 “0.24 “0.17 “0.08 “0.04 “0.03 “0.02 1000 “0.03 250 “21.23 “21.13 “20.87 “20.47 “19.91 “19.19 “13.49 “4.84 “2.62 “1.79 “1.36 “0.85 “0.62 “0.49 “0.40 “0.24 “0.17 “0.12 “0.06 2000 “0.01 ‘1 0.0 0.0002 0.0004 0.0006 0.0008‘ 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 0.03 0.05 MNHHHHHOOOOOO OOflGhNOGG-fiNO—‘g O 00! OH 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 0.03 0.05 A 10 PERCENT CRITICAL VALUES FOR T(B “ 1) 10 “11.86 “11.86 “11.86 “11.86 “11.86 “11.86 “11.86 “11.85 “11.85 “11.86 “11.85 “11.83 “11.83 “11.80 “11.78 “11.66 “11.56 “11.23 “9.34 “4.27 “2.43 “1.71 “1.32 “1.07 “0.90 “0.78 “0.69 “0.61 “0.40 500 “18.13 “17.42 “15.35 “12.45 “9.36 “6.77 “2.33 “0.96 “0.60 “0.44 “0.34 “0.23 “0.17 “0.13 “0.11 “0.07 25 “15.10 “15.11 “15.11 “15.10 “15.09 “15.09 “15.07 “15.06 “15.02 “14.98 “14.93 “14.73 “14.43 “14.08 “13.69 “11.38 “8.41 “5.04 “1.86 “0.81 “0.52 “0.38 “0.30 “0.25 “0.21 “0.18 “0.16 “0.15 1000 “17.99 “13.23 “5.62 “2.94 “1.96 “1.46 “0.64 “0.30 “0.20 “0.15 “0.12 “0.06 “0.04 “0.03 56 TABLE 12 50 “16.59 “16.59 “16.56 “16.54 “16.55 “16.54 “16.51 “16.31 “15.99 “15.58 “15.03 “13.28 “11.09 “8.77 “6.80 “3.08 “1.93 “1.23 “0.55 “0.26 “0.17 “0.13 “0.10 “0.09 “0.08 “0.07 “0.06 “0.05 2000 “18.09 “3.19 “1.25 “0.76 “0.55 “0.43 “0.21 “0.10 “0.07 “0.05 “0.04 “0.02 “0.02 “0.01 100 “17.36 “17.34 “17.34 “17.27 “17.27 “17.20 “16.83 “15.46 “13.13 “10.36 “7.86 “4.12 “2.68 “1.97 “1.55 “0.84 “0.57 “0.39 “0.19 “0.09 “0.06 “0.05 “0.04 “0.03 “0.03 “0.03 0:: o. #40“ q 150 “17.66 “17.63 “17.60 “17.57 “17.46 “17.31 “15.92 “11.51 “6.83 “4.30 “3.04 “1.74 “1.21 “0.92 “0.75 “0.42 “0.29 “0.20 “0.10 “0.05 “0.04 “0.03 “0.02 “0.02 500 “0.05 “0.04 200 “17.81 “17.75 “17.60 “17.38 “17.10 “16.74 “13.71 “6.54 “3.40 “2.24 “1.66 “1.01 “0.72 “0.56 “0.46 “0.26 “0.19 “0.13 “0.07 “0.04 “0.03 “0.02 1000 “0.02 250 “17.86 “17.80 “17.57 “17.18 “16.62 “15.84 “10.69 “3.83 “2.12 “1.45 “1.10 “0.68 “0.49 “0.39 “0.32 “0.19 “0.13 “0.09 “0.05 2000 “0.01 ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 0.03 0.05 UNPHHHHOOOOOO OOQGhNOQG-fiNI-‘S ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 0.03 0.05 V A 90 PERCENT CRITICAL VALUES FOR T(fi “ 1) 10 “3.09 “3.09 “3.09 “3.09 “3.09 “3.09 “3.09 “3.09 “3.09 “3.08 “3.09 “3.08 “3.04 “3.01 “2.97 “2.73 “2.44 “1.92 “0.40 0.94 1.10 0.99 0.87 0.76 0.68 0.61 0.55 0.50 0.34 500 “3.76 “3.19 “2.01 “1.01 “0.33 0.13 0.75 0.58 0.43 0.34 0.27 0.19 0.14 0.11 0.09 0.06 25 “3.54 “3.54 “3.54 “3.54 “3.54 “3.53 “3.52 “3.50 “3.46 “3.42 “3.35 “3.07 “2.76 “2.40 “2.06 “0.88 “0.10 0.54 0.84 0.56 0.40 0.31 0.25 0.21 0.18 0.16 0.14 0.13 1000 “3.78 “1.19 0.33 0.72 0.75 0.70 0.45 0.25 0.17 0.13 0.10 0.05 0.03 0.02 57 TABLE 13 50 “3.65 “3.65 “3.64 “3.63 “3.63 “3.62 “3.56 “3.35 “3.06 “2.70 “2.32 “1.41 “0.70 “0.20 0.17 0.76 0.80 0.69 0.41 0.22 0.15 0.11 0.09 0.07 0.06 0.05 0.05 0.04 2000 “3.78 0.68 0.66 0.51 0.41 0.34 0.18 0.09 0.05 0.04 0.03 0.01 0.01 0.00 100 “3.73 “3.73 “3.73 “3.69 “3.67 “3.62 “3.27 “2.20 “1.25 “0.53 “0.04 0.58 0.75 0.76 0.73 0.54 0.41 0.30 0.16 0.08 0.05 0.04 0.03 0.02 0.02 0.02 95”.. F‘O”’ q 150 “3.71 “3.70 “3.65 “3.56 “3.44 “3.30 “2.36 “0.76 0.11 0.53 0.71 0.74 0.65 0.57 0.50 0.32 0.24 0.17 0.09 0.04 0.03 0.02 0.01 0.01 500 0.04 0.03 200 250 “3.76 “3.74 “3.71 “3.65 “3.60 “3.42 “3.41 “3.07 “3.16 “2.67 “2.88 “2.24 “1.42 “0.60 0.15 0.61 0.65 0.76 0.74 0.70 0.72 0.63 0.59 0.47 0.48 0.37 0.40 0.31 0.35 0.26 0.22 0.16 0.16 0.11 0.11 0.08 0.05 0.04 0.02 0.01 0.01 1000 2000 0.01 0.00 ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 0.03 0.05 WNHHHHHOOOOOO OOQGDNOQO‘éNl-‘S o oo. OH 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 0.03 0.05 95 PERCENT CRITICAL VALUES FOR T(fi “ 1) 10 “1.96 “1.96 “1.96 “1.96 “1.96 “1.96 “1.96 “1.96 “1.95 “1.94 “1.94 “1.91 “1.88 “1.85 “1.80 “1.55 “1.26 “0.76 0.63 1.73 1.70 1.45 1.23 1.06 0.93 0.82 0.74 0.67 0.46 500 “2.65 “2.10 “1.03 “0.19 0.41 0.79 1.18 0.81 0.58 0.45 0.36 0.25 0.19 0.15 0.12 0.07 25 “2.43 “2.43 “2.43 “2.43 “2.43 “2.42 “2.43 “2.41 “2.37 “2.30 “2.23 “1.96 “1.67 “1.34 “1.02 0.02 0.68 1.20 1.25 0.77 0.54 0.41 0.33 0.28 0.24 0.21 0.19 0.17 1000 “2.70 “0.34 0.94 1.20 1.14 1.02 0.61 0.32 0.22 0.16 0.13 0.06 0.04 0.02 58 TABLE 14 50 “2.57 “2.56 “2.56 “2.56 “2.55 “2.53 “2.46 “2.24 “1.95 “1.60 “1.26 “0.51 0.11 0.56 0.87 1.28 1.19 0.97 0.55 0.29 0.20 0.15 0.12 0.10 0.08 0.07 0.06 0.06 2000 “2.68 1.18 0.93 0.69 0.54 0.44 0.23 0.12 0.08 0.06 0.04 0.02 0.01 0.00 100 “2.63 “2.63 “2.62 “2.58 “2.56 “2.51 “2.14 “1.19 “0.37 0.23 0.64 1.15 1.22 1.15 1.05 0.73 0.55 0.40 0.21 0.10 0.07 0.05 0.04 0.03 0.03 0.02 00 150 “2.66 “2.64 “2.58 “2.49 “2.36 “2.23 “1.36 0.02 0.77 1.10 1.20 1.09 0.92 0.78 0.67 0.43 0.32 0.22 0.11 0.05 0.03 0.02 0.02 0.01 500 0.05 0.03 200 “2.65 “2.61 “2.48 “2.29 “2.06 “1.79 “0.56 0.78 1.16 1.17 1.07 0.83 0.66 0.55 0.46 0.29 0.21 0.15 0.07 0.03 0.02 0.01 1000 0.01 250 “2.65 “2.56 “2.34 “2.00 “1.61 “1.24 0.18 1.15 1.17 1.02 0.88 0.64 0.50 0.41 0.34 0.21 0.15 0.10 0.05 2000 0.00 ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 0.03 0.05 O \l UNPHHHHOOOOOO OOQGbNOGOi-FNH ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 0.03 0.05 A 97.5 PERCENT CRITICAL VALUES FOR T(B “ 1) 10 “0.91 “0.91 “0.91 “0.90 “0.90 “0.90 “0.90 “0.90 “0.88 “0.88 “0.87 “0.83 “0.80 “0.76 “0.71 “0.49 “0.20 0.25 1.56 2.54 2.29 1.88 1.56 1.33 1.15 1.01 0.91 0.82 0.55 500 “1.81 “1.24 “0.28 0.46 0.99 1.35 1.55 0.99 0.70 0.54 0.43 0.29 0.22 0.18 0.15 0.09 25 “1.52 “1.52 “1.52 “1.52 “1.51 “1.52 “1.52 “1.49 “1.43 “1.39 “1.31 “1.04 “0.76 “0.49 “0.21 0.76 1.39 1.84 1.63 0.95 0.66 0.50 0.40 0.34 0.29 0.25 0.23 0.20 1000 “1.84 0.36 1.47 1.61 1.45 1.27 0.73 0.39 0.26 0.20 0.16 0.08 0.05 0.03 59 TABLE 15 50 “1.68 “1.69 “1.68 “1.69 “1.68 “1.67 “1.59 “1.39 “1.08 “0.77 “0.44 ,0.27 0.79 1.19 1.46 1.73 1.53 1.21 0.67 0.35 0.23 0.18 0.14 0.12 0.10 0.09 0.08 0.07 2000 “1.78 1.62 1.17 0.85 0.66 0.54 0.28 0.14 0.09 0.07 0.05 0.02 0.01 0.01 100 “1.73 “1.72 “1.70 “1.67 “1.64 “1.58 “1.24 “0.39 0.32 0.85 1.25 1.65 1.62 1.48 1.33 0.90 0.67 0.48 0.25 0.12 0.08 0.06 0.05 0.04 0.03 0.03 00 150 “1.78 “1.76 “1.71 “1.63 “1.52 “1.37 “0.55 0.77 1.35 1.62 1.64 1.40 1.16 0.97 0.83 0.53 0.38 0.27 0.14 0.07 0.04 0.03 0.02 0.02 500 0.06 0.04 200 “1.78 “1.72 “1.61 “1.43 “1.21 “0.95 0.18 1.36 1.62 1.53 1.38 1.03 0.82 0.67 0.56 0.35 0.25 0.18 0.09 0.04 0.03 0.02 1000 0.02 250 “1.77 “1.72 “1.47 “1.14 “0.79 “0.43 0.81 1.62 1.51 1.28 1.09 0.78 0.60 0.49 0.41 0.25 0.18 0.13 0.06 2000 0.00 ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 0.03 0.05 wNi-‘HHHHOOOOOO OOQQbNOGGfiNI-‘S ‘1 0.0 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.015 0.02 0.025 0.03 0.05 60 TABLE 16 99 PERCENT CRITICAL VALUES FOR T(§ “ 1) 10 0.46 0.46 0.46 0.45 0.45 0.45 0.45 0.46 0.46 0.46 0.46 0.50 0.51 0.53 0.56 0.75 1.03 1.50 2.86 3.63 3.06 2.45 1.99 1.68 1.44 1.26 1.12 1.01 0.67 500 “0.86 “0.29 0.59 1.26 1.68 1.99 2.01 1.22 0.85 0.65 0.52 0.35 0.27 0.21 0.18 0.11 25 “0.53 “0.53 “0.52 “0.52 “0.52 “0.51 “0.50 “0.45 “0.39 “0.33 “0.26 “0.02 0.24 0.50 0.78 1.71 2.27 2.61 2.09 1.17 0.80 0.60 0.49 0.41 0.35 0.30 0.27 0.24 1000 “0.89 1.12 2.08 2.09 1.84 1.58 0.89 0.46 0.31 0.23 0.19 0.09 50 “0.75 “0.72 “0.72 “0.72 “0.70 “0.69 “0.61 “0.40 “0.11 0.20 0.50 1.14 1.62 1.97 2.19 2.32 1.96 1.51 0.81 0.42 0.28 0.21 0.17 0.14 0.12 0.10 0.09 0.08 2000 “0.81 2.12 1.44 1.03 0.79 0.65 0.33 0.17 0.11 0.08 0.06 0.03 0.02 0.01 100 “0.75 “0.74 “0.72 “0.69 “0.64 “0.58 “0.25 0.49 1.15 1.63 1.93 2.22 2.13 1.91 1.69 1.11 0.81 0.58 0.30 0.15 0.10 0.07 0.06 0.05 0.04 0.03 . . m tho“ 0 CO 150 “0.82 “0.79 “0.72 “0.64 “0.49 “0.37 0.36 1.46 2.04 2.25 2.20 1.80 1.45 1.20 1.02 0.64 0.46 0.32 0.16 0.08 0.05 0.04 0.03 0.02 500 0.07 0.05 200 “0.80 “0.76 “0.60 “0.41 “0.20 0.03 1.04 2.01 2.19 2.01 1.75 1.27 0.99 0.81 0.68 0.42 0.30 0.21 0.10 0.05 0.03 0.02 1000 0.02 250 “0.86 “0.83 “0.56 “0.25 0.07 0.39 1.54 2.17 1.91 1.59 1.33 0.94 0.72 0.58 0.49 0.30 0.21 0.15 0.07 2000 0.00 61 Dickey-Fuller tabulations indicate. The effect of nonzero 61 on the critical values is much more dramatic than it was in the last section for the t-statistic. For example, for T = 100 and 61 = 0.05, the five-percent critical value is about -1.0 rather than about -20.0. This is a potentially relevant choice of T and 61 for data in levels, but not in logs. With data in logs 61 = 0.005 is perhaps more reasonable, and then considerable changes in critical values (compared to those for 61 = 0) require T of about 150 or more. For example, with T = 200 the five-percent critical value is about -21.0 for 61 = 0, and about -6.5 for 61 = 0.005. The practical implication is that it is a potentially serious mistake to use the 3, test without checking the value of 61. The asymptotic results in Tables 9 through 16 are somewhat unrevealing since the critical values all converge to zero either as 61 increases with T fixed, or as T increases with 61 fixed and not equal to zero. This reflects the fact that the normalization of (3 - 1) by T is appropriate only when 61 = 0. When 61 is not equal to zero, Té/2(§ - 1) is asymptotically normal, and this explains the rather rapid decrease in the magnitude of the critical values of T(§ - 1) as T increases. This decrease in the magnitude of the critical values as T increases is generally not monotonic, however. The reason is that when 61 = 0, the magnitude of the critical values typically increases with T for small values of T, and then decreases to zero as T 62 increases further. On the other hand, the decrease in the magnitude of the critical values as 61 increases (with T fixed) is generally monotonic. 4. CONCLUSION This chapter provides tabulations of critical values for the Dickey-Fuller a, and ;, statistics, as a function of standardized coefficient of the time-trend variable (61) and sample size (T). The original Dickey-Fuller tabulations assume 61 = 0, and they result in tests that are conservative when 61 is not equal to zero. This is especially so for the a, test. These tables are intended to be used in an interpolation test, in which one first estimates 61 and then uses the appropriate critical value given 61 and T. There are two main competitors to the interpolation test: first, the tests using the original Dickey-Fuller tabulations; and second, the tests of Ouliaris, Park and Phillips (1988), which are based on a regression that includes a polynomial time trend, and whose distributions therefore depend on the coefficient of the polynomial time trend (assumed to be zero) but not on 61. The motivation for the interpolation test is that it is hoped that it will be more powerful than these competing tests. It may be more powerful than the original Dickey-Fuller tests because they use very conservative critical values, when 61 is nonzero, and it may be more powerful than the tests of Ouliaris, Park and 63 Phillips (1988) because those tests are based on a regression that includes a variable that is irrelevant (has a coefficient of zero). In the simpler model that does not include a trend, Guilkey and Schmidt (1989) provide evidence that the interpolation test based on the estimated standardized drift is reasonably accurate under the null hypothesis and has good power properties. It will require further work to see whether this optimistic finding will carry over to the present model. DeJong et al. (1988) report that the interpolation test using the t-type statistic is not very different from the Dickey-Fuller 7, test, except when 61 or T are quite large or when B is very close to one. The similarity of the interpolation test to the 7, test is due to the fact (mentioned in section 2 above) that the estimate of 61 that they use is unbiased under the null hypothesis but has a serious downward bias when B < 1. If B is less than one by enough for the test to have any power, the estimated 61 is very close to zero unless the true 61 is very large. Since the interpolation test is based on the estimated value of 61, it will be very close to the ?, test (which just assumes 61 = 0). The interpolation test based on the latter statistic would display more power if it were possible to find an estimate of 61 that is less biased when B < 1 (but still precise enough when B = 1 to yield a test of the right size). As of yet, however, such an estimate has not been found. 64 DeJong et al. do not make a comparison to the tests of Ouliaris, Park and Phillips, and they do not consider the interpolation test using the 0, statistic. The potential gain in power from the interpolation test may be larger for the test using the 3, statistic than for the test using the 7, statistic, because the distribution is more sensitive to the value of 61 in the former case than in the latter. Of course, the higher sensitivity of the distribution of the 3, statistic to the value of 61 also raises more serious worries about the accuracy of the test under the null hypothesis, as error in the estimation of 61 potentially will be more troublesome. The following chapter will make comparisons between the interpolation tests (using both statistics) and the tests proposed by Ouliaris, Park and Phillips. It will also present results for the tests based on various estimates of 61, as well as results based on a new set of statistics recently proposed by Schmidt and Phillips (1989). CHAPTER 3 TESTING THE NELSON-PLOSSER DATA SET This chapter presents tests of the unit-root hypothesis applied to the data set collected by Nelson and Plosser (1982). Tests considered include the interpolation test, the tests of Ouliaris, Park and Phillips (1988) and the tests of Schmidt and Phillips (1989). 1. INTRODUCTION This chapter takes up applications of some of the tests that have been proposed for determining whether or not a time series can be characterized as having a unit root present. The primary focus of this chapter is on two sets of proposed tests: the Kp(a) and Sp(3) statistics proposed by Ouliaris, Park and Phillips (1988), and the p and 7 statistics proposed by Schmidt and Phillips (1989). Both of these sets of statistics are meant to deal with a weakness of the original Dickey-Fuller statistics, that weakness being difficulty in dealing with the presence of a time trend. The primary difference between these statistics is that the Ouliaris, Park and Phillips statistics are designed to test for the presence of a unit root with a polynomial time-trend present, while the Schmidt and Phillips 65 66 statistics, as originally proposed by Schmidt (1989), deal with a linear time trend. The importance of including a time trend is noted by West (1987), who points out that applying the Dickey-Fuller ordinary least squares tests gitnggt including a time trend as a regressor results in tests that are inconsistent against the alternative that the series is in fact stationary about a time trend. He also cites some evidence indicating that when the time trend is omitted, then considerably less evidence ggginst the unit-root null hypothesis is found. The evidence is from a 1986 paper by Kleidon. The data used in the empirical applications that follow is that collected by Nelson and Plosser (1982). The following is a brief review of the data set used: real gross national product (GNP), nominal GNP and real per-capita GNP, all from 1909 to 1970; industrial production and consumer prices, both from 1860 to 1970; employment and the unemployment rate, both from 1890 to 1970; the GNP deflator and the narrow measure of the money stock, both from 1889 to 1970: wages, real wages and bond yields, all from 1900 to 1970: velocity of money, from 1869 to 1970, and common stock prices, from 1871 to 1970. All the data are annual and generally represent averages for each year. To briefly summarize, Nelson and Plosser, working with these data in natural logs (except for the data on bond yields), found evidence that these series are difference-stationary, while 67 also recognizing that their tests do not have much power against trend-stationary alternatives with autoregressive root close to unity (p. 152).1 2. TESTING FOR UNIT ROOTS WITH POLYNOMIAL TIME TRENDS One issue that should be addressed in testing macroeconomic time series for the presence of unit roots is the correct specification of the time trend involved, because the form of the time trend can have important effects on the outcome of the tests applied to the time series. For example, a recent paper by Perron (1987, 1989) sought to specify more carefully the time trend by allowing for "breaks," one-time changes in the trend structures, in both the Nelson-Plosser (1982) data set and also a quarterly real GNP series. For the Nelson-Plosser data set, which ends in 1970, Perron supposed that the New York Stock Exchange crash of 1929 constituted a break, while for the quarterly real GNP series, which is post-World War II, the 1973 oil-price shock formed a break. He considered three alternative models: a "crash model," which allowed a one- time change in the intercept of the trend; a "changing growth model," in which a change in the slope of the trend function without a sudden change in level is allowed; and a "trend model" that allowed a sudden change in level, followed by a different growth path. Perron found that for 11 of the 14 Nelson-Plosser series he could reject the unit- root hypothesis, and that he could also reject that 68 hypothesis for the quarterly real GNP series. Another approach is to ask whether the trend is best modelled as a polynomial in terms of the time trend. In a recent paper, Park and Choi (1988) proposed a test not only of the unit-root hypothesis, but also of the polynomial time trend. Their test is a two-step procedure, with the order of the polynomial time trend in the data being estimated first, then performing the test for the presence of a unit root. The reason for following this procedure, according to Park and Choi, is that the behavior of the standard statistics used to test for the polynomial order (such as the F statistic) depends on whether the errors in the series are stationary or integrated. ‘So they argue that testing for the order of the time trend prior to testing the unit-root hypothesis renders the standard tests invalid. The Park and Choi test for the polynomial order is based on the differences regression q-l Ayc = 2 cat" + e; (1) 0 with residuals given as either e: = ut or e: = vt - vt,1. The null hypothesis is Ho 8 up = ... = aq;1:= 0. The residuals vary based on the specification of the time series. The time series itself is modelled as p )r = a t*+-y’ ‘ 2.: " “ (2) 69 with either yt* = y,_r* + ut or yt* = vt, with the former indicating that a deterministic polynomial time trend is present, along with a unit root and an integrated error, while the latter specifies a general stationary process about a deterministic polynomial time trend. Both {ut) and {vt} are random and are assumed to satisfy an invariance principle.2 Then the statistic they use to test for the polynomial order is GA(p.q) = (53/ 02)F<&p,q-1) (3) where F(apflr1) is the Wald statistic for testing the null hypothesis up = ... = aq-1== 0 in the above regression, 0* 2 is the usual error variance estimate and 0 is an estimated asymptotic variance. This asymptotic variance is given by n I n ('02 = %21:e§+ (%)z:1c,(k) 2 ece,_k. (4) t-k+1 Here 0 = lag-truncation number and c£(k) = lag weights. The residuals et that are used in the computation of these variances come from 7 Y1: = BY:-1 + Z aktk + at (5) 0 and are the least-squares residuals. The authors note that the presence or absence of a unit root does not affect the validity of the test, and propose testing the polynomial 70 order in a stepwise fashion, by setting q = p + 1 and then testing the significance of each additional time-polynomial term. If the test statistic is large, then the GA test rejects the null hypothesis of some time-polynomial terms being irrelevant. The second part of their procedure involves determining whether or not the time series has a unit root, given that its polynomial-trend order has been determined. Park and Choi propose two statistics for the unit-root test, which respectively take either the unit-root hypothesis as the null hypothesis or take stationarity as the null. These statistics are: A2 ~2 ~2 J(P.Q) = (0 - 0 )/ 0 (6) and G(p,q) = [r1032 - 32n/ «32. (7) These are essentially transformations of the standard Wald or F-statistics for the null hypothesis “k = 0 for k = p + 1, ... , q in the regression q = a t*+-é'. yt: ; k C (8) To derive these tests, they consider the models P n yt = Zaktk + 6,, 02 = (1/n)£é§ (9) 0 1 71 and q 11 Y: = Eaktk + écl 62 = (21./11);: §:- (10) 0 1 (Note that this latter regression is the same as (8) above.) The at and 6t only designate that the errors come from the regression with either p or q total coefficients. They assume that p < q and that the regression coefficients ak, k = p+1, ... , q, actually have value zero. As noted, these tests differ in what they take the null hypothesis to be: for the J(p,q) test the null hypothesis is that the series is integrated, and the alternative hypothesis is that of stationarity, while for the G(p,q) test the null hypothesis is that the errors are stationary and the unit-root hypothesis provides the alternative. For the first statistic, the null is rejected if J(p,q) is "too small," while for G(p,q) the null is rejected if the statistic is "too large." However, both of these cases imply that a large value of either J(p,q) or G(p,q) gives favorable evidence for the unit-root hypothesis. Using these tests, Park and Choi analyze the Nelson- Plosser data set, with the data used both in level form and in logarithmic form. With the data in levels, they found that only the unemployment rate and bond-yield series could be modelled without a deterministic polynomial time trend. The series for per-capita real GNP, employment and real 72 wages all gave evidence of a linear time trend, while the series for real GNP, the GNP deflator, the consumer price index (CPI), velocity and stock prices all supported second- order polynomials. Third-order polynomial time trends were indicated for nominal GNP, the industrial production index, wages and the money stock. The situation was considerably different for the data set with logarithms taken. In this case real GNP, nominal GNP, industrial production, employment, the GNP deflator, wages, real wages and the money stock all indicated the presence of at most a linear trend, while the remaining six series indicated that no deterministic trend was present. Whether in level form or in logarithmic form, only the unemployment rate series accepted the hypothesis that the series was stationary. For the series in level form, using the J test, the null of integration could not be rejected at the five-percent level for real GNP and employment; it could not be rejected at the 10-percent level for the CPI series; and the null of integration could not be rejected at the 20-percent significance level for nominal GNP, per-capita real GNP, industrial production, the GNP deflator, wages, real wages, the money stock, velocity, bond yield and stock prices. For the series in logged form, only the money stock series could not reject the null of integration at the five-percent level: real GNP, industrial production, employment, CPI and wages all failed to reject integration at the 10-percent significance level, while 73 nominal GNP, per-capita real GNP, the GNP deflator, real wages, velocity, bond yield and stock prices all failed to reject the null at the 20-percent level. There were some conflicts between tests based on the "J statistic" and tests based on the "G statistic." Under the "G test," real GNP and the unemployment rate in levels were both unable to reject the null hypothesis that the errors were stationary, while the logged versions of the unemployment rate and the money stock were likewise unable to reject the null of stationarity. However, Park and Choi conclude by saying that evidence for the presence of unit roots appears "ubiquitous" in the time series data, even with the higher-order polynomial specifications. Park and Choi claim that their findings "basically confirm" the Nelson-Plosser (1982) and Perron (1987, 1989) papers, but this is only true of the use of at most a linear trend when using logged versions of these macroeconomic data sets. As noted above, Perron's paper states "that the unit root hypothesis is not a feature of most macroeconomic variables." Also, it should be noted that Perron did not attempt in his paper to model time trends of greater than linear order. A different approach is used here, one based on an earlier paper by Ouliaris, Park and Phillips (1988). In the present chapter, successive powers of a polynomial time trend were added to a simple autoregression of each of the series in the Nelson-Plosser data set, in order to test 74 whether this had any affect on the conclusion that a unit root was present in the series. Polynomials up to the fourth degree were tested, with little effect on the conclusion that generally the series could be characterized as having a unit root present, provided that at least a second-order polynomial was used to model the series. The tests were based in part on two statistics presented in the paper by Ouliaris, Park and Phillips: the Kb(a) statistic and the Sp(§) statistic, where "p" indicates the order of the time polynomial involved. Later in the chapter we will alter their notation slightly so that the term in parenthesis will be B instead of a. They had used a for the coefficient on the lagged variable: we will use B instead. The authors consider the following least-squares regression: p y ==a + a tk+ fly_ +-u c 0 g k c 1 t (11) with the null hypotheses being: i.) B = 1, and ii.) B = 1 and up = 0. We will also at this point define standardized coefficients 5k = ak/a, k - 0, ... , p. To test for i.) in particular, they considered the following test statistics: 11,03) = ncfi - 1) (12) and tp(B) = usual t statistic for B = 1 (13) 75 Both of these tests are based on the least-squares regression given above. Note that these statistics are generalizations of those proposed by Dickey and Fuller. In the case of a linear trend, when p = l, h1(B) = 3, and t1(B) = 7,, in terms of the notation that Dickey and Fuller introduced for these statistics.3 The authors derive asymptotic distributions of these statistics, but because these depend on nuisance parameters they also define transformations of the statistics that eliminate these nuisance parameters. Their transformations are as follows: Kp(§) = n(§ - 1) - [n2(82 - 32)1/2so2 (14) and sp(fi) = (3/$)t<%> - [n(&2 - 32)J/28so. (15) In these expressions s02 is the residual sum of squares from the regression of y,,1 on 1, t, ... , tp; 32 is any consistent estimator of the long-run variance 02. Note that if the errors are assumed to be identically and independently distributed, the correction terms will vanish, 2 2 2 since 0 = a . This follows from the definition of 0 : (16) (02 lim 3 ELSE) mm .n 2n1;(0) 02+-21 Here 02 = E(u12), fu(0) is the spectral density of the error 76 at frequency zero, Sn is the partial sum process defined by [m s;-=ZPIA: and 1 is defined as l . :2 (uluj) Under the i.i.d. assumption, A = 0. Note also that without these correction terms, these statistics are the same as the T(B - 1) and t statistics discussed in the previous chapter. TABLE 17 A CRITICAL VALUES FOR THE xpu’é) AND Sp(B) STATISTICS Significance A A A A A A 1€V91 32(3) 33(3) 54(5) 32(3) 33(5) 34(5) 20.0 “19.86 “24.73 “29.50 “3.24 “3.59 “3.92 10.0 “23.89 “29.28 “34.53 “3.56 “3.92 “4.25 5.0 “27.48 “33.45 “38.72 “3.83 “4.21 “4.51 1.0 “36.05 “41.65 “48.29 “4.38 “4.74 “5.06 Source: Ouliaris, Park and Phillips 1988, pp. 29-30. Table 17 provides some selected critical values for these statistics from Appendix 2 of the paper by Ouliaris, Park and Phillips (1988). These were computed by them using 500 observations and 25,000 replications. Their critical values have been rounded to two decimal places. In this chapter, the Nelson-Plosser (1982) data set was used for all estimation, with each data series in logarithmic form (except for the bond yield series). The 77 series were estimated in two ways: performing ordinary least squares on the data in logarithms and also using an ”adjusted form" for the regression. The original model used was: p .y ==a +- a t*+-Byx_ +11, 8 0 g k Cl C (17) for t = 1, 2, ... , T and with ut assumed to be i.i.d. N(0, 02). This model was estimated for p = 1 to p = 4, and this model provided the statistics used for the tests reported here. An "adjusted" form was also estimated, in which yo was subtracted from the time series, then OLS was performed and the resulting parameter estimates were divided by s, the estimate of a. While this method is preferred for theoretical derivations and for simulation work (see, for example, DeJong et al. (1988)), it makes no difference here because of the invariance of the estimate of B to the transformation. The t-statistic is also invariant to this transformation (DeJong et al. 1988, p. 8). This method also gives parameter estimates that are identical to the standardized parameters listed in the second section of Chapter 2. In their paper, DeJong et al. (1988) propose an estimator of the coefficient on the time trend that is unbiased WWW trtg. This unbiased estimator comes from taking the ordinary least squares estimates of the coefficient on the time trend 78 and dividing it by the standard deviation of equation (17), i.e., “1 in the case when p = 1, assuming B = 1. Then 61* comes from taking the above value and dividing it by a known constant, which is in part the ratio of two gamma functions. Explicitly, unbiased 61 is given by 61* = (51/5)/(v/2)1/2(r</2)/r(v/2)1. (18) where 61 = the OLS estimate of al, 6 = the OLS estimate of a, v = T-2 and r = the gamma function. The estimate of a1 comes from doing OLS in the regression Ayt = yt - y,_1== aoi+ alt + ut. In the remainder of this section we will examine the performance of the Ouliaris, Park and Phillips (1988) statistics when used with the Nelson-Plosser (1982) data set. Polynomial time trends of up to the fourth order will be included in the regressions. For the linear case we will examine the conclusions reached when using the Ouliaris, Park and Phillips tests (or, what amounts to the same thing, the "2 tests" of Phillips and Perron (1988)), the interpolation tests and the 3, and ;, tests with the original Dickey-Fuller critical values. As was seen in Chapter 2, the presence of a time trend must be taken into account in the data-generating process used to create the critical values for the 3, and ;, statistics. These critical values depend on both 61 and sample size T. So in order to use the critical values offered in Chapter 2, it is necessary to interpolate in 79 those tables for the standardized coefficient of the time trend and for sample size. Such tests have been termed "interpolation tests" (DeJong et al. 1988, Guilkey and Schmidt 1989). The null hypothesis for these tests is that B = 1, or that a unit root is present. A value of the statistic exceeding the critical value means rejection of the maintained hypothesis. Following Park and Choi (1988), the outcomes of these tests are compared using a variety of critical values, one percent, 2.5 percent, five percent and 10 percent. In the linear case a number of comparisons will be made. Because of the importance of the estimate of 61 to the interpolation test, we will consider the performance of these tests using an unrestricted estimate of 61, the unbiased 61 of DeJong et al. (1988) and an estimate of 61 subject to the restriction B = 1. These are in addition to considering the Ouliaris, Park and Phillips (1988) statistics and 0, and ;, using the original Dickey-Fuller (1976) critical values. Tables 18, 20, 21 and 22 contain the results of applying the estimation methods of DeJong et al. (1988) to the Nelson-Plosser (1982) data set. The following is the notation used in those tables: B is the estimated value of the coefficient on the lagged value of the series, while 30 is the estimated value of the coefficient on the intercept, 01 is the estimated coefficient on time, 82 is the estimated 80 coefficient on time squared, and so forth. We use "s" to designate the value of the standard deviation of the errors for the equation belonging to each data series. In all four tables asterisks on the coefficient and standard deviation symbols indicate their values when the restriction B x 1 is imposed. hp(B), tp(B), K,(B) and Sp(B), with p = 1, ... , 4, designate the test statistics for each value of the included time trend. h1(B) is the same statistic as 3, of Dickey and Fuller (1976), while t1(B) is the same as their 7,. Asterisks on the Kp and SI) statistics indicate that lag-length 12 was used in computing the long-run variance used in these statistics. Lag length four was used for those statistics without asterisks. In Table 18 31 designates the standardized coefficient on time computed without restriction, while 31* is the unbiased estimate and 31** is the estimate subject to the restriction discussed above. Table 18 presents the results of testing for a unit root with only a linear time trend present. 0f the two "Dickey-Fuller"-type statistics, the h1(B) statistic tends to reject the unit-root hypothesis far more readily when only a linear time trend is present. However, the performance of these statistics depends strongly on the estimate of 61 used for the interpolation. Consider first the results using the unrestricted estimate of 61. Then for real gross national product (GNP), per-capita real GNP, industrial production, employment, the 81 GNP deflator, real wages, the money stock and stock prices the null hypothesis of a unit root is rejected at all four significance levels using the h1(B) statistic. This statistic does not reject the null at the one, 2.5 and five percent levels for nominal GNP, but it does reject it at the 10 percent level. For the unemployment rate the null hypothesis is not rejected at the one and 2.5 percent levels, but it is rejected at the remaining two levels of significance. The nominal wage series does not reject the unit root at the one percent level using this statistic, but the null is rejected at the remaining three levels. The remaining series -- the Consumer Price Index (CPI), velocity and bond yield -- all fail to reject the null hypothesis of a unit root at all levels of significance. For t1(B) the only data series for which the unit-root hypothesis is rejected at all four levels is industrial production. For seven series -- nominal GNP, the CPI, the nominal wage, the money stock, velocity, bond yield and stock prices -- the statistic fails to reject the null at all four levels. And for the remaining six series t1(B) generally fails to reject for significance levels with lower numerical percentage values, but does give some rejections at higher numerical percentages. For example when used with real GNP this statistic does not reject B = 1 at one percent or 2.5 percent, but does so at five and 10 percent. For the remaining series the results are: per-capita real GNP: statistic fails to reject the null at one, 2.5 and five Series Real GNP Nominal GNP Per-capita real GNP Industrial production Employment Unemployment rate GNP deflator Consumer Price Index Nominal wage Real wage Money stock Velocity Bond yield Stock prices Series Real GNP Nominal GNP Per-capita real GNP Industrial production Employment Unemployment rate GNP deflator Consumer Price Index Nominal wage Real wage Money stock Velocity Bond yield Stock prices Series Real GNP Nominal GNP Per-capita real GNP Industrial production Employment Unemployment rate GNP deflator Consumer Price Index Nominal wage Real wage Money stock Velocity Bond yield Stock prices 82 TABLE UNIT-ROOT TESTS WITH A “0 0.5864 0.7060 0.9252 0.0494 1.1340 0.4924 0.2063 0.0409 0.4071 0.3770 0.1281 0.0520 “0.3632 0.0791 A 00* 0.0186 0.0419 0.0053 0.0481 0.0191 0.0200 0.0059 -0.0015 0.0305 0.0128 0.0623 -0.0322 -0.0793 -0.0035 A ‘1 0.0662 0.0394 0.0429 0.0676 0.0431 -0.0031 0.0355 0.0072 0.0408 0.0759 0.0464 -0.0048 0.0137 0.0179 18 LINEAR TIME TREND A “1 0.00418 0.00386 0.00273 0.00662 0.00162 -0.00142 0.00182 0.00041 0.00276 0.00273 0.00287 -0.00032 0.00393 0.00285 3 . 0.00036 0.00043 0.00036 -0.00009 -0.00008 -0.00043 0.00036 0.00027 0.00027 0.00016 -0.00010 0.00040 0.00396 0.00065 A 5* 0.0055 0.0043 0.0054 “0.0008 -0.0019 -0.0019 0.0068 0.0046 0.0040 0.0041 -0.0016 0.0059 0.0134 0.0040 I 0.0632 0.0980 0.0637 0.0978 0.0376 0.4625 0.0511 0.0568 0.0678 0.0359 0.0619 0.0671 0.2870 0.1586 gt 0.0649 0.0987 0.0656 0.1016 0.0385 0.4920 0.0519 0.0570 0.0683 0.0371 0.0623 0.0677 0.2921 0.1609 1108) “7.5522 “3.9355 “8.0320 “17.4994 “8.8485 “20.1485 “5.4386 “1.4767 “4.3603 “9.0131 “4.1458 “5.9514 5.2490 “7.8017 A B 0.8762 0.9355 0.8683 0.8409 0.8894 0.7481 0.9330 0.9866 0.9377 0.8712 0.9488 0.9410 1.0750 0.9212 A 5100 0.00558 0.00044 0.00549 “0.00083 “0.00195 “0.00088 0.00684 0.00466 0.00402 0.00418 “0.00161 0.00595 0.01357 0.00406 1:103) “2.0262 “1.3500 “2.1222 “3.0776 “2.1728 “3.3555 “1.8291 “0.6523 “1.4621 “2.3313 “1.4364 “1.6626 1.8565 “1.9424 A A A A series 1‘1 (B) 81(B) K; (B) * 81(B) * Real GNP “10.17 “2.33 “5.55 “1.76 Nominal GNP -6.48 -1.76 -6.30 -1.73 Per-capita real GNP -10.68 -2.41 -6.12 -1.89 Industrial production -18.62 -3.17 -l4.87 -2.86 Employment -12.05 -2.51 -8.18 -2.10 (Unemployment rate -21.58 -3.46 -17.26 -3.14 .GNP deflator -9.44 -2.30 -10.52 -2.41 Consumer Price Index -4.09 -1.27 -4.29 -1.31 Nominal wages -7.82 -1.97 -7.93 -1.98 Real wages -10.41 —2.47 -6.47 -2.06 Money stock -10.02 -2.24 -9.39 -2.16 Velocity “5.52 “1.59 “3.74 “1.29 Bond yield 3.50 1.03 2.03 0.53 Stock prices -8.73 -2.06 -7.67 -1.93 83 TABLE 18 (cont'd.) percent, but rejects at 10 percent; employment: fails to reject at one and 2.5 percent, but rejects at five and 10 percent; the unemployment rate: fails to reject at one, 2.5 and five percent, but does reject at the 10 percent level of significance; GNP deflator: fails to reject at one, 2.5 and five percent levels and is equal to the critical value at 10 percent; real wage: fails to reject the null of a unit root at the one percent level, but rejects the unit-root hypothesis at the remaining three levels of significance. These results are based on interpolating for sample size and for standardized coefficient of time trend in the appropriate tables given in Chapter 2. Some examples of these interpolations follow. For real GNP, h1(B) = -7.55 and t1(B) - -2.03. The interpolated critical values for the first test in this case are: one percent, -2.64; 2.5 percent, -2.21: five percent, -1.87 and 10 percent, -1.52. 84 The interpolated critical values for the second test are: one percent, -2.83: 2.5 percent, -2.43; five percent, -2.1 and 10 percent, -1.73. For nominal GNP, h1(B) = -3.94 and t1(B) = -1.35. Now the critical values for the first test are: one percent, -6.21: 2.5 percent, -5.00; five percent, -4.09 and 10 percent, -3.24. The interpolated critical values for the t-type test are now: one percent, -3.11: 2.5 percent, -2.72; five percent, -2.39 and 10 percent, -2.01. In contrast, when either the restricted estimate of 61 or the unbiased estimate of 61 is used, then there are far fewer rejections of the unit-root hypothesis. For real GNP, nominal GNP, per-capita real GNP, employment, the GNP deflator, the CPI, nominal wages, real wages, the money stock, velocity, bond yield and stock prices, the h1(B) statistic fails to reject the null at all four of the levels of significance used. This is true for both of these estimators of 61. For the t1(B) statistic, the series real GNP, nominal GNP, per-capita real GNP, industrial production, employment, the GNP deflator, the CPI, nominal wage, real wage, the money stock, velocity, bond yield and stock prices all fail to reject the null of a unit root. This is true of all four significance levels and is true whether restricted 61 or unbiased 61 is used. With either version of 61, the unemployment rate series fails to reject the null at the one, 2.5 or five percent levels of significance, but does reject it at the 10 percent level. 85 These results are not surprising in light of the discussion in Chapter 2. Also, as equation (18) in this chapter shows, the unbiased estimator of 61 takes the estimate of 61 (with the restriction B = 1 imposed) and multiplies it by a ratio of two gamma functions. A brief glance at Table 18 shows how close the restricted and the unbiased estimates of 61 are. The numerical contribution of the gamma-function ratio is relatively small. Again, some examples of the results of interpolation in the tables in Chapter 2 follow. For real GNP, using 31** (the restricted estimate), the h1(B) critical values are: one percent, -24.05; 2.5 percent, -20.61; five percent, -17.93 and 10 percent, -15.09. The t1(B) critical values with the restricted estimate are: one percent, -4.04: 2.5 percent, -3.68; five percent, -3.38 and 10 percent, -3.07. For nominal GNP the critical values for h1(B) with the restricted estimate are: one percent, -24.68; 2.5 percent, -21.35: five percent, -18.64 and 10 percent, -15.75. Critical values for the t1(B) statistic for nominal GNP and the restricted estimate are: one percent, -4.07; 2.5 percent, -3.72; five percent, -3.43 and 10 percent, -3.11. The following are the interpolated critical values for these two series when 31*, the unbiased estimate, is used. For real GNP, the critical values for the "h statistic" are: one percent, -24.12: 2.5 percent, -20.66; five percent, -17.97 and 10 percent, -15.08. Critical values for the "t statistic" for real GNP and unbiased 61 86 are: one percent, -4.05; 2.5 percent, -3.69: five percent, -3.40 and 10 percent, -3.08. For nominal GNP, the critical values for the "h statistic" and unbiased 61 are: one percent, -24.68: 2.5 percent, -21.35; five percent, -18.64 and 10 percent, -15.75. Critical values for t1(B) for nominal GNP and the unbiased estimator are: one percent, -4.07; 2.5 percent, -3.73; five percent, -3.43 and 10 percent, -3.12. It may be worthwhile at this point to compare the results when using the critical values provided in Chapter 2, which were tabulated under the null hypothesis that a time trend is present in the data-generating process, with the Dickey-Fuller 0, and ;, statistics, which assume a zero time trend in the data-generating process. The original Dickey-Fuller critical values for the case with trend are given in Table 19. Using the 3, statistic gives failure to reject the unit-root hypothesis for real GNP, nominal GNP, per-capita real GNP, industrial production, employment, the GNP deflator, the CPI, nominal wages, real wages, the money stock, velocity, bond yields and stock prices. This occurs at significance levels up to 10 percent. Only the unemployment rate series shows different behavior. When applied to the unemployment rate series the 3, test fails to reject the unit-root hypothesis at the one, 2.5 and five percent significance levels, but then rejects it at the 10 percent level. Considering the 7, test proposed by Dickey and Sample size 25 50 100 250 500 Sample size 25 50 100 250 500 w Source: Fuller 1976, p. 87 TABLE 19 A A CRITICAL VALUES FOR THE p, and T, STATISTICS 0.01 “22.5 “25.7 “27.4 “28.4 “28.9 “29.5 0.01 “4.38 “4.15 “4.04 “3.99 “3.98 “3.96 0.025 “19.9 “22.4 “23.6 “24.4 “24.8 “25.1 0.025 “3.95 “3.80 “3.73 “3.69 “3.68 “3.66 Pr Significance 0.05 0.10 “17.9 “15.6 “19.8 “16.8 “20.7 “17.5 “21.3 “18.0 “21.5 “18.1 “21.8 “18.3 1? Significance 0.05 0.10 “3.60 “3.24 “3.50 “3.18 “3.45 “3.15 “3.43 “3.13 “3.42 “3.13 “3.41 “3.12 371, 373. level 0.90 “3.66 -3.71 -3.74 -3.75 -3.76 -3.77 level 0.90 -1.14 -1.19 -1.22 -1.23 -1.24 -1.25 Fuller, the results are very similar -- shows any signs of rejecting the unit-root hypothesis. 0.95 “2.51 “2.60 “2.62 “2.64 “2.66 0.95 “0.80 “0.87 “0.90 “0.92 “0.93 “0.94 0.975 “1.53 “1.66 “1.73 “1.78 “1.78 “1.79 0.975 “0.50 “0.58 “0.62 “0.64 “0.65 “0.66 only one series the series listed above now fail to reject the null hypothesis of a unit root, at levels, with the exception of And even that series fails to hypothesis at the one percent rejecting the null hypothesis 10 percent significance levels. reject the unit-root and 2.5 percent levels, 0.99 “0.43 “0.65 “0.75 “0.82 “0.84 “0.87 0.99 “0.15 “0.24 “0.28 “0.31 “0.32 “0.33 A11 all four of these significance the unemployment rate series. for only the five percent and Again, these conclusions are based on using the original critical values in Fuller 88 (1976) and interpolating for sample size only. Note that this shows that h1(§) and t1(B) when unrestricted 61 is used give considerably different results from those obtained when either the restricted 61 or the unbiased 61 estimates are used, or when the original Dickey- Fuller critical values are used. While h1(B) and t1(§) tend generally to give the same answers whether the original statistic critical values or restricted 61 or unbiased 61 are used, the difference is more pronounced when unrestricted 61 is used in the interpolation tests. When unrestricted 61 used, the h1(B) statistic rejects the unit- root hypothesis much more frequently than does its t- statistic counterpart. We turn now to the matter of the performance of the K1(B) and 81(5) statistics proposed by Ouliaris, Park and Phillips (1988). Recall that these statistics are, respectively, "corrected" versions of the h1(B) and t1(B) statistics. Ouliaris, Park and Phillips do not provide critical values for the case p = 1 in their paper, but recall that they argue that their model does contain "all of the unit root models considered previously in the literature as special cases." They specifically cite the paper of Phillips and Perron (1988) in which the "2 statistics" were presented (Ouliaris, Park and Phillips 1988, p. 4). In that paper, in turn, Phillips and Perron argue that the Dickey-Fuller critical values can be used with the 2(a) and 2(tg) statistics, because the limiting 89 distributions of these statistics are identical to those of the original, untransformed statistics when 0&2 = a (Phillips and Perron 1988, p. 341). So we will use those critical values in determining the results that the K1(B), K1(B)*, 81(5) and 51(B)* statistics give.“ The results are similar to those we have already seen when we looked at the interpolation tests for restricted 61 and unbiased 61, namely that in almost all cases the null of a unit root is not rejected. The K1(B) statistic fails to reject the unit-root hypothesis at the four selected significance levels for all series except industrial production and the unemployment rate. For the industrial production series, the null hypothesis is rejected at the 10 percent level, but is not rejected at the other three levels of significance; for the unemployment rate series the null is rejected at the five and 10 percent levels, but not at the other two levels. When we use K1(B)* (which uses lag length 12 for the long-run variance computation), this statistic fails to reject the unit root for any series at any level of significance. The same pattern is followed for the two "S statistics." Again, the null of a unit root is not rejected for any series except industrial production and the unemployment rate. For the former, the null is not rejected for the one, 2.5 and five percent levels, but is rejected at the 10 percent level; for the latter series the pattern is exactly the same. Also once again the version of the "S 90 statistic" using lag length 12 (designated by an asterisk) fails to reject the null at all. It may be useful again to give some examples of the use of these statistics. The unemployment rate series yields the following statistics: K1(B) = -21.58, K1(B)* = “17.26; S103) = “3.46, Sl(fi)* -= -3.14. Critical values for the first pair of statistics, performing interpolation in the Fuller (1976) tables for sample size, are: one percent, -27.00; 2.5 percent, -23.32: five percent, -20.49 and 10 percent, -17.34. For the second pair of statistics the critical values are: one percent, -4.07: 2.5 percent, -3.75: five percent, -3.46 and 10 percent, -3.16. For the Consumer Price Index the statistics are K1(B) = -4.09, K1(B)* = -4.29; 31(3) = -1.27, 81(B)* = -1.3l9. Performing the same sort of interpolation, the critical values for K1 are: one percent, -27.57: 2.5 percent, -23.73: five percent, -20.80 and 10 percent, -17.58. For the 51 statistics the critical values are: one percent, -4.03; 2.5 percent, -3.73: five percent, -3.45 and 10 percent, -3.15. Table 20 contains estimates of the statistics Ouliaris, Park and Phillips call KQ(B) and 82(B), for the case when a squared term is added to the time trend. This table also contains estimates of the intercept, the coefficients on the time-trend terms, and of the coefficient on the lagged value of the time series. Also included are h2(B) and t2(B), as well as versions of K2 and 82 when lag 91 TABLE 20 UNIT ROOT TESTS WITH A SECOND-ORDER POLYNOMIAL IN TIME A A A Series 00 “1 e2 s Real GNP 0.779 0.00332 0.0000325 0.063 Nominal GNP 1.063 0.00179 0.0000599 0.098 Per-capita real GNP 1.068 0.00189 0.0000190 0.064 Industrial production 0.031 0.00824 “0.0000088 0.098 Employment 1.216 0.00207 “0.0000040 0.038 Unemployment rate 0.503 -0.00225 0.0000103 0.466 GNP deflator 0.203 0.00187 “0.0000009 0.052 Consumer Price Index 0.298 “0.00159 0.0000243 0.056 Nominal wage 0.481 0.00214 0.0000146 0.068 Real wage 0.533 0.00263 0.0000156 0.036 Money stock 0.127 0.00350 “0.0000051 0.062 Velocity 0.386 “0.00757 0.0000502 0.064 Bond yield 0.092 “0.02019 0.0003400 0.262 Stock prices 0.247 -0.00084 0.0000563 0.156 A A A Series a * alt 02* st Real GNP 0.012 0.00098 “0.0000099 0.065 Nominal GNP 0.052 “0.00054 0.0000158 0.099 Per-capita real GNP “0.040 0.00124 “0.0000142 0.066 Industrial production 0.055 -0.00045 0.0000033 0.102 Employment 0.023 -0.00038 0.0000038 0.039 Unemployment rate 0.055 -0.00297 0.0000314 0.495 GNP deflator “0.002 0.00093 “0.0000070 0.052 Consumer Price Index 0.011 -0.00041 0.0000061 0.057 Nominal wages 0.031 0.00021 0.0000009 0.069 Real wages 0.004 0.00091 “0.0000107 0.037 Money stock 0.064 -0.00014 0.0000011 0.063 Velocity “0.03 0.00027 0.0000013 0.068 Bond yield 0.229 “0.02172 0.0003617 0.262 Stock prices 0.001 0.00036 0.0000029 0.162 A A A Series B h2(B) t2(B) Real GNP 0.839 -9.84 -2.20 Nominal GNP 0.905 “5.82 -1.75 Per-capita real GNP 0.850 -9.17 -2.13 Industrial production 0.826 -19.18 -3.16 Employment 0.881 -9.54 -2.16 Unemployment rate 0.749 -20.12 -3.32 GNP deflator 0.934 “5.37 “1.72 Consumer Price Index 0.921 -8.66 -2.41 Nominal wages 0.928 —5.07 -1.58 Real wages 0.821 -12.55 -2.35 Money stock 0.945 -4.44 -1.47 Velocity 0.768 “23.39 “3.57 Bond yield 1.031 2.19 0.81 Stock prices 0.852 -14.71 -2.77 92 TABLE 20 (cont'd.) A A A A series :2 (B) 82 (B) R2 (B) * 82 (B) * Real GNP -13.68 -2.60 -6.81 -1.83 Nominal GNP -8.97 -2.15 -8.24 -2.07 Per-capita real GNP -12.59 -2.50 -6.83 -1.84 Industrial production -20.87 -3.29 -16.87 -2.97 Employment -13.31 -2.56 -9.10 -2.11 Unemployment rate -21.30 -3.40 -17.06 -3.09 GNP deflator -9.67 -2.26 -10.85 -2.38 Consumer Price Index -15.51 -3.01 -17.74 -3.18 Nominal wages -9.02 -2.11 -8.99 -2.11 Real wages -15.92 -2.68 -7.86 -1.8 Money stock -10.78 -2.31 -10.13 -2.24 Velocity -24.83 -3.67 -17.58 -3.14 Bond yield 2.30 0.86 3.73 1.81 Stock prices -16.96 -2.96 -l4.42 -2.74 length 12 was used to compute the long-run variance (these are again indicated by asterisks). As can be seen, it now becomes much more difficult to reject the unit-root hypothesis. Both tests show some rejections of the unit root for the industrial production, unemployment rate and velocity series, but that's essentially all. For industrial production, both tests do not reject the null for the one, five and 10 percent levels of significance, but do reject it at the 20 percent level. This also holds true for the unemployment rate series. The velocity series does not reject the null at the one or five percent levels for either K2(B) or 82(3), but does reject it for the 10 and 20 percent levels of significance. Both KQ(B)* and 82(B)* fail to reject the unit root hypothesis for any series at any of these significance levels. Similarly, Table 21 presents the test statistics and 93 TABLE 21 UNIT ROOT TESTS WITH A THIRD-ORDER POLYNOMIAL IN TIME Series A A A A Go 01 C2 03 Real GNP 0.873 -0.0008 0.00021 -0.0000019 Nominal GNP 1.067 -0.0051 0.00033 -0.0000029 Per-capita real GNP 1.239 -0.0023 0.00020 -0.0000019 Industrial production -0.032 0.0168 -0.00015 0.0000008 Employment 1.563 0.0056 -0.00009 0.0000007 Unemployment rate 0.650 -0.0203 0.00056 -0.0000046 GNP deflator 0.202 0.0051 -0.00010 0.0000008 Consumer Price Index 0.431 -0.0063 0.00013 -0.0000006 Nominal wage 0.478 0.0018 0.00003 -0.0000001 Real wage 0.654 -0.0004 0.00014 “0.0000011 Money stock 0.116 0.0067 -0.00009 0.0000007 Velocity 0.387 -0.0067 0.00003 0.0000002 Bond yield 0.496 0.0684 -0.00299 0.0000328 Stock price 0.249 0.0074 -0.00014 0.0000014 A A A A Series 00* 01* 02* 03* Real GNP 0.024 -0.00128 0.00008 -0.0000010 Nominal GNP 0.092 -0.00796 0.00031 -0.0000032 Per-capita real GNP 0.001 0.00025 0.00003 -0.0000004 Industrial production 0.048 0.00028 -0.00001 0.0000001 Employment 0.018 0.00036 “0.00002 0.0000002 Unemployment rate 0.084 -0.00723 0.00016 -0.0000011 GNP deflator -0.020 0.00349 -0.00009 0.0000006 Consumer Price Index 0.039 -0.00333 0.00007 -0.0000004 Nominal wage 0.041 -0.00130 0.00005 -0.0000005 Real wage 0.015 -0.00094 0.00005 -0.0000006 Money stock 0.051 0.00162 -0.00005 0.0000004 Velocity -0.027 -0.00002 0.00001 -0.0000001 Bond yield 0.032 0.01041 -0.00076 0.0000106 Stock prices 0.010 -0.00062 0.00003 -0.0000002 A Series s 5* B Real GNP 0.063 0.066 0.824 Nominal GNP 0.098 0.099 0.908 Per-capita real GNP 0.064 0.067 0.829 Industrial production 0.097 0.103 0.772 Employment 0.038 0.039 0.844 Unemployment rate 0.467 0.498 0.737 GNP deflator 0.051 0.052 0.927 Consumer Price Index 0.054 0.056 0.896 Nominal wage 0.069 0.069 0.928 Real wage 0.036 0.037 0.787 Money stock 0.062 0.063 0.938 Velocity 0.065 0.068 0.763 Bond yield 0.244 0.254 0.799 Stock prices 0.154 0.163 0.806 94 TABLE 21 (cont'd.) Bfirififl h3(fi) t3(fl) thfi) 83(3) Real GNP “10.73 “2.35 “14.46 “2.72 Nominal GNP “5.63 “1.69 “8.21 “2.03 Per-capita real GNP -10.43 -2.31 -14.11 -2.68 Industrial production -25.09 -3.73 -27.35 -3.88 Employment -12.46 -2.59 -16.76 -2.97 Unemployment rate -21.07 -3.41 -21.80 -3.47 GNP deflator “5.92 “1.89 “9.93 “2.35 Consumer Price Index -11.45 -3.15 -l7.41 -3.47 Nominal wage -5.02 -1.52 -9.04 -2.08 Real wage “14.93 “2.73 “18.31 “3.02 Money stock -5.01 -1.62 -11.35 -2.41 Velocity “23.94 “3.59 “25.44 “3.69 Bond yield -14.05 -2.60 -14.97 -2.68 Stock price “19.24 “3.13 “22.02 “3.34 Series R3(B)* 83(B)* Real GNP “6.39 “1.83 Nominal GNP -6.34 -1.80 Per-capita real GNP -6.93 -1.89 Industrial production -19.94 -3.37 Employment -12.24 -2.57 Unemployment rate —17.42 -3.14 GNP deflator -10.74 -2.43 Consumer Price Index -19.12 -3.58 Nominal wage -9.00 -2.07 Real wage -8.40 -2.04 Money stock -10.72 -2.34 Velocity -17.97 -3.15 Bond yield -7.01 -1.80 Stock price -16.68 -2.92 coefficient values for the various data series when a third- degree polynomial is used to specify the time trend. In this case, both tests only reject the null hypothesis of a unit root at the 20-percent significance level for the industrial production and the velocity series. However, again, at numerically-lower significance levels the unit- root hypothesis cannot be rejected. Again, K3* and S3* fail to reject the unit root at all. 95 Finally, as Table 22 shows, when the time trend is modelled as a fourth-degree polynomial the null of a unit root can't be rejected even at the 20-percent level of significance. This holds true for both K4 and S4, as well as those statistics computed using lag length 12. This does indicate some progression, as higher-order polynomials appear to make it more difficult to reject the unit root null hypothesis and so make it more likely that an investigator will accept the presence of a unit root. 3. A NEW UNIT ROOT TEST IN THE PRESENCE OF A TIME TREND This section takes up application of some new tests for a unit root in the presence of a time trend. These tests were proposed by Schmidt (1989) and Schmidt and Phillips (1989), who argue that these tests should be more powerful than the Dickey-Fuller type statistics when a time trend is present. Once again, the Nelson-Plosser data set will be used to investigate how these new tests perform in an empirical application. But before discussing the results of these new tests, we will describe them briefly. These statistics are grounded in a parametrization of the unit-root problem that was proposed by Bhargava (1986): yt = Y + it + xt, (19) Here, Y represents level, and (t represents deterministic 96 TABLE 22 UNIT ROOT TESTS WITH A FOURTH-ORDER POLYNOMIAL IN TIME Series Real GNP Nominal GNP Per-capita real GNP Industrial production -0.031 Employment Unemployment rate GNP deflator Consumer Price Index Nominal wage Real wage Money stock Velocity Bond yield Stock prices Series Real GNP Nominal GNP Per-capita real GNP Industrial production Employment Unemployment rate GNP deflator Consumer Price Index Nominal wage Real wage Money stock Velocity Bond yield Stock prices Series Real GNP Nominal GNP Per-capita real GNP Industrial production Employment Unemployment rate GNP deflator Consumer Price Index Nominal wage Real wage Money stock Velocity Bond yield Stock prices A A 00 01 0.873 “0.0007 1.323 0.0080 1.239 “0.0057 0.0166 1.754 0.0117 0.948 “0.0820 0.155 0.0109 0.450 “0.0138 0.553 0.0133 0.699 0.0037 0.093 0.0124 0.402 “0.0083 0.646 0.0477 0.231 0.0095 “4 “0.0000000017 “0.0000001746 0.0000000484 0.0000000006 “0.0000000350 0.0000003931 -0.0000000373 0.0000000193 “0.0000001025 “0.0000000391 “0.0000000351 0.0000000048 0.0000002562 “0.0000000079 A A a * a * 0.034 “0.0041 0.107 “0.0125 0.016 “0.0042 0.024 0.0044 0.004 0.0036 0.243 “0.0443 “0.051 0.0105 0.087 “0.0117 0.015 0.0054 0.010 0.0003 0.028 0.0069 “0.031 0.0007 0.055 0.0045 “0.060 0.0127 A “2 0.00020 -0.00053 0.00043 -0.00014 -0.00040 0.00392 -0.00042 0.00043 “0.00065 -0.00011 -0.00039 0.00009 “0.00156 -0.00024 S 0.064 0.098 0.065 0.097 0.037 0.463 0.051 0.053 0.068 0.036 0.062 0.065 0.244 0.156 A 02* 0.00028 0.00063 0.00034 “0.00018 “0.00020 0.00219 “0.00046 0.00041 “0.00036 “0.00003 “0.00034 “0.00003 “0.00039 “0.00057 A “3 -0.00000017 0.00001882 “0.00000783 0.00000067 0.00000643 -0.00006848 0.00000687 “0.00000486 0.00001455 0.00000440 0.00000642 -0.00000082 -0.00000124 0.00000296 A B 0.824 0.879 0.831 0.771 0.823 0.719 0.934 0.903 0.909 0.767 0.937 0.759 0.778 0.811 A 03* “0.00000594 “0.00001121 “0.00000838 0.00000242 0.00000357 “0.00003984 0.00000780 “0.00000506 0.00000854 0.00000111 0.00000587 0.00000046 0.00000244 0.00000901 TABLE 22 (cont'd.) 97 A a 0 Series at Real GNP 0.0000000401 0.066 Nominal GNP 0.0000000647 0.100 Per-capita real GNP 0.0000000641 0.067 Industrial production “0.0000000105 0.103 Employment “0.0000000209 0.039 Unemployment rate 0.0000002393 0.499 GNP deflator “0.0000000437 0.052 Consumer Price Index 0.0000000210 0.055 Nominal wage “0.0000000637 0.069 Real wage “0.0000000121 0.037 Money stock “0.0000000331 0.063 Velocity “0.0000000025 0.069 Bond yield 0.0000000571 0.256 Stock price “0.0000000459 0.162 Series x4(p) 84(B) Real GNP “14.36 “2.67 Nominal GNP “11.37 “2.29 Per-capita real GNP “13.66 “2.61 Industrial production “27.33 “3.77 Employment “17.45 “3.16 Unemployment rate “21.67 “3.57 GNP deflator “8.70 “2.13 Consumer Price Index “15.00 “3.26 Nominal wage “10.02 “2.30 Real wage “19.28 “3.14 Money stock “11.07 “2.39 Velocity “25.45 “3.66 Bond yield “17.31 “2.92 Stock price “21.49 “3.05 - trend. In order to insure that the A A h4(B) 54(5) “10.74 “2.31 “7.37 “1.81 “10.34 “2.26 “25.16 “3.63 “14.19 “2.90 “22.50 “3.63 “5.35 “1.70 “10.72 “3.02 “6.34 “1.85 “16.32 “2.89 “5.12 “1.66 “24.32 “3.58 “15.53 “2.76 “18.72 “2.82 K4Cfi)* 34(3)* “6.12 “1.74 “9.10 “2.03 “6.28 “1.76 “19.53 “3.22 “11.06 “2.63 “14.84 “3.08 “8.40 “2.10 “15.35 “3.28 “8.60 “2.14 “7.57 “2.00 “9.86 “2.26 “16.87 “3.02 “10.18 “2.22 “15.55 “2.53 distribution of the test statistic does not depend on nuisance parameters, Bhargava uses this version of the model to derive a unit-root test under invariance considerations. the model to derive LM-type statistics. However, Schmidt and Phillips use the test statistics that Schmidt and Phillips propose also have the appealing quality that their distributions do not depend on the nuisance parameters Y and 5. Their model also has the 98 additional advantage of offering clear interpretations of the nuisance parameters in the cases where the null hypothesis of a unit root does not hold. Clearly, if B = 1 in the model, then the unit-root hypothesis holds. The statistics that Schmidt and Phillips give for the unit-root tests are derived from the regression Ayt = intercept + 0St_1-+ error, t = 1, ... , T (20) where 0 = 0 under the null hypothesis. In order to interpret the statistics, it is necessary to understand the source of the regression. Imposing B = 1 on the model gives the restricted maximum likelihood estimates (MLEs) of E and of (Y + X0) as E = mean Ay = (Y1- - yp/(T - 1) (21) and N " (‘P + X0) = Y1 ' E (22) We have the latter because under the null hypothesis the intercept and initial condition terms are not identified separately, but they are individually identified under the alternative hypothesis. Also note that in their derivation of these LM tests, Schmidt and Phillips assume that 6t above is distributed as i.i.d. normal, with mean 0 and variance 2 0e . Further, the variable St,1 is defined as: 99 ~ ~ .- St-l = yt,1 “ (y + X0) “ €(t“1), t = 2, ... , T (23) St_1 thus represents lagged residuals from the above model in levels, although the parameters of the model are estimated in differences. Schmidt and Phillips propose the following test statistics for the unit-root hypothesis:' 0 = T0 (24) and 1 = usual t-statistic for 0 = 0 (25) Comparing these statistics to the Dickey-Fuller 0, and 7, statistics gives an intuitive argument as to why these might be preferable to the Dickey-Fuller statistics. The Dickey-Fuller statistics are based on the regression of y on an intercept, time trend and lagged y and, as Schmidt and Phillips point out, this is equivalent to a regression of Ay on the same variables. Further, it can be shown that the latter regresssion in turn is equivalent to Ayt = intercept + p8,-1 + error, t = 2, ... , I? (26) with §t_1 the residual from an ordinary least squares regression of‘y,,1 on an intercept and a time trend. Thus, 3, is the estimated coefficient on §bq_and 7, is the t“ statistic to test p = 0. The distinction here, however, is the nature of the residuals gbd_and 5043 The parameters used to calculate St_1)come from an estimate of the model in 100 1gyg1g: parameters from an estimate of the model in gittgrgnggg are used to estimate 8&4, The problem is that, given that y is assumed to be integrated of order one (I(1)) under the null hypothesis, a regression of y,_1 on an intercept and time in levels is "spurious" and so is not to be recommended, as Schmidt and Phillips indicate. Granger and Newbold (1986, pp. 208-209) also point out in a simulation study that regressions in levels tend to be particularly prone to being spurious, and warn against routinely taking a high R-squared or high adjusted R-squared as indicating the presence of a relationship, especially when the associated value of the Durbin-Watson statistic is low. In their paper Schmidt and Phillips include a number of Monte Carlo experiments which test their 0 and 7 statistics against the Dickey-Fuller 3”, ; 6, and 7,. B' To review, the 3p and ;, tests are from a regression of y on its lagged value and an intercept, with no time trend present. Schmidt and Phillips conclude that the p and 7 tests they propose are about equally powerful, and are actually more powerful than the 3, and ;, tests, except when the standardized initial condition term Xo* ( = Xo/ae) is A large, while p” and 7” tests against 1gy§1;§tgtign§ry alternatives, but have little are more powerful than the other power against trend-stationary alternatives. In this section of the chapter we consider how these new proposed tests perform on the data collected by Nelson and Plosser (1982). 101 Table 23 presents the critical values for the two statistics given by Schmidt and Phillips, while Table 24 displays the results of using the Nelson-Plosser data set with these statistics. In the latter table, 0 and 7 are the test statistics computed from each series, while B is again the estimated value of the coefficient on the lagged value of the data series in each regression. Unlike the previous section, the data were used in level and in logarithmic form. T 25 50 100 200 T 25 50 100 200 Note that this is the same treatment that TABLE 23 SELECTED CRITICAL VALUES FOR THE 0 0.01 “20.4 “22.8 “23.8 “24.8 0.01 “3.90 “3.73 “3.63 “3.61 0.025 “17.9 “19.6 “20.4 “20.9 0.025 “3.50 “3.39 “3.32 “3.30 Critical values for p 0.05 0.10 0.20 0.90 “15.7 “17.0 “17.5 “17.9 “13.4 “14.3 “14.6 “14.9 “10.9 “11.4 “11.6 “11.8 “3.43 “3.37 “3.35 “3.31 Critical values for 1 0.05 “3.18 “3.11 “3.06 “3.04 0.10 “2.85 “2.80 “2.77 “2.76 0.20 “2.48 “2.46 “2.45 “2.45 0.90 “1.28 “1.29 “1.29 “1.29 Source: Schmidt and Phillips 1989, p. 31 AND T TESTS 0.95 “2.98 “2.78 “2.75 “2.72 0.95 “1.17 “1.16 “1.17 “1.16 0.975 “2.51 “2.39 “2.32 “2.30 0.975 “1.08 “1.08 “1.07 “1.07 0.99 “2.16 “2.03 “1.92 “1.89 0.99 “1.00 “0.99 “0.97 “0.97 Park and Choi (1988) used in examining the same data set. Note also that we will examine significance levels one percent, 2.5 percent, five percent, 10 percent and 20 percent. 102 For the data in level form, the results are quite simple. For all time series in level form, both the p and 7 tests indicate acceptance of the null hypothesis that Q = 0, at all five levels of significance. This then implies acceptance of the unit-root hypothesis for this data. The situation is a bit different with the data in logarithmic form. For logged data, both tests give acceptance of the unit-root hypothesis for real GNP, nominal GNP, per-capita real GNP, employment, the GNP deflator, the CPI, the nominal wage, the real wage, the money stock, velocity, bond yield and stock prices. However, the tests do differ with respect to the industrial production and the unemployment rate series. In the case of the industrial production series, the p test indicates acceptance of the unit-root hypothesis (that is, the test accepts the null that 0 = 0) at the one percent and 2.5 percent levels, then rejects the unit-root hypothesis for the five percent, 10 percent and 20 percent levels. The behavior of the 7 test is similar in that it also accepts the unit-root hypothesis at the higher significance levels (in this case not only the one percent and 2.5 percent levels, but also the five percent level), and rejects the hypothesis at the 10 percent and 20 percent levels. This is paralleled by the behavior of the p and 7 tests when applied to the unemployment rate series. In this case the p test fails to reject the unit-root hypothesis at 103 TABLE 24 UNIT ROOT TEST USING THE p AND T STATISTICS A Series p 1 B Real GNP “2.2140 “1.0365 “0.0357 Nominal GNP “0.7728 “0.6087 “0.0892 Per-capita real GNP “5.5300 “1.6595 “0.0892 Industrial production “1.6495 “0.8992 “0.0149 Employment “8.2059 “2.0401 “0.1013 Unemployment rate “11.2773 “2.4158 “0.1392 GNP deflator “2.1731 “1.0300 “0.0265 Consumer Price Index “1.5924 “0.8833 “0.0144 Nominal wage “0.7878 “0.6159 “0.0111 Real wage “2.5713 “1.1198 “0.0362 Money stock “0.6925 “0.5788 “0.0085 Velocity “4.2411 “1.4500 “0.0416 Bond yield “1.4145 “0.8272 “0.0199 Stock prices “2.1739 “1.0324 “0.0217 " " A Series p 1 B Log real GNP “7.1215 “1.8960 “0.1149 Log nominal GNP “4.1740 “1.4336 “0.0673 Log per-capita real GNP “7.2071 “1.9081 “0.1162 Log industrial production “16.4387 “2.9388 “0.1481 Log employment “8.2161 “2.0414 “0.1014 Log unemployment rate “19.2392 “3.2422 “0.2375 Log GNP deflator “4.4284 “1.4807 “0.0540 Log Consumer Price Index “2.3571 “1.0766 “0.0212 Log nominal wage “4.4810 “1.4885 “0.0631 Log real wage “6.6501 “1.8278 “0.0937 Log money stock “4.1716 “1.4359 “0.0509 Log velocity “6.5980 “1.8191 “0.0647 Log bond yield “1.4350 “0.8332 “0.0202 Log stock prices “8.2520 “2.0432 “0.0825 the one percent and 2.5 percent levels, and rejects it at the five percent, 10 percent and 20 percent levels. These results are matched exactly by the results for the 1 test, which also accepts for the "tighter" critical levels and reverses its conclusions for the latter three critical levels. Finally, we consider some "corrected" versions of the Schmidt and Phillips statistics. These corrected 104 statistics come from relaxing the assumption that the errors are independently and identically distributed. Schmidt and Phillips derive the asymptotic distributions for p and 7 under "fairly general" conditions (Schmidt and Phillips 1989, p. 7 and Appendix A), but these require corrections in order to remove the effects of dependent and possibly heterogeneous errors. Defining T of = lim T'1E( 82) T“- £2.31 t (27) 8 02 = lim T’1E $2) , S = e , 2e- ( T t 2:; 1 (28) then what is needed are consistent estimators of these two parameters. Multiplying p by the ratio 02/0,2 gives a corrected test statistic whose asymptotic distribution is the same that 0 would have under independently and identically distributed errors, so the critical values displayed in Table 23 are also asymptotically correct. Similarly, multiplying 7 by the consistently-estimated ratio a/ae makes the same sort of correction and again the critical values in Table 23 are asymptotically correct for this statistic. Consistent estimators of 0,2 and 02 are given by Schmidt and Phillips (1989, p. 9) as 1» 2 (29) 52 = 7“1 E ,2}, c 105 and T 4 T - 2 - 32(2) e Tip-18, + 2212 2 2,2“ (30) s-i t- 8+1 These are for 0,2 and 02, respectively. The 6; are residuals from the least-squares regression yt = a + By,_1-+ 6t + 6:. The results of correcting the Schmidt-Phillips statistics are given in Table 25. Again, because only general guidelines on the selection of lag length for the weighting scheme involved in the long-run variance computation are given, we used both lag length four and 12. A single asterisk designates the former, a double asterisk the latter. Also, the time series were again estimated in level and logged form. However, the information these series give is similar regardless of whether or not logs were taken. For the data series in level form the 0 statistic indicates that the null hypothesis of a unit root cannot be rejected for 12 of the series, regardless of lag length used in estimating the long-run variance. These series are real GNP, nominal GNP, per-capita real GNP, industrial production, the GNP deflator, the CPI, nominal wage, real wage, the money stock, velocity, bond yield and stock prices. The two series that show somewhat different behavior are employment and the unemployment rate. For employment, the null hypothesis is not rejected up to the 10 106 TABLE 25 UNIT ROOT TESTS WITH CORRECTED p AND T STATISTICS Series p* 1* p** 1** Real GNP “3.20 “1.25 “1.49 “0.85 Nominal GNP “1.05 “0.71 “0.87 “0.65 Per-capita real GNP “8.29 “2.03 “4.28 “1.46 'Industrial production “1.69 “0.91 “0.84 “0.64 Employment “12.76 “2.54 “8.81 “2.11 Unemployment rate “15.53 “2.84 “11.97 “2.49 GNP deflator “3.87 “1.38 “4.26 “1.44 Consumer Price Index “2.98 “1.21 “2.97 “1.21 Nominal wage “1.34 “0.80 “1.43 “0.83 Real wage “3.66 “1.34 “1.82 “0.94 Money stock “1.96 “0.98 “1.80 “0.93 Velocity “3.91 “1.39 “2.49 “1.11 Bond yield “2.04 “0.99 “2.55 “1.11 Stock prices “2.47 “1.10 “2.79 “1.17 Series p* 1* p** 7** Log real GNP “9.81 “2.23 “5.07 “1.60 Log nominal GNP “6.68 “1.81 “6.50 “1.79 Log per-capita real GNP “9.88 “2.23 “5.28 “1.63 Log industrial production “17.58 “3.04 “13.77 “2.69 Log employment “11.39 “2.40 “7.56 “1.96 Log unemployment rate “20.77 “3.37 “16.15 “2.97 Log GNP deflator “8.43 “2.04 “9.52 “2.17 Log Consumer Price Index “4.76 “1.53 “4.95 “1.56 A Log nominal wage “7.97 “1.99 “8.08 “2.00 Log real wage “7.89 “1.99 “4.39 “1.49 Log money stock “10.06 “2.23 “9.42 “2.16 Log velocity “6.14 “1.76 -4.32 -1.47 Log bond yield “2.06 -1.00 -2.92 -1.19 Log stock prices “9.20 “2.16 “8.11 “2.03 percent level of significance, but it is rejected at the 20 percent level. However, when lag length 12 is used, the null hypothesis is not rejected at any of these five significance levels. For the unemployment rate series, the null is not rejected at the one percent, 2.5 percent or five percent significance levels, but it is rejected at the 10 percent and 20 percent levels. When lag length 12 is used 107 instead in the correction of p, then the series fails to reject the null up to the 10 percent level of significance, but it is rejected at the 20 percent level. These results are exactly duplicated by the tests using the corrected 7 statistic and the data in level form. Again all series (except for the employment and unemployment rate series) fail to reject the unit-root null hypothesis at any of these significance levels. And once again the results for the employment and the unemployment rate series are as described in the previous paragraph. Considering the series in logarithmic form, we encounter results that are similar to those we have already seen with the hlv t1, K1 and 31 statistics. Looking first at the corrected 0 statistic, again 12 data series fail to reject the null hypothesis at any of the five levels of significance used. These series are: real GNP, nominal GNP, per-capita real GNP, employment, the GNP deflator, the CPI, the nominal wage, real wage, the money stock, velocity, bond yield and stock prices. The industrial production series fails to reject the null at the one and 2.5 percent levels, then rejects it for the five, 10 and 20 percent levels of significance. The unemployment rate series fails to reject the null at only the one-percent level, then rejects it for the remaining four significance levels. The situation is only slightly different when lag length 12 is used instead. The same 12 series again fail to reject the null hypothesis at any of the chosen levels of significance. The only 108 differences are in the industrial production and unemployment rate series, and these are very slight. Now, with lag length 12, the corrected 0 fails to reject the null up to the 10 percent level for industrial production, and rejects the null for only the 20 percent level. Similarly, the statistic for the unemployment rate series indicates failure to reject the unit root at the one, 2.5 and five percent levels, but rejection of the null at significance levels 10 percent and 20 percent. Once again, these results are almost completely duplicated when the corrected 7 statistic is used. The same 12 series fail to reject the unit root at any of the levels of significance chosen, regardless of lag length involved. Also, the results for the unemployment rate series are just as noted above and for both lag lengths. There is a very slight difference for industrial production with lag length four: the null hypothesis is not rejected at the one, 2.5 and five percent levels, but it is rejected at the 10 and 20 percent levels (in other words, the null is not rejected at one additional level of significance now). However, when lag length 12 is used instead, then the results for logged industrial production are the same as listed above for the corrected 0 statistic with lag length 12. Note that these results are generally very similar to those for the p and 7 statistics. Perhaps the biggest difference is that in level form 0 and 7 completely failed to reject the unit-root null hypothesis while the corrected 109 p and the corrected 7 statistics do show some rejections at some significance levels for the employment and unemployment rate series. However, in logged form, p and r and corrected p and corrected 1 give virtually identical results. 4. CONCLUSIONS This chapter has sought to shed some light on the performance of various proposed tests of the unit-root hypothesis, by applying those tests to a data set of 14 individual time series compiled by Nelson and Plosser (1982). The data set contains various macroeconomic series, and the shortest data set has only 62 annual observations while the longest has 111 observations. The tests of the unit-root hypothesis considered in this chapter included the Dickey-Fuller (1976, 1979) tests, tests proposed by Park and Choi (1988) and Ouliaris, Park and Phillips (1988), and two tests proposed by Schmidt and Phillips (1989). Unfortunately, the results of all these tests are not unambiguous. The original Dickey-Fuller tests, with the critical values provided in Fuller's 1976 text, essentially always find unit roots. Applying the Dickey-Fuller tests, but with the corrected critical values provided in Chapter 2 of this study, turns up some rejections of the unit-root hypothesis, it no restrictions are placed on the estimated coefficient of time trend. Including various degrees of polynomials as the time-trend specification appears to make rejection of the unit-root hypothesis nearly impossible. 110 The tests of Schmidt and Phillips also indicate the presence of unit roots in most series, except for the industrial production and unemployment rate series. In the next chapter we will take up another pair of tests, which are meant to Clarify the situation but by asking a slightly different question. Those tests examine the question of whether a macroeconomic time series can be characterized as being stationary or integrated. ENDNOTES ENDNOTES 1. Nelson and Plosser take natural logarithms because of the "tendency of economic time series to exhibit variation that increases in mean and dispersion in proportion to absolute level." This also motivates their assumption that trends are linear in the data once it is transformed (Nelson and Plosser 1982, p. 141). 2. Some sequence {et) satisfies an inyarigngg pringiplg if the stochastic process constructed by [at] Wn()=—"L’ I I QJE'ZP e, where r 6 [0,1] and n . 1 402 = 11m —E e )2 n ‘23 . 1 n 111“]. n = 11;“ 328mg) + 21im 3f; 2 E