ASYMPTOTIC THEORY FOR LONG-MEMORY TIME SERIES

By

Dongin Lee

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Economics

1994

This is to certify that the dissertation entitled "Asymptotic Theory for Long-Memory Time Series," presented by Dongin Lee, has been accepted towards fulfillment of the requirements for the Ph.D. degree in Economics. Major professor. Date: June 27, 1994.

ABSTRACT

ASYMPTOTIC THEORY FOR LONG-MEMORY TIME SERIES

By Dongin Lee

In most cases economic time series are nonstationary rather than stationary around deterministic trends. Nonstationary time series are usually analyzed with integrated process models. This dissertation considers a generalized integrated process in which the differencing parameter is allowed to be a fractional value. For d ∈ (-1/2, 1/2), the I(d) process is stationary and invertible. For 0 < d < 1/2, the autocorrelations of the I(d) process are positive and decline so slowly that their sum is infinite in the limit, while for -1/2 < d < 0 the autocorrelations of the I(d) process are negative for all lags and their sum goes to zero. Therefore, as long as d ∈ (-1/2, 1/2) and d ≠ 0, the standard ARIMA model cannot be applied to the I(d) process.

Chapter 2 considers a stationarity test against I(d) alternatives. Kwiatkowski, Phillips, Schmidt and Shin (KPSS) proposed a test of the null hypothesis of stationarity. It is shown in Chapter 2 that the KPSS test is consistent against I(d) alternatives with d ∈ (-1/2, 1/2) and d ≠ 0. It can therefore be used to distinguish short memory and long memory stationary processes. The simulation results show that a rather large sample size, such as T = 1000, will be necessary to distinguish reliably between a long memory process and a short memory process with comparable short-term autocorrelation.

Chapter 3 considers the power of Dickey-Fuller unit root tests against I(d) alternatives with d ∈ (-0.5, 0.5). The Dickey-Fuller tests are shown to be consistent against these alternatives. Simulations show high power of the tests against stationary fractionally integrated alternatives, and reveal some interesting features of the power function at and around the boundary (d = 0.5) of the stationary region.

Chapter 4 considers several estimators of the differencing parameter in the I(d) model. Specifically, the minimum distance estimator (MDE) suggested by Tieslau, Schmidt and Baillie (1994) is compared to the exact MLE and to approximate MLEs of various forms. Both the exact MLE and the approximate MLE of d are $\sqrt{T}$-consistent and asymptotically normal for d ∈ (-1/2, 1/2), while this is true for the MDE only for d ∈ (-1/2, 1/4). Simulations show that, if the mean of the process is unknown, the MDE is comparable to the MLE in a reasonably sized sample when the number of autocorrelations used is more than two or three.
ACKNOWLEDGMENTS

First of all, I would like to thank the members of my dissertation committee for their guidance and advice. Without their assistance the completion of this dissertation would not have been possible. I would especially like to express my thanks to Professor Peter Schmidt, the committee chair. Throughout the entire process of writing this dissertation he provided helpful guidance and careful comments which were essential to completing it. I would also like to thank Professor Richard Baillie, who introduced me to the main subjects of this dissertation as well as taught me time series econometrics.

There are many other people, both professors and graduate students, who have been important to me during the whole program. Some of them are so special to me that I would like to name them. I gratefully acknowledge that I learned statistics from Professor James Stapleton and Professor James Hannan, and econometrics from Professor Peter Schmidt, Professor Richard Baillie and Professor Ching-Fan Chung. I am forever indebted to those professors. I am also grateful to have met Junsoo Lee, Yongcheol Shin and Kyungso Im, the graduate students who studied econometrics with me. I am especially grateful to have met Kyungso, who has been my classmate and officemate for five years. The tie between us is more than that of officemate or classmate. Without him, studying econometrics would not have been fun and profitable.

Special thanks also go to the administrative staff in the department. I especially acknowledge Mrs. Ann Feldman, who gave me practical advice from time to time which made the whole process of my study smooth and enjoyable.

Finally, I would like to thank my wife, Jaekyung, and my two children, Minyoung and Changwoo. Words would not be enough for their love, understanding and encouragement. I also wish to express my thanks to my parents for their prayer and moral support.

TABLE OF CONTENTS

LIST OF TABLES

CHAPTER 1  INTRODUCTION

CHAPTER 2  POWER OF THE KPSS TEST OF STATIONARITY AGAINST FRACTIONALLY-INTEGRATED ALTERNATIVES
  1. Introduction
  2. Preliminaries
  3. Consistency Against I(d) Alternatives
  4. Power in Finite Samples
  5. Concluding Remarks

CHAPTER 3  POWER OF DICKEY-FULLER UNIT ROOT TESTS AGAINST STATIONARY FRACTIONALLY-INTEGRATED ALTERNATIVES
  1. Introduction
  2. Preliminaries
  3. Consistency of DF Tests against I(d) Alternatives
  4. Power in Finite Samples
  5. Conclusion

CHAPTER 4  FINITE SAMPLE PERFORMANCE OF THE MINIMUM DISTANCE ESTIMATOR IN THE FRACTIONALLY-INTEGRATED MODEL
  1. Introduction
  2. The MDE and the Asymptotic Properties of the Estimate
  3. The Exact MLE, the Approximate MLE and Their Asymptotic Properties
  4. The Sample Mean, Sample Autocovariances and Sample Autocorrelations
  5. The Finite Sample Properties of the MDE and MLE in the I(d) Model
  6. Concluding Remarks

CHAPTER 5  CONCLUSION

LIST OF REFERENCES

LIST OF TABLES

TABLE 2-1  POWER OF THE η̂μ TEST AGAINST I(d) ALTERNATIVES
TABLE 2-2  POWER OF THE η̂τ TEST AGAINST I(d) ALTERNATIVES
TABLE 2-3  POWER OF THE η̂μ AND η̂τ TESTS AGAINST I(d) ALTERNATIVES VERSUS SIZE IN THE PRESENCE OF AR(1) ERRORS
TABLE 2-4  POWER OF LO'S MODIFIED R/S TEST AGAINST I(d) ALTERNATIVES
TABLE 3-1  POWER OF COEFFICIENT-TYPE DF UNIT ROOT TESTS AGAINST I(d) ALTERNATIVES
TABLE 3-2  POWER OF T-STATISTIC-TYPE DF UNIT ROOT TESTS AGAINST I(d) ALTERNATIVES
TABLE 3-3  POWER OF COEFFICIENT-TYPE DF TESTS AGAINST STATIONARY AR(1) ALTERNATIVES AND AGAINST STATIONARY I(d) ALTERNATIVES
TABLE 3-4  POWER OF T-STATISTIC-TYPE DF TESTS AGAINST STATIONARY AR(1) ALTERNATIVES AND AGAINST STATIONARY I(d) ALTERNATIVES
TABLE 4-1  THE SAMPLE MEAN OF THE I(d) PROCESS AND ITS NORMALIZED VARIANCE
TABLE 4-2  THE SAMPLE AUTOCOVARIANCES OF THE I(d) PROCESS
TABLE 4-3  THE SAMPLE AUTOCORRELATIONS OF THE I(d) PROCESS
TABLE 4-4  THE MDE OF d IN THE I(d) MODEL
TABLE 4-5  MLE AND MDE IN THE I(d) MODEL
TABLE 4-6  IRREGULAR REPLICATIONS IN THE EXACT MLE FOR THE I(d) MODEL

CHAPTER 1

INTRODUCTION

Many macroeconomic time series are nonstationary processes rather than stationary processes around deterministic trends, as first found in Nelson and Plosser (1982). In the recent literature these nonstationary macro series are modeled by integrated processes. Economic theory, such as real business cycle theory, the permanent income-rational expectations theory of consumption, and the efficient market hypothesis in financial economics, has provided theoretical grounds for integrated time series processes.
However, if the first order autocorrelation of a series is too small for an integrated process while the autocorrelations at large lags are too persistent for a stationary ARMA process, it will be hard to decide whether the series is stationary or not. Or, if a series looks like a unit root process (an integrated process of order one), but the first differenced series has small negative autocorrelations, so that the differenced series looks overdifferenced, what is a natural model for the series? It might be neither a usual short-memory stationary process nor a unit root process.

This dissertation considers an alternative type of series, called long-memory processes. In a typical long-memory process the autocorrelations of the process are persistent, but it is neither a stationary ARMA process nor a nonstationary integrated process.

Long-memory persistence in a time series was observed in hydrology and referred to as the "Hurst effect" quite a long time ago. In the mid 1960s it was modeled as a "fractional noise process," proposed by Mandelbrot and Van Ness (1968). In economics, the possibility of long-memory processes was implied in some early literature, for example Granger (1966), and this kind of process was investigated in a formal way in the early 1980s after Granger and Joyeux (1980) and Hosking (1981) provided an alternative definition of the long-memory process. Their process is called a "fractionally integrated process." Granger (1980) provided an argument for the theoretical possibility of fractionally integrated processes in economic time series: he showed that an aggregate of heterogeneous but persistent AR(1) processes follows a fractionally integrated process. Geweke and Porter-Hudak (1983) proved that the two classes of long-memory processes (Mandelbrot and Van Ness versus Granger-Joyeux-Hosking) are equivalent. In this dissertation we will follow the definition of the fractionally integrated process of Granger (1980), Granger and Joyeux (1980) and Hosking (1981).

A time series {y_t} is said to be a fractionally integrated process of order d, or I(d), with zero mean, if it has the following form:

(1)   $(1 - L)^d y_t = \varepsilon_t$,   d ∈ (-1/2, 1/2),

where L is the lag operator, d is the differencing parameter, and ε_t is a white noise process with zero mean and finite variance σ². The expression $(1 - L)^d$ is defined by means of the binomial expansion:

(2)   $(1 - L)^d = \sum_{i=0}^{\infty} \pi_i L^i$,   $\pi_i = \Gamma(i-d)/[\Gamma(i+1)\Gamma(-d)]$,   i = 0, 1, 2, ...,

where Γ(·) is the gamma function. Note that if d is a positive integer, {y_t} is a nonstationary integrated process. However, if d ∈ (-1/2, 1/2), {y_t} is stationary and invertible. The AR(∞) and MA(∞) representations of I(d) are as follows:

(3)   $y_t = \sum_{i=1}^{\infty} \phi_i y_{t-i} + \varepsilon_t$,   $\phi_i = -\Gamma(i-d)/[\Gamma(i+1)\Gamma(-d)]$,   i = 1, 2, 3, ...,

(4)   $y_t = \sum_{i=0}^{\infty} \theta_i \varepsilon_{t-i}$,   $\theta_i = \Gamma(i+d)/[\Gamma(i+1)\Gamma(d)]$,   i = 0, 1, 2, ....

The variance σ_y² and the autocorrelations ρ_i of the I(d) process are also expressed in terms of gamma functions, as follows:

(5)   $\sigma_y^2 = \sigma^2\,\Gamma(1-2d)/\Gamma^2(1-d)$,

(6)   $\rho_i = \frac{\Gamma(i+d)\,\Gamma(1-d)}{\Gamma(i-d+1)\,\Gamma(d)} = \prod_{k=1}^{i}\frac{k-1+d}{k-d}$,   i = 1, 2, 3, ....

To determine the partial autocorrelations, we write the best linear predictor $\hat{y}_{t+1}$ of $y_{t+1}$ given $y_1, y_2, \ldots, y_t$ as $\hat{y}_{t+1} = \phi_{t1}y_t + \phi_{t2}y_{t-1} + \cdots + \phi_{tt}y_1$, where the coefficients $\phi_{ti}$ are computed by the Durbin-Levinson algorithm of Levinson (1947), Durbin (1960) and Whittle (1963) as

(7)   $\phi_{ti} = -\binom{t}{i}\frac{\Gamma(i-d)\,\Gamma(t-d-i+1)}{\Gamma(-d)\,\Gamma(t-d+1)}$,   i = 1, 2, ..., t.

So the partial autocorrelations α_i are as follows:

(8)   $\alpha_i = \phi_{ii} = -\frac{\Gamma(i-d)\,\Gamma(1-d)}{\Gamma(-d)\,\Gamma(i-d+1)} = \frac{d}{i-d}$,   i = 1, 2, 3, ....
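The gamma-function expressions in Equations (4) and (6) are convenient to evaluate numerically through log-gamma functions, which avoids overflow at large lags. The following short Python sketch is not part of the dissertation (NumPy and SciPy are assumed, and the function names are illustrative only); it computes the MA(∞) weights and the autocorrelations of an I(d) process and illustrates the hyperbolic decay of ρ_i discussed next.

```python
# Minimal sketch of equations (4) and (6): MA(infinity) weights and autocorrelations
# of an I(d) process, evaluated through log-gamma functions to avoid overflow.
import numpy as np
from scipy.special import gammaln

def ma_weights(d, n):
    """theta_i = Gamma(i+d) / [Gamma(i+1) Gamma(d)], i = 0, ..., n-1."""
    i = np.arange(n)
    return np.exp(gammaln(i + d) - gammaln(i + 1) - gammaln(d))

def acf(d, n):
    """rho_i = Gamma(i+d) Gamma(1-d) / [Gamma(i-d+1) Gamma(d)], i = 0, ..., n-1."""
    i = np.arange(n)
    return np.exp(gammaln(i + d) + gammaln(1 - d) - gammaln(i - d + 1) - gammaln(d))

if __name__ == "__main__":
    d = 0.3
    rho = acf(d, 1001)
    # hyperbolic decay: rho_i * i^(1-2d) should approach Gamma(1-d)/Gamma(d) as i grows
    for i in (10, 100, 1000):
        print(i, rho[i] * i ** (1 - 2 * d))
    print("limit:", np.exp(gammaln(1 - d) - gammaln(d)))
```

For d = 0.3 the printed ratios should settle near Γ(1-d)/Γ(d), in line with the asymptotic behavior of the autocorrelations given in Equation (11) below.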
Since $\Gamma(x) \sim \sqrt{2\pi}\,e^{-x}x^{x-1/2}$ as x → ∞, we can find the asymptotic behavior of the coefficients in the AR(∞) and MA(∞) representations, and also of the autocorrelations at large lags. Thus

(9)    $\phi_i \sim -i^{-d-1}/\Gamma(-d)$   as i → ∞,

(10)   $\theta_i \sim i^{d-1}/\Gamma(d)$   as i → ∞,

(11)   $\rho_i \sim i^{2d-1}\,\Gamma(1-d)/\Gamma(d)$   as i → ∞.

Comparing the asymptotic behavior of the autocorrelations of an I(d) process and a stationary ARMA process, the autocorrelations of an I(d) process satisfy $\rho_i \sim C_1 i^{2d-1}$, while the autocorrelations of an ARMA process satisfy $\rho_i \sim C_2 r^{-i}$, where C₁, C₂ and r > 1 are constants. In other words, the autocorrelations of an ARMA process decrease rapidly (exponentially), while the autocorrelations of an I(d) process decrease very slowly (hyperbolically).

Sometimes the spectral density at zero frequency is used as a measure of persistence in a time series. The spectral density of the I(d) process is

(12)   $f(\lambda) = |1 - z|^{-2d}\,\sigma^2/(2\pi) = |2\sin(\lambda/2)|^{-2d}\,\sigma^2/(2\pi)$,   for -π < λ < π, where $z = e^{-i\lambda}$.

From Equation (12), f(0) is zero for d < 0 and is infinite for d > 0. For the case of d > 0, since sin(λ) ~ λ as λ → 0, the behavior of f(λ) near the origin is as follows:

(13)   $f(\lambda) \sim \lambda^{-2d}\,\sigma^2/(2\pi)$   as λ → 0.

We can generalize the I(d) process in such a way that we can apply it to more general time series models for economic data. A time series {y_t} is said to be an autoregressive fractionally integrated moving average process of order p, d, q, or ARFIMA(p,d,q), with zero mean, if it has the following form:

(14)   $\Phi(L)(1 - L)^d y_t = \Theta(L)\varepsilon_t$,   d ∈ (-1/2, 1/2),

where Φ(L) is a p-th order lag polynomial of autoregressive parameters, Θ(L) is a q-th order lag polynomial of moving average parameters, and ε_t is a white noise process as before. Furthermore, we assume that all the roots of Φ(L) and Θ(L) lie outside the unit circle, for stationarity and invertibility respectively, and also that Φ(L) and Θ(L) have no common roots, for identification of the parameters. This is a generalization of the ARIMA process in the sense that the order of integration is allowed to be a fractional value.

Comparing the I(d) process in Equation (1) with the ARFIMA(p,d,q) process in Equation (14): since $(1 - L)^d y_t = [\Theta(L)/\Phi(L)]\varepsilon_t \equiv u_t$, where u_t is ARMA(p,q), and since $\Phi(L)y_t = \Theta(L)(1 - L)^{-d}\varepsilon_t \equiv \Theta(L)z_t$, where z_t is I(d), an ARFIMA process y_t is an I(d) process with ARMA(p,q) error, and it is also an ARMA process with I(d) error. Therefore the characteristics of an ARFIMA process are similar to those of an I(d) process. The ARFIMA process is stationary and invertible for d ∈ (-1/2, 1/2). The autocovariances γ_i of the ARFIMA process are expressed in terms of the autocorrelations of the ARMA process u_t and the autocovariances of the I(d) process z_t as follows:

(15)   $\gamma_i = \sum_{j=-\infty}^{\infty}\rho'_j\,\gamma'_{i-j}$,   i = 0, 1, 2, ...,

where ρ′_j are the autocorrelations of the ARMA process u_t, and γ′_j are the autocovariances of the I(d) process z_t. The autocovariances given in Equation (15) involve an infinite sum; however, if all the roots of Φ(L) lie outside the unit circle, the autocorrelations of the ARFIMA process still behave like $i^{2d-1}$ as i → ∞, just as for the I(d) process. This occurs because in Equation (15) the autocorrelations of the ARMA process, ρ′_j, decrease quickly, while the autocovariances of the I(d) process, γ′_j, decrease slowly as j increases. Thus the asymptotic behavior of the autocorrelations of the ARFIMA process is dominated by the γ′_j. For a formal proof, see Brockwell and Davis (1991), for example. Because the ARFIMA process is an I(d) process with ARMA error, or an ARMA process with I(d) error, its spectral density is
(16)   $f(\lambda) = |\Theta(z)|^2\,|\Phi(z)|^{-2}\,|1 - z|^{-2d}\,\sigma^2/(2\pi)$,   $z = e^{-i\lambda}$,   for -π < λ < π.

Similarly to the case of the I(d) process, in Equation (16) we have $f(\lambda) \sim [\Theta(1)/\Phi(1)]^2\,\sigma^2/(2\pi)\,\lambda^{-2d}$ as λ → 0 for d > 0, and f(0) = 0 for d < 0.

This dissertation investigates two basic concerns about the stationary I(d) process. First, if we apply a unit root test or a stationarity test, as is common practice in time series applications, to a stationary I(d) process, what will be the results? This is not a trivial question because in both tests the usual alternative hypothesis is not a long memory process; the alternative is an I(0) process for the unit root test and an I(1) process for the stationarity test. Second, how can we measure the long-memory characteristics of a given data set? Because any statistic based on I(d) data depends on the value of d, the differencing parameter, the second question is directly related to the estimation of the differencing parameter d.

The plan of this dissertation is as follows. In Chapter 2 we will prove the consistency of the KPSS test against a stationary I(d) alternative, where the KPSS test, suggested by Kwiatkowski, Phillips, Schmidt and Shin (1992), is a test of stationarity against an I(1) alternative. Simulations are performed to provide evidence on the power of the test in finite samples. We will also compare the power of the KPSS test against I(d) alternatives to the power of the modified rescaled range test suggested by Lo (1991), which is another type of stationarity test that is designed to have power against stationary long-memory alternatives. Furthermore, in Chapter 2 we will compare the power of the KPSS test against a stationary I(d) process to the size of the KPSS test in the presence of stationary AR(1) errors. From these results we can form some idea of the ability of the KPSS test to distinguish a long-memory process, such as I(d), from an autocorrelated but short-memory process, such as AR(1).

In Chapter 3 we will prove the consistency of the Dickey-Fuller tests against a stationary I(d) alternative. In a previous article, Sowell (1990) provided the asymptotic distribution of the Dickey-Fuller statistics when the true process is I(d) with d ∈ (1/2, 3/2), so our asymptotic theory is a natural extension of Sowell's results. The finite sample performance of the Dickey-Fuller tests against I(d) alternatives with values of d ∈ (0, 3/2) will be investigated, similarly to Diebold and Rudebusch (1991a), but more extensively. Also in Chapter 3 we will compare the power of the Dickey-Fuller tests against stationary I(d) alternatives to their power against stationary AR(1) alternatives.

Chapter 4 will consider the estimation of the differencing parameter in the stationary long-memory model. In the recent literature several methods of estimation for the stationary long memory model have been proposed. These include regression-based estimation procedures, a conditional sum of squares estimator, the exact MLE, several types of approximate MLE, and a minimum distance estimator (MDE). We discuss the asymptotic properties of the MDE and MLE, and we also compare the finite sample performances of the estimates using simulations. In addition we will consider the estimates of the mean, autocorrelations and autocovariances of the I(d) process, because they are the basis for the minimum distance estimates, and the estimates of these parameters are not $\sqrt{T}$-consistent for values of d in some range.
Finally, in Chapter 5 we summarize our findings and make some suggestions for further research.

CHAPTER 2

POWER OF THE KPSS TEST OF STATIONARITY AGAINST FRACTIONALLY-INTEGRATED ALTERNATIVES

1. Introduction

Let {z_t} be a time series with zero mean, and let $Z_t = \sum_{j=1}^{t}z_j$ be its cumulation (partial sum), for t = 1, 2, .... Then we will say that z_t is a short memory process if it satisfies the following two requirements.

(A1)   $\sigma^2 = \lim_{T\to\infty} T^{-1}E(Z_T^2)$ exists and is non-zero.

(A2)   ∀ r ∈ [0,1],   $T^{-1/2} Z_{[rT]} \Rightarrow \sigma W(r)$.

In assumption (A2) and throughout this chapter, [rT] denotes the integer part of rT, ⇒ denotes weak convergence, and W(r) is the standard Wiener process (Brownian motion). According to this definition, a short memory process need not be covariance stationary; some heterogeneity in the z_t process is allowed. If z_t is stationary, the "long run variance" σ² is proportional to the spectral density at zero frequency, which is required to be neither zero nor infinite. Assumption (A2) is just the usual "invariance principle" for convergence of partial sums to a Wiener process. Several sets of sufficient conditions for such an invariance principle to hold can be found in the literature. Many authors have used Assumption 2.1 of Phillips (1987, p. 280), which requires the existence of absolute moments of order β, for some β > 2, and strong mixing with mixing coefficients α_m such that $\sum_{m=1}^{\infty}\alpha_m^{1-2/\beta} < \infty$. For example, Lo (1991) defines a short memory process as one that satisfies these assumptions. Our definition above is slightly more general.

At a semantic level, one might object to our definition of short memory, because it implicitly involves conditions on existence of moments as well as restrictions on the persistence of dependence. (For example, an iid Cauchy series is not short memory by our definition.) However, no matter what name they are given, conditions (A1) and (A2) are important, because the enormous recent literature on the problem of distinguishing integrated and stationary series has relied heavily on asymptotics involving Wiener processes, established using the invariance principle (A2). For example, the asymptotic properties of the usual Dickey-Fuller tests and of their various autocorrelation-corrected versions are routinely established in terms of Wiener processes. This asymptotic analysis establishes that the common unit root tests are consistent against short-memory alternatives. Conversely, Kwiatkowski, Phillips, Schmidt and Shin (1992) -- hereafter KPSS -- consider a test of the null hypothesis of stationarity, and show its consistency against unit root alternatives. They also assume the conditions of Phillips (1987) to establish asymptotics in terms of Wiener processes, so their null hypothesis is implicitly that the series is short memory, and they prove consistency against alternatives that are integrated in the sense of being short-memory in first differences.

Some recent papers have considered the properties of tests when neither the data nor their first difference are short memory. These papers have typically assumed that the data are fractionally integrated, or I(d), in the sense of Granger (1980), Granger and Joyeux (1980) and Hosking (1981), and have involved asymptotics in terms of fractional Brownian motion.
For example, Sowell (1990) derived the asymptotic distribution of the Dickey-Fuller unit root tests when the first difference of the variable is I(d), and Diebold and Rudebusch (1991a) demonstrated by simulations the low power of the Dickey-Fuller tests against I(d) alternatives. Lo (1991) showed that a modified version of the rescaled range test of the null hypothesis of short memory is consistent against I(d) alternatives, and provided simulation evidence of its power in finite samples.

Our objective in this chapter is similar to that of Lo. We consider the KPSS test as a test of the null hypothesis of short memory, and we prove that it is consistent against I(d) alternatives. We provide simulation evidence of its power in finite samples, and show that its power compares favorably to the power of Lo's test. We also compare its power against I(d) alternatives to its size distortion in the presence of short memory autocorrelation. Unsurprisingly, a rather large sample size is required to distinguish reliably between a long memory process and a highly autocorrelated short memory process.

2. Preliminaries

KPSS describe their test as a test of the trend stationarity hypothesis. More precisely, we wish to test the hypothesis that deviations of a series from deterministic trend are short memory. We therefore consider the data generating process (DGP):

(1)   $y_t = \psi + \xi t + z_t$,   t = 1, 2, ..., T,

where {y_t} is the observed series and {z_t} represents its deviations from deterministic (linear) trend. KPSS assume the components representation $z_t = r_t + \varepsilon_t$, where r_t is a random walk ($r_t = r_{t-1} + v_t$, with $r_0 = 0$, and where the v_t are iid with zero mean and finite variance), and ε_t is a short memory process that satisfies Assumption 2.1 of Phillips (1987, p. 280), and therefore satisfies assumptions (A1) and (A2) above. They test the "stationarity" hypothesis $H_0$: $\sigma_v^2 = 0$, which implies that $z_t = \varepsilon_t$ is short memory.

Let e_t be the residuals from a regression of y_t on intercept and time (t), and let S_t be the partial sum process of the e_t: $S_t = \sum_{i=1}^{t}e_i$, t = 1, ..., T. Let σ² be the long run variance of the errors z_t, and consider the Newey-West (1987) estimator of σ²:

(2)   $s^2(\ell) = T^{-1}\sum_{t=1}^{T}e_t^2 + 2T^{-1}\sum_{s=1}^{\ell}w(s,\ell)\sum_{t=s+1}^{T}e_t e_{t-s}$.

Here $w(s,\ell) = 1 - s/(\ell+1)$, which guarantees the non-negativity of $s^2(\ell)$. For consistency of $s^2(\ell)$ under the null hypothesis it is necessary that the lag truncation parameter ℓ → ∞ as T → ∞. The rate $\ell = o(T^{1/2})$ will usually be satisfactory [see, e.g., Andrews (1991)]. The KPSS statistic for testing the null of stationarity can then be expressed as follows:

(3)   $\hat{\eta}_\tau = T^{-2}\sum_{t=1}^{T}S_t^2\,/\,s^2(\ell)$.

The KPSS statistic η̂μ is defined in exactly the same way, except that it is based on the residuals $e_t = y_t - \bar{y}$. This corresponds to a regression of y_t on intercept only, and is appropriate if we set ξ = 0 in (1), so that deterministic trend is assumed to be absent. That is, the η̂μ test allows for a non-zero level of y_t but not for trend. In that respect it is similar to Lo's modified rescaled range statistic, which also allows for level but not trend. (Of course, Lo's statistic could easily be modified to allow for linear trend.)

Under the null hypothesis that $z_t = \varepsilon_t$ is a short memory process, $T^{-2}\sum_{t=1}^{T}S_t^2 \Rightarrow \sigma^2\int_0^1 V_2(r)^2\,dr$, where V₂(r) is a so-called second level Brownian bridge, as defined by KPSS, equation (16). Also $s^2(\ell)$ is a consistent estimator of σ². Therefore $\hat{\eta}_\tau \Rightarrow \int_0^1 V_2(r)^2\,dr$, which KPSS tabulate. Similar statements hold for the η̂μ test, with V₂(r) replaced by the standard Brownian bridge $V_1(r) = W(r) - rW(1)$.

Under the alternative that Δz_t is a short memory process, KPSS show that

$(\ell/T)\,\hat{\eta}_\tau \Rightarrow \int_0^1\Big[\int_0^r W^*(s)\,ds\Big]^2 dr\ \Big/\ \int_0^1 W^*(s)^2\,ds$,

where W*(s) is a demeaned and detrended Wiener process, as defined in Park and Phillips (1988, p. 474). Thus the statistic η̂τ is $O_p(1)$ under the null hypothesis and is $O_p(T/\ell)$ under the unit root alternative. Since T/ℓ → ∞ as T → ∞, the test is consistent. A very similar asymptotic distribution result and the same conclusion hold for the η̂μ test.

In this chapter we are concerned not with unit root alternatives, but rather with the alternative that the z_t are fractionally integrated, or I(d), in the sense of Granger (1980), Granger and Joyeux (1980) and Hosking (1981). As a matter of definition, z_t is I(d) if it has the representation

(4)   $(1 - L)^d z_t = u_t$,

where the series {u_t} is short memory. Equivalently, $z_t = (1 - L)^{-d}u_t$. The usual binomial expansion of $(1 - L)^{-d}$ yields the infinite moving average expression $z_t = \sum_{j=0}^{\infty}b_j u_{t-j}$, where $b_j = \Gamma(j+d)/[\Gamma(d)\Gamma(j+1)]$.

Some well known properties of I(d) processes include the following. An I(d) process is stationary and invertible for d in the range (-1/2, 1/2). Its autocorrelations decline slowly, at a hyperbolic rate rather than the usual exponential rate, and so an I(d) process is natural to consider when a series appears to exhibit persistent autocorrelation ("long memory"). For d > 0 the series is so strongly positively autocorrelated that the sum of the autocorrelations diverges and the spectral density of the series at frequency zero is infinite. However, the spectral density at zero of the first differenced series equals zero, so that the first differenced series will appear to be overdifferenced. Thus an analysis of either z_t or Δz_t using standard ARMA models is unlikely to be satisfactory. For d < 0 the converse statements are true: the spectral density at zero of the series equals zero, and yet the spectral density at zero of its partial sum is infinite.

We will proceed under the following Assumption.

ASSUMPTION 1: (i) z_t is I(d) with d ∈ (-1/2, 1/2). (ii) The u_t are iid N(0, σ_u²).

This assumption is slightly stronger than is needed, and slightly stronger than others have made. For example, Sowell (1990, p. 498) does not assume normality, but does assume that the u_t are iid with zero mean and a finite r-th absolute moment for some r ≥ max[4, -8d/(1+2d)]. Lo (1991, p. 1294) follows Taqqu (1975) in assuming normality and stationarity of u_t, but he does not assume that the u_t are iid. Hosking (1984) assumes that the u_t are iid, and he makes a variety of other assumptions ranging from finite second moment to normality; a consistency result that we will quote below relies on u_t having a finite fourth moment. We have deliberately made Assumption 1 strong enough that we can take useful intermediate results from a variety of sources.

The basic tools that we need follow directly from Sowell. Define the partial sum process corresponding to z_t as $Z_t = \sum_{i=1}^{t}z_i$. Define $\sigma_T^2 = \mathrm{var}(Z_T)$. Then Sowell shows that

(5)   $\sigma_T^2 = \sigma_u^2\,\{\Gamma(1-2d)/[(1+2d)\Gamma(1+d)\Gamma(1-d)]\}\cdot[\Gamma(1+d+T)/\Gamma(T-d) - \Gamma(1+d)/\Gamma(-d)]$

and that, as T → ∞,

(6)   $\sigma_T^2\,/\,T^{1+2d} \to \sigma_u^2\,\Gamma(1-2d)/[(1+2d)\Gamma(1+d)\Gamma(1-d)] \equiv \omega_d^2$.

(Thus, for d ≠ 0, requirement (A1) above fails, and the series is not short memory.)
Furthermore, given Assumption 1, Sowell (Theorem 2) shows that, for r ∈ [0, 1],

(7)   $\sigma_T^{-1} Z_{[rT]} \Rightarrow W_d(r)$,

where the "fractional Brownian motion" $W_d(r)$ of Mandelbrot and Van Ness (1968) is defined by the stochastic integral

(8)   $W_d(r) = \int_0^r (r - s)^d\,dW(s)\,/\,\Gamma(d+1)$.

(Thus, for d ≠ 0, requirement (A2) above for the series to be short memory also fails.) Using equation (6), we will rewrite the weak convergence result (7) in a slightly more convenient form:

(9)   $T^{-(d+1/2)} Z_{[rT]} \Rightarrow \omega_d\,W_d(r)$.

Note that if z_t is I(d), its partial sum Z_t is $O_p(T^{d+1/2})$; in contrast, if z_t is short memory, its partial sum is $O_p(T^{1/2})$. This difference in orders in probability drives the consistency of tests based on partial sums against I(d) alternatives.

3. Consistency Against I(d) Alternatives

In this section we prove that the KPSS η̂τ and η̂μ tests are consistent against I(d) alternatives with d ∈ (-1/2, 1/2) and d ≠ 0. To do so, we show that the statistics are $O_p[(T/\ell)^{2d}]$, and so as T → ∞ they diverge to ∞ in probability for d > 0 and converge to 0 in probability for d < 0. Thus an upper tail test (which is standard when unit root alternatives are considered) is consistent against d ∈ (0, 1/2), while a two-tailed test is consistent against d ∈ (-1/2, 0) and against d ∈ (0, 1/2).

For simplicity, we will first consider the η̂μ test, based on the residuals $e_t = y_t - \bar{y}$. Thus we assume that the DGP is of the form of equation (1) with ξ = 0, so that $e_t = z_t - \bar{z}$. Assumption 1 is maintained throughout this section, so that the invariance principle (9) is assumed to hold.

LEMMA 1: (i) $T^{-(d+1/2)}S_{[rT]} \Rightarrow \omega_d\,B_d(r)$, where $B_d(r) = W_d(r) - rW_d(1)$.
(ii) $T^{-(2d+2)}\sum_{t=1}^{T}S_t^2 \Rightarrow \omega_d^2\int_0^1 B_d(r)^2\,dr$.

Proof: $T^{-(d+1/2)}S_{[rT]} = T^{-(d+1/2)}\sum_{t=1}^{[rT]}(z_t - \bar{z}) = T^{-(d+1/2)}\sum_{t=1}^{[rT]}z_t - \{[rT]/T\}\,T^{-(d+1/2)}\sum_{j=1}^{T}z_j \Rightarrow \omega_d W_d(r) - \omega_d\,r\,W_d(1) = \omega_d B_d(r)$, which proves part (i). For part (ii), $T^{-(2d+2)}\sum_{t=1}^{T}S_t^2 = T^{-1}\sum_{t=1}^{T}\{T^{-(d+1/2)}S_t\}^2 \Rightarrow \omega_d^2\int_0^1 B_d(r)^2\,dr$ by the continuous mapping theorem. ∎

THEOREM 1: Suppose that ℓ = 0. Then $T^{-2d}\,\hat{\eta}_\mu \Rightarrow (\omega_d^2/\sigma_z^2)\int_0^1 B_d(r)^2\,dr$, where $\sigma_z^2 \equiv \mathrm{var}(z_t) = \sigma_u^2\,\Gamma(1-2d)/\Gamma^2(1-d)$.

Proof: $T^{-2d}\hat{\eta}_\mu = T^{-(2d+2)}\sum_{t=1}^{T}S_t^2\,/\,s^2(0)$. The asymptotic distribution of the numerator is given by part (ii) of Lemma 1. The denominator, $s^2(0) = T^{-1}\sum_{t=1}^{T}e_t^2$, converges in probability to $\sigma_z^2$; for example, see Hosking (1984, Theorem 2, p. 6). The result then follows by the joint convergence of the numerator and denominator. ∎

The case just treated, ℓ = 0, is appropriate if one is interested in testing the null of white noise against I(d) alternatives, but not if one is interested in testing the null of short memory against I(d) alternatives. For the asymptotic distribution of the statistic under the null of short memory to be free of nuisance parameters, we must pick ℓ such that ℓ → ∞ but ℓ/T → 0 as T → ∞. We now proceed to consider this case.

THEOREM 2: Suppose that, as T → ∞, ℓ → ∞ but ℓ/T → 0. Then, for d ∈ (0, 1/2), $\hat{\eta}_\mu \xrightarrow{p} \infty$; for d ∈ (-1/2, 0), $\hat{\eta}_\mu \xrightarrow{p} 0$.

Proof: $\hat{\eta}_\mu = T^{-(2d+2)}\sum_{t=1}^{T}S_t^2\,/\,T^{-2d}s^2(\ell)$. The asymptotic distribution of the numerator is given by part (ii) of Lemma 1. For d ∈ (0, 1/2), $T^{-2d}s^2(\ell) \xrightarrow{p} 0$ according to Lo, p. 1309, equation (A.5). Similarly, for d ∈ (-1/2, 0), $T^{-2d}s^2(\ell) \xrightarrow{p} \infty$ according to Lo, p. 1310. The result follows immediately. ∎

Theorem 2 implies that the two-tailed η̂μ test is consistent against I(d) alternatives for d ∈ (-1/2, 1/2), d ≠ 0. Obviously the upper tail test is consistent against d ∈ (0, 1/2), while the lower tail test is consistent against d ∈ (-1/2, 0).

In fact, we can say more about $s^2(\ell)$ than the limiting results used to prove Theorem 2.
By doing so, we can establish the following theorem giving the asymptotic distribution of the η̂μ statistic under the alternative, from which Theorem 2 would follow immediately as a corollary.

THEOREM 3: Suppose that, as T → ∞, ℓ → ∞ but ℓ/T → 0. Then, for d ∈ (-1/2, 1/2), $(\ell/T)^{2d}\,\hat{\eta}_\mu \Rightarrow \int_0^1 B_d(r)^2\,dr$.

Proof: $(\ell/T)^{2d}\,\hat{\eta}_\mu = T^{-(2d+2)}\sum_{t=1}^{T}S_t^2\,/\,\ell^{-2d}s^2(\ell)$. The numerator converges to $\omega_d^2\int_0^1 B_d(r)^2\,dr$ according to Lemma 1. To prove the theorem, we therefore show that the denominator converges in probability to $\omega_d^2$. To do so, we first note that, as T → ∞ with ℓ fixed, $\ell^{-2d}s^2(\ell) \xrightarrow{p} \ell^{-2d}\sigma^2(\ell)$, where as a matter of definition $\sigma^2(\ell) = \gamma_0 + 2\sum_{s=1}^{\ell}w(s,\ell)\gamma_s$, with γ_j = j-th autocovariance of z_t and $w(s,\ell) = 1 - s/(\ell+1)$. This is an implication of the fact that the sample autocovariances are consistent estimates of the population autocovariances [see, for example, Hosking (1984)]. We next note that $(\ell+1)\,\sigma^2(\ell) = (\ell+1)\gamma_0 + 2\sum_{s=1}^{\ell}(\ell+1-s)\gamma_s = \mathrm{var}(Z_{\ell+1})$. Taking the limit as ℓ → ∞, $(\ell+1)^{-2d}\sigma^2(\ell) = (\ell+1)^{-(1+2d)}\mathrm{var}(Z_{\ell+1}) \to \omega_d^2$, using equation (6) above. Since $(\ell+1)^{-2d}\sigma^2(\ell)$ and $\ell^{-2d}\sigma^2(\ell)$ have the same limit, this proves the result. ∎

The analysis for the η̂τ test is very similar. It rests on the following generalization of Lemma 1.

LEMMA 2: Let e_t be the residuals from a regression of y_t on (1, t), t = 1, 2, ..., T, and let $S_t = \sum_{i=1}^{t}e_i$. Then $T^{-(d+1/2)}S_{[rT]} \Rightarrow \omega_d\,V_d(r)$, where

$V_d(r) = W_d(r) + (2r - 3r^2)W_d(1) + (-6r + 6r^2)\int_0^1 W_d(s)\,ds$.

Proof: Let $\hat{\psi}$ and $\hat{\xi}$ be the coefficients of intercept and trend in the regression of y_t on (1, t). Then

(10)   $T^{-(d+1/2)}S_{[rT]} = T^{-(d+1/2)}\sum_{t=1}^{[rT]}z_t - \{[rT]/T\}\,T^{1/2-d}(\hat{\psi} - \psi) - \tfrac{1}{2}\{[rT]/T\}\{([rT]+1)/T\}\,T^{3/2-d}(\hat{\xi} - \xi)$.

Furthermore, by the same algebra as in Schmidt and Phillips (1992, pp. 285-286), specialized to their case p = 2, we have

(11)   $T^{1/2-d}(\hat{\psi} - \psi) = 4T^{-(d+1/2)}\sum_{t=1}^{T}z_t - 6T^{-(d+3/2)}\sum_{t=1}^{T}t\,z_t + o_p(1) \Rightarrow 4\omega_d W_d(1) - 6\omega_d\int_0^1 r\,dW_d(r) = \omega_d\{-2W_d(1) + 6\int_0^1 W_d(r)\,dr\}$.

Here we have made use of $\int_0^1 r\,dW_d(r) = W_d(1) - \int_0^1 W_d(r)\,dr$, which follows from Jonas (1983, p. 29). Also

(12)   $T^{3/2-d}(\hat{\xi} - \xi) = -6T^{-(d+1/2)}\sum_{t=1}^{T}z_t + 12T^{-(d+3/2)}\sum_{t=1}^{T}t\,z_t + o_p(1) \Rightarrow -6\omega_d W_d(1) + 12\omega_d\int_0^1 r\,dW_d(r) = \omega_d\{6W_d(1) - 12\int_0^1 W_d(r)\,dr\}$.

Combining (9), (10), (11) and (12),

$T^{-(d+1/2)}S_{[rT]} \Rightarrow \omega_d W_d(r) - \omega_d\,r\,\{-2W_d(1) + 6\int_0^1 W_d(s)\,ds\} - \tfrac{1}{2}\omega_d\,r^2\,\{6W_d(1) - 12\int_0^1 W_d(s)\,ds\} = \omega_d V_d(r)$. ∎

We may note that, for d = 0, V_d(r) is the "second-level Brownian bridge" defined by MacNeill (1978) and Schmidt and Phillips (1992). Given Lemma 2, it is easy to establish the same conclusions for the η̂τ test as were given for the η̂μ test in Theorems 1, 2 and 3. All that is necessary is to replace $B_d(r)$ in Theorems 1 and 3 with $V_d(r)$.

4. Power in Finite Samples

In this section we provide some evidence on the power of the η̂μ and η̂τ tests against I(d) alternatives. This evidence is based on simulations. The calculations were done in FORTRAN using the normal random number generator GASDEV/RAN3 of Press, Flannery, Teukolsky and Vetterling (1986). Observations on an I(d) process for d ∈ [-1/2, 1/2) were generated using the Levinson algorithm [Levinson (1947), Durbin (1960), Whittle (1963)]. We also performed some simulations using I(d) observations generated using the Cholesky decomposition of the error covariance matrix, and got essentially the same results as using the Levinson algorithm. For d ∈ [1/2, 1], observations were generated by cumulating I(d-1) random variates. (Thus, as a matter of definition, z_t is I(.8) if Δz_t is I(-.2), for example.) Given the I(d) series z_t, t = 1, 2, ..., T, data on the observable series y_t
were generated according to equation (1), with ψ = ξ = 0. The values of ψ and ξ do not matter for any of the tests that we consider, except that the η̂μ test and Lo's modified R/S test assume ξ = 0.

Tables 2-1 and 2-2 give the powers of the 5% upper tail η̂μ and η̂τ tests, respectively, against the alternatives d = 0.1, 0.2, ..., 0.9, 1.0, and also d = 0.45 and 0.499. The results are based on 5,000 replications, except that 10,000 replications were used for d = 0.4, 0.45 and 0.499. We have considered only positive values of d because we are primarily interested in testing short memory against long memory, and thus we consider only upper tail tests. Following KPSS, the number of lags ℓ used in the denominator of the statistic was chosen as ℓ0 = 0, ℓ4 = integer[4(T/100)^{1/4}], and ℓ12 = integer[12(T/100)^{1/4}]. We consider sample sizes T = 50, 100, 250 and 500.

Some patterns in Tables 2-1 and 2-2 are clear, and in accord with our expectations. With other things held constant:
(i) Power increases as T increases. This is a reflection of the consistency of the test. The rate of growth of power as T increases depends strongly on the choice of ℓ; it is higher when ℓ is lower.
(ii) Power is lower when ℓ is higher. Note that this is true even for large sample sizes, in accord with the asymptotics of the previous section, which indicate that power depends on (ℓ/T) even asymptotically.
(iii) Power is not very different for η̂μ than for η̂τ. Allowing for deterministic trend does not cost power.
(iv) Power is higher when d is larger; that is, as the alternative hypothesis becomes further from the null.

With respect to point (iv), it is interesting that there is no apparent discontinuity in the power function at or near d = 1/2. As d ↑ 1/2, the series z_t approaches nonstationarity, the one-period autocorrelation approaches unity, and the covariance matrix of (z_1, ..., z_T) approaches singularity. The asymptotic results in the previous section do not hold for d ≥ 1/2, and, if we were to derive the appropriate asymptotic distribution results, they would look rather different for d ≥ 1/2 than for d ∈ (-1/2, 1/2). For d > 1/2, it would not be difficult to derive the relevant asymptotic results, using our asymptotic results and the fact that an I(d) series is the cumulation of an I(d-1) series. However, we established the asymptotic distribution of the KPSS test statistics only for d ∈ (-1/2, 1/2), and in particular not for d = 1/2, so our results cannot be extended in any straightforward way to the case of d = 1/2, and it is not clear that some sort of discontinuity at d = 1/2 can be ruled out. Nevertheless, the power function is smooth in d over the whole range that we consider (from zero to one). This is not a trivial result. For example, in Chapter 3 we find that the powers of the Dickey-Fuller ρ̂μ, ρ̂τ, τ̂μ and τ̂τ tests are continuous at d = 1/2, while the powers of the Dickey-Fuller ρ̂ and τ̂ tests have a discontinuity at d = 1/2. Thus a discontinuity arises only when the series has zero mean and correspondingly level and trend are not extracted. The same appears to be true for the KPSS tests. The KPSS η̂μ test involves extraction of a mean, and the η̂τ test involves extraction of level and trend, and in both cases the power function is continuous at d = 1/2. However, suppose we define a KPSS-type test in the same way as the η̂μ and η̂τ tests, except that level and trend are not extracted; that is, the statistic is based on the raw series rather than the demeaned or detrended series.
Interestingly, this test's power function is discontinuous at d = 1/2. For example, for T = 50 and ℓ = 0, power is .753 for d = .4; .837 for d = .45; .977 for d = .499; .800 for d = .5; and .824 for d = .6. Similar results occur for other values of T and ℓ; at d = 1/2, the power function is continuous from the right but not from the left. The reason why this discontinuity should occur, for both Dickey-Fuller and KPSS tests but not when the data are demeaned or detrended, is an interesting puzzle that remains to be solved.

How optimistic the results in Tables 2-1 and 2-2 are depends largely on the choice of ℓ. With ℓ = 0, both tests show reasonable power against d ≥ 0.3 for T ≥ 100; for example, the power of η̂μ against d = 0.3 is 0.54 for T = 100 and 0.73 for T = 250. However, with ℓ = 0 the tests are susceptible to considerable size distortions in the presence of short-memory autocorrelation. Choosing ℓ large enough to more or less remove these possible size distortions will reduce power very substantially. KPSS provide some evidence on size distortions in the presence of short-memory errors. Specifically, they consider the size of the η̂μ and η̂τ tests in the presence of AR(1) errors, with autoregressive parameter ρ = 0, ±0.5 and ±0.8. For T ≥ 100 and ρ = 0.5, the choice ℓ = ℓ4 is sufficient to keep the size of the 5% test below 0.10, but ℓ = ℓ12 is required if ρ = 0.8. In Tables 2-1 and 2-2, we see that, with ℓ = ℓ4, a fairly large sample size is necessary to attain reasonable power against d ≥ 0.3. For example, the power against d = 0.3 of the η̂μ(ℓ4) test is 0.41 for T = 250 and 0.55 for T = 500; these are about the same as the power of the η̂μ(ℓ0) test for T = 50 and T = 100, respectively. With ℓ = ℓ12, even larger sample sizes are necessary for reasonable power. For example, for T = 500 the power of the η̂μ(ℓ12) test is only 0.35 for d = 0.3 and 0.46 for d = 0.4.

The good power properties of the tests with ℓ = 0 basically reflect the fact that it is not hard to distinguish an I(d) series with d > 0 from white noise, while the poorer power properties with larger values of ℓ reflect the fact that it is harder to distinguish an I(d) series from a substantially autocorrelated short memory series. To elaborate on this last point, Table 2-3 compares the power of the η̂μ and η̂τ tests against I(d) alternatives to their size in the presence of AR(1) errors. Specifically, we compare power against an I(d) alternative with d = 1/3 to size in the presence of AR(1) errors with ρ = 0.5. Both series have a one-period autocorrelation equal to 0.5, but the autocorrelations of the I(1/3) series are much more persistent than those of the AR(1) series. Power against the I(d) series is calculated by simulation as above, using 20,000 replications, while size in the presence of AR(1) errors is taken from KPSS, Table 3.

In Table 2-3, it is clear that the powers of the η̂μ and η̂τ tests against the I(1/3) alternative are larger than the corresponding sizes in the presence of AR(1) errors with ρ = 0.5, with a few exceptions for the η̂τ test when ℓ = 0 and T is small. The difference is most substantial when T is moderately large. For example, for the η̂μ(ℓ4) test with T = 500, compare power of 0.612 to size of 0.090; for the η̂μ(ℓ12) test with T = 500, compare power of 0.383 to size of 0.058.
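For concreteness, a minimal Python sketch of the simulation design described above is given below. It is not the FORTRAN code used for the reported results; the function and variable names are illustrative only. It draws an I(d) sample from the Cholesky factor of the exact I(d) covariance matrix (the alternative generation method mentioned in the previous section), computes the η̂μ statistic of equation (3) with the Newey-West estimator (2), and estimates the rejection frequency at the 5% critical value 0.463 tabulated by KPSS.

```python
# Sketch of the power experiment: I(d) data via Cholesky, KPSS eta_mu statistic, 5% test.
import numpy as np
from scipy.special import gammaln

def id_covariance(d, T, sigma2_u=1.0):
    """T x T covariance matrix of an I(d) process, from the gamma-function formulas."""
    k = np.arange(T)
    gamma0 = sigma2_u * np.exp(gammaln(1 - 2 * d) - 2 * gammaln(1 - d))
    acvf = gamma0 * np.exp(gammaln(k + d) + gammaln(1 - d) - gammaln(k - d + 1) - gammaln(d))
    i, j = np.indices((T, T))
    return acvf[np.abs(i - j)]

def kpss_mu(y, lags):
    """eta_mu of equation (3), with the Newey-West estimator (2) and Bartlett weights."""
    e = y - y.mean()                     # residuals from a regression on an intercept only
    S = np.cumsum(e)
    T = len(y)
    s2 = e @ e / T
    for s in range(1, lags + 1):
        w = 1.0 - s / (lags + 1.0)
        s2 += 2.0 / T * w * np.sum(e[s:] * e[:-s])
    return np.sum(S ** 2) / (T ** 2 * s2)

rng = np.random.default_rng(0)
T, d, reps, crit = 100, 0.3, 1000, 0.463     # 0.463 = 5% upper-tail critical value (KPSS)
l4 = int(4 * (T / 100) ** 0.25)              # the lag choice "l4" used in the text
L = np.linalg.cholesky(id_covariance(d, T))  # factor once, reuse across replications
rejections = sum(kpss_mu(L @ rng.standard_normal(T), l4) > crit for _ in range(reps))
print("estimated rejection frequency:", rejections / reps)
```

Generating through the Cholesky factor costs on the order of T³ operations for the factorization, whereas the Levinson recursion used for the reported results produces equivalent draws in order T² operations, which matters for the larger sample sizes and replication counts.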
It appears that we can hope to distinguish a long memory process from a short memory process with approximately equivalent short-run autocorrelation, but it will require a rather large sample size to do so reliably.

Finally, we compare the power of the η̂μ test to the power of Lo's modified rescaled range test. Table 2-4 gives the power of the 5% upper tail test using Lo's rescaled range statistic, with the critical value given by Table II (p. 1288) of Lo (1991). The format of Table 2-4 is the same as those of Tables 2-1 and 2-2. We use the same specifications for the simulations in terms of d, T and ℓ, and we use the same generated data series as in Tables 2-1 and 2-2. As a general statement, the powers of the η̂μ test and Lo's modified R/S test are fairly similar. However, the η̂μ test is clearly less powerful than the R/S test when power is high, and more powerful when power is low. Thus the η̂μ test is more powerful when T is small and d is close to zero, or when ℓ is ℓ4 or ℓ12; and Lo's modified R/S test is more powerful when T is large, d is close to one and ℓ is ℓ0. In particular, when we choose ℓ = ℓ4 or ℓ12, Lo's modified R/S test has little power in small samples. Thus the η̂μ test seems to enjoy an advantage in power over the R/S test in cases in which ℓ is picked large enough to protect against severe size distortions from short-memory autocorrelation, but this is not necessarily an optimistic conclusion, since these are cases in which neither test has high power.

5. Concluding Remarks

In this chapter we have shown that the KPSS η̂μ and η̂τ statistics can be used to distinguish short memory processes from long memory processes. Specifically, we showed that tests of the null hypothesis of short memory based on these statistics are consistent against long memory alternatives, and we have provided Monte Carlo evidence on their power in finite samples. Their power compares favorably to the power of Lo's modified rescaled range test, which is also consistent against long memory alternatives.

An important practical conclusion that can be drawn from our simulations is that a rather large sample size, such as T = 500 or 1000, will be required to distinguish a long memory process from a short memory process with any reasonable degree of reliability. It is interesting and important to note that this conclusion does not depend much on the strength of the autocorrelation of the series, since what is important is not the size of the autocorrelations, but their persistence. For example, we noted above that an AR(1) process with ρ = 0.5 and an I(d) process with d = 1/3 each imply a one-period autocorrelation of 0.5. With T = 500, choosing ℓ = ℓ4 for the η̂μ test yields size of .09 with the AR errors and power of .61 with the I(d) errors. Now consider a more strongly autocorrelated series, with one-period autocorrelation equal to 0.8, which could be generated by an AR(1) process with ρ = 0.8 or an I(d) process with d = .444. Again with T = 500, results from KPSS and our Table 2-1 indicate that choosing ℓ = ℓ12 yields size of .09 with the AR errors and power of approximately .51 with the I(d) errors. Finally, consider a less strongly autocorrelated series, with one-period autocorrelation of 0.2, which could be generated by an AR(1) process with ρ = 0.2 or an I(d) process with d = .167. With T = 500, picking ℓ = ℓ0 implies size of .13 with the AR errors and power of approximately .47 (found by interpolating in Table 2-1) with the I(d) errors.
Size and power are approximately the same (perhaps to a surprising degree, in fact) in all three cases. The reason is straightforward: with a less strongly autocorrelated series, a smaller value of ℓ is required to keep the size under the null close to its nominal value, but the relevant value of d under the alternative is also smaller. Conversely, with a more strongly autocorrelated series, the relevant value of d under the alternative is larger, but a larger value of ℓ is required to control size distortions under the null.

The KPSS tests and Lo's test do not have any known optimality properties in the present context. An important avenue of future research will be to try to find more powerful tests, perhaps through a systematic application of standard principles of testing to the I(d) model. For example, Robinson (1993) considers the LM test of the hypothesis d = 0 in the I(d) model, and his statistic can apparently be made (asymptotically) robust to short memory autocorrelation using parametric or nonparametric corrections. We might anticipate a gain in finite sample power from this or similar tests, but that remains to be seen.

TABLE 2-1
POWER OF THE η̂μ TEST AGAINST I(d) ALTERNATIVES

Value of d:     0.0    0.1    0.2    0.3    0.4    0.45   0.499  0.5    0.6    0.7    0.8    0.9    1.0
T = 50,  ℓ0    .042   .129   .251   .392   .544   .610   .675   .672   .771   .849   .897   .938   .960
T = 50,  ℓ4    .034   .075   .129   .197   .277   .314   .360   .372   .444   .522   .583   .645   .715
T = 50,  ℓ12   .012   .024   .034   .054   .070   .087   .099   .105   .131   .180   .229   .275   .343
T = 100, ℓ0    .054   .168   .347   .535   .723   .777   .830   .832   .910   .955   .976   .988   .993
T = 100, ℓ4    .048   .099   .185   .272   .386   .429   .481   .474   .566   .645   .708   .767   .826
T = 100, ℓ12   .037   .053   .090   .132   .196   .219   .250   .244   .316   .380   .449   .509   .595
T = 250, ℓ0    .048   .212   .472   .728   .882   .934   .958   .959   .987   .995   .999   1.000  1.000
T = 250, ℓ4    .045   .129   .258   .408   .555   .621   .676   .677   .772   .833   .892   .930   .948
T = 250, ℓ12   .040   .084   .161   .244   .335   .384   .428   .427   .511   .581   .644   .715   .760
T = 500, ℓ0    .049   .267   .598   .836   .960   .982   .990   .991   1.000  1.000  1.000  1.000  1.000
T = 500, ℓ4    .051   .174   .357   .552   .724   .773   .833   .833   .903   .946   .969   .986   .994
T = 500, ℓ12   .049   .122   .219   .352   .462   .511   .573   .578   .662   .747   .910   .864   .898

TABLE 2-2
POWER OF THE η̂τ TEST AGAINST I(d) ALTERNATIVES

Value of d:     0.0    0.1    0.2    0.3    0.4    0.45   0.499  0.5    0.6    0.7    0.8    0.9    1.0
T = 50,  ℓ0    .053   .138   .262   .417   .581   .640   .705   .700   .801   .880   .923   .952   .976
T = 50,  ℓ4    .039   .076   .116   .179   .242   .268   .306   .306   .374   .441   .510   .577   .621
T = 50,  ℓ12   .040   .051   .050   .065   .078   .076   .085   .092   .098   .109   .128   .157   .178
T = 100, ℓ0    .051   .180   .377   .609   .780   .842   .888   .889   .952   .979   .990   .996   .997
T = 100, ℓ4    .043   .090   .175   .272   .364   .413   .461   .461   .565   .653   .714   .771   .824
T = 100, ℓ12   .032   .057   .079   .112   .146   .165   .190   .185   .247   .288   .320   .362   .415
T = 250, ℓ0    .053   .269   .584   .832   .948   .973   .987   .989   .997   1.000  1.000  1.000  1.000
T = 250, ℓ4    .050   .149   .286   .448   .598   .665   .722   .733   .810   .878   .921   .955   .969
T = 250, ℓ12   .044   .094   .160   .230   .304   .353   .390   .383   .471   .552   .624   .710   .742
T = 500, ℓ0    .049   .323   .724   .930   .991   .997   .999   .998   1.000  1.000  1.000  1.000  1.000
T = 500, ℓ4    .049   .189   .411   .623   .798   .846   .892   .885   .948   .975   .990   .994   .999
T = 500, ℓ12   .044   .115   .219   .339   .476   .531   .595   .590   .681   .782   .835   .879   .914

TABLE 2-3
POWER OF THE η̂μ AND η̂τ TESTS AGAINST I(d) ALTERNATIVES VERSUS SIZE IN THE PRESENCE OF AR(1) ERRORS

η̂μ TEST
          Size with AR(1) errors, ρ = 0.5       Power against I(d), d = 1/3
T           ℓ0     ℓ4     ℓ12                    ℓ0     ℓ4     ℓ12
30         .321   .114   .005                   .344   .184   .009
50         .331   .098   .021                   .451   .227   .058
80         .350   .108   .042                   .555   .312   .128
100        .352   .090   .043                   .604   .310   .154
120        .359   .092   .047                   .645   .344   .189
200        .367   .099   .053                   .752   .452   .247
500        .370   .090   .058                   .895   .612   .383
η̂τ TEST
          Size with AR(1) errors, ρ = 0.5       Power against I(d), d = 1/3
T           ℓ0     ℓ4     ℓ12                    ℓ0     ℓ4     ℓ12
30         .425   .129   .178                   .335   .149   .189
50         .486   .113   .047                   .473   .194   .069
80         .521   .124   .046                   .606   .290   .101
100        .538   .107   .047                   .673   .297   .123
120        .542   .114   .054                   .717   .340   .155
200        .559   .121   .054                   .838   .485   .223
500        .586   .110   .062                   .964   .681   .384

TABLE 2-4
POWER OF LO'S MODIFIED R/S TEST AGAINST I(d) ALTERNATIVES

Value of d:     0.0    0.1    0.2    0.3    0.4    0.45   0.499  0.5    0.6    0.7    0.8    0.9    1.0
T = 50,  ℓ0    .012   .064   .170   .341   .519   .604   .675   .681   .794   .874   .919   .950   .967
T = 50,  ℓ4    .000   .001   .002   .001   .002   .002   .002   .002   .003   .001   .003   .002   .002
T = 50,  ℓ12   .006   .002   .001   .001   .000   .000   .000   .000   .000   .000   .000   .000   .000
T = 100, ℓ0    .021   .141   .359   .611   .803   .860   .900   .904   .964   .984   .991   .997   .999
T = 100, ℓ4    .007   .017   .043   .090   .155   .199   .244   .231   .341   .435   .516   .600   .676
T = 100, ℓ12   .001   .000   .000   .000   .000   .000   .000   .000   .000   .000   .000   .000   .000
T = 250, ℓ0    .032   .255   .625   .880   .966   .984   .994   .996   .999   .999   1.000  1.000  1.000
T = 250, ℓ4    .019   .090   .220   .408   .558   .641   .701   .701   .803   .864   .910   .950   .963
T = 250, ℓ12   .005   .013   .020   .036   .055   .067   .087   .082   .119   .150   .213   .274   .319
T = 500, ℓ0    .034   .367   .796   .960   .997   .999   1.000  1.000  1.000  1.000  1.000  1.000  1.000
T = 500, ℓ4    .028   .175   .417   .652   .823   .870   .911   .910   .955   .979   .989   .995   .998
T = 500, ℓ12   .016   .065   .131   .250   .378   .424   .496   .502   .598   .705   .772   .837   .876

CHAPTER 3

POWER OF DICKEY-FULLER UNIT ROOT TESTS AGAINST STATIONARY FRACTIONALLY-INTEGRATED ALTERNATIVES

1. Introduction

In recent years the econometric literature has shown a growing concern for the long run properties of time series data. For example, there has been an enormous amount of work on testing for unit roots and on cointegration. Virtually all of this work has assumed that the data series are either I(0) or I(1) processes. However, this framework is too restrictive for some applications. Following Granger (1980), Granger and Joyeux (1980) and Hosking (1981), we can generate a fractionally integrated, or I(d), model by allowing for a fractional value of the differencing parameter. The I(d) model has been successfully applied to a number of "long memory" series that are stationary and yet display very considerable dependence over long time horizons.

One of the standard topics in the unit root literature is the problem of distinguishing I(1) and I(0) processes. The unit root testing literature typically considers tests of the null hypothesis that the series in question is I(1) against the alternative that it is I(0). The most common tests have been the Dickey-Fuller (hereafter DF) tests of Dickey (1976) and Dickey and Fuller (1979), and various elaborations including the augmented DF test of Said and Dickey (1984) and the DF tests with Phillips-Perron corrections proposed by Phillips (1987) and Phillips and Perron (1988).

In this chapter, we consider the power of the DF unit root tests against I(d) alternatives. We will be mostly concerned with the empirically relevant case that the data are stationary but long memory, so we will consider data generated by the I(d) model with d ∈ (-0.5, 0.5). We will derive the asymptotic distribution of the DF statistics when the data generating process is I(d) with d ∈ (-0.5, 0.5), and show that the tests are consistent
However, Sowell considered I(d) alternatives with d in the range (0.5,1.5); our results are a useful addition to his. We also provide simulation evidence on the finite sample power of the DF tests against I(d) alternatives. Similar results have been presented by Diebold and Rudebusch (19913) and Hassler and Wolters (1993). However, our results cover more values of d than theirs, and in doing so we uncover some interesting results that had previously been missed. In particular, we discover a discontinuity in the power functions of the DF .3 and ‘3 tests at d = 0.5. 2. Preliminaries I Let {2.} be a time series with zero mean, and let Z522]. be its cumulation (partial F1 sum), for t=1,2, Assume that Z. satisfies the following two conditions for some d e (-0.5,0.5): (A1) 0’2 = lim._)e Twz‘" E(ZTZ) exists and is non-zero, (A2) V I6 [0,1], TWO/2w) erT] =9 Owd(l') In assumption (A2) and throughout this chapter, [rT] denotes the integer part of rT, => denotes weak convergence, and Wd(r) is the fractional Brownian motion on [0,1] of Mandelbrot and Van Ness (1968), which is defined by the stochastic integral (1) Wd(r) = £(r — s)d dW(s) / F(d+l), 37 where W(s) is the standard Brownian motion. Note that W..(r) = W(r) for d=0. Note that for d = 0 assumption (A1) is the definition of "the long run variance". So, if d = 0, the long run variance (72 is finite and non-zero, and 2. can be called a "short memory" process. See Chapter 2 for a more detailed discussion. However, a short memory process need not be covariance stationary to satisfy assumption (A2), which is (for d=0) an "invariance principle" for convergence of the partial sum to a standard Brownian motion. Some heterogeneity in the 2. process is allowed. A sufficient set of conditions commonly assumed in the time series literature for such an invariance principle is assumption 2.1 of Phillips (1987, p. 280), which requires the existence of absolute moments of order [3, for B > 2, and strong mixing with mixing coefficients on... such that i 013;” < oo. m=1 When (1 at 0, a wide range of series that satisfy assumptions (A1) and (A2) may be found. Many recent papers focus on the I(d), or fractionally integrated of order d process. As a matter of definition, z. is I(d) if it has the representation (2) (1 - L)d zt = no where L represents the lag operator, and ut is iid with zero mean and finite variance. A generalization of the I(d) process is the ARFIMA(p,d,q) model, which is also of the form given in equation (2) but where u. follows a stationary ARMA(p,q) process. Several sets of sufficient conditions for the series to satisfy the assumptions (A1) and (A2) for (1 ¢ 0 can be found in the literature. For example, Sowell (1990, p. 498) assumes that the u. are iid with zero mean, and a finite r‘h absolute moment for some r 2 max[4,-8d/(1+2d)]. Following Taqqu (1975), Lo (1991, p. 1294) assumes normality 38 and stationarity of u., but does not assume that u. are iid. This is actually slightly stronger than Taqqu (1975), who assumes that z. is strictly stationary and that the absolute 2am moment of the partial sum Z. is Op[a(1+2d)] for some a > 1/(1+2d) for d S 0, and with a = 1 for d > 0. We will follow the way in Chapter 2 by assuming that the u. are iid N(0,o’..2), which is somewhat stronger than the other sets of assumptions, and sufficient for (A1) and (A2). For (1 6 (05,05), the I(d) process is stationary and invertible, but if d at 0 it differs from the usual short memory stationary process. 
The autocorrelations of an 1(0) stationary process decrease exponentially afier some lags, so that the sum of the autocovariances is finite, and is proportional to the spectral density at zero frequency. The stationary I(d) process with d > 0, however, is so strongly positively autocorrelated that the sum of the autocovariances diverges, and the spectral density at zero frequency is infinite, which explains why it is often called a "long memory" process in the literature. The I(d) process with d < 0 is negatively autocorrelated and the spectral density at zero frequency is zero. Furthermore, for d > 0, the spectral density at zero frequency of the differenced series is zero; and for d < 0, the spectral density at zero frequency of the partial sum process is infinite. Therefore if d is in the range of (-0.5,0.5), neither first differencing nor cumulation is a relevant transformation, since the central limit theorem does not hold for the transformed observations or for the original data. 39 3. Consistency of DF Tests against I(d) Alternatives The DF unit root tests are based on the following regression equation: (3) y. = 11+ B[t-(T+1)/2] + py... + e., t =1,2,...,T In equation (3), yo can be any random variable with an arbitrary distribution including fixed constant, but must be independent of the sample size T. The error process {ct} can be iid, or stationary ARMA, or any short memory process which satisfies the conditions (A1) and (A2) with d = 0. The DF test statistics are formulated using the OLS estimate of p (coefficient-type test) and its usual t-statistic (t-statistic-type test). The null hypothesis of a unit root is p = 1. There are three kinds of tests based on different assumptions about level and trend in the stationary alternative. If we restrict t1 = 0 and B = 0 in equation (3), which presumes that the alternative hypothesis is that y. is a zero mean short memory process, T(p-1) and T are the statistics for the test. If we restrict B = 0 only, so the alternative is that y. is a short memory process with constant but possibly non-zero mean, T(p,-1) and in are used. When we do not restrict the parameter values for u and [5, so that in the alternative we allow a non-zero level and a deterministic linear trend, the test statistics are T(p,-1) and it. Here (1,t),, p, are the OLS estimates of p , and it, in, T, are the usual t- statistics for the hypothesis p = l, in the respective regression equations. Under the null hypothesis that p = 1, the OLS estimates of p are consistent and of order 0,,(1'1). Thus to obtain an asymptotic distribution we normalize them by T, and consider T(p -1). The limiting distributions are not normal but rather are functions of 40 Brownian motion. The t-statistics are 0,,(1) but do not follow the t-distribution; again the limiting distributions are functions of Brownian motion. Sowell (1990) considered the asymptotic distribution of the DF statistics under the assumption that the data are generated by equation (3) with p = l and the errors e. follow an I(d.) process with d. e (-0.5,0.5). (For .3 and i , it is also assumed that p. = B = 0, while for [3,. and in it is assumed that B = 0.) Thus the observed data y. are I(d) with d 6 (05,15). Note that, to avoid confirsion, we let (1‘ represent the value of the fractional differencing parameter of the e. process, and d = 1+d‘ represent the value of the fractional differencing parameter of the y. process. 
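Before turning to the asymptotics, it may help to fix how the statistics just described are computed. The sketch below runs the levels regression of equation (3) for the three cases (no correction, mean correction, mean and trend correction) and returns the coefficient statistic T(rho_hat - 1) and the corresponding t-statistic. It is a simplified illustration rather than the dissertation's simulation code; the function name df_statistics, the treatment of the initial observation, and the degrees-of-freedom choice are illustrative assumptions.

import numpy as np

def df_statistics(y, case="none"):
    # Levels regression y_t = mu + beta*[t-(T+1)/2] + rho*y_{t-1} + e_t,
    # estimated by OLS with the first observation serving as y_0.
    # case: "none" restricts mu = beta = 0; "mean" restricts beta = 0;
    #       "trend" leaves both mu and beta free.
    y = np.asarray(y, dtype=float)
    T = len(y) - 1
    y_lag, y_cur = y[:-1], y[1:]
    cols = [y_lag]
    if case in ("mean", "trend"):
        cols.append(np.ones(T))
    if case == "trend":
        t = np.arange(1, T + 1, dtype=float)
        cols.append(t - (T + 1) / 2.0)
    X = np.column_stack(cols)
    b, *_ = np.linalg.lstsq(X, y_cur, rcond=None)
    resid = y_cur - X @ b
    s2 = resid @ resid / (T - X.shape[1])
    se_rho = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])
    rho_hat = b[0]
    return T * (rho_hat - 1.0), (rho_hat - 1.0) / se_rho

In each case the resulting statistics would be compared with the critical values tabulated in Fuller (1976), as is done in the simulations reported below.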
Sowell's proofs apply only to the $\hat\rho$ and $\hat\tau$ tests, but the same results should hold for the tests based on $\hat\rho_\mu$ and $\hat\tau_\mu$ or $\hat\rho_\tau$ and $\hat\tau_\tau$. Consider first the case that $e_t$ is I($d^*$) with $d^* \in (-0.5,0)$, so that $y_t$ is I(d) with $d \in (0.5,1)$. Then $\hat\rho$ is a consistent estimate of $\rho = 1$, but $\hat\rho - 1$ is $O_p(T^{-(1+2d^*)})$, so that $T(\hat\rho - 1)$ diverges. The asymptotic distribution of $T^{1+2d^*}(\hat\rho - 1)$ has non-positive support, so $T(\hat\rho - 1)$ diverges to $-\infty$. Furthermore $\hat\tau \to -\infty$. Thus the DF tests are consistent against $d \in (0.5,1)$.

Next consider the case that $e_t$ is I($d^*$) with $d^* \in (0,0.5)$, so that $y_t$ is I(d) with $d \in (1,1.5)$. Then $\hat\tau \to \infty$, so that the DF t-statistic based tests are consistent against $d$ in this range. However, $\hat\rho - 1$ is $O_p(T^{-1})$ and $T(\hat\rho - 1)$ converges to a limiting distribution that is a function of fractional Brownian motion. Thus the limiting distribution, but not the normalization, differs from the case that $d^* = 0$, and the DF coefficient based tests are not consistent against $d \in (1,1.5)$.

In this chapter we consider the Dickey-Fuller tests for the case that $d \in (-0.5,0.5)$. This corresponds to testing the null hypothesis of a unit root against the alternative of a stationary long-memory process, and this is an empirically relevant case. We now state our assumptions formally.

ASSUMPTION 1.
1. The data generating process is of the form:
(4) $y_t = \mu + \beta[t-(T+1)/2] + e_t$, with $(1-L)^d e_t = u_t$ for $d \in (-0.5,0.5)$.
2. The $u_t$ are iid $N(0,\sigma_u^2)$.
3. $\mu = \beta = 0$.

We note the following features of these assumptions. First, in this representation $y_t$ and $e_t$ are fractionally integrated of the same order. Second, the assumption of normality in 2 is stronger than necessary. Third, for tests based on $\hat\rho_\mu$ and $\hat\tau_\mu$ we can allow $\mu \neq 0$, while for tests based on $\hat\rho_\tau$ and $\hat\tau_\tau$ we can allow both $\mu \neq 0$ and $\beta \neq 0$.

Notice that, according to Theorem 1 of Sowell (1990), under these assumptions $\{e_t\}$ satisfies conditions (A1) and (A2). In (A1), $\sigma^2 = \sigma_u^2\,\Gamma(1-2d)/[(1+2d)\Gamma(1+d)\Gamma(1-d)]$, where $\sigma_u^2$ is the variance of $u_t$ and $\Gamma(\cdot)$ is the gamma function.

LEMMA 1: Let $\tilde t = t-(T+1)/2$ and $\tilde y_t = y_t - \bar y$, where $\bar y = \sum_{t=1}^T y_t/T$. Then, under Assumption 1, $\sum_{t=1}^T \tilde t\,\tilde y_t = O_p(T^{d+3/2})$.

Proof: $\sum_{t=1}^T \tilde t\,\tilde y_t = \sum_{t=1}^T t\,y_t - [(T+1)/2]\sum_{t=1}^T y_t$. Then
$T^{-(d+3/2)}\sum_{t=1}^T t\,y_t = T^{-(d+1/2)}\sum_{t=1}^T (t/T)\,y_t \Rightarrow \sigma\int_0^1 r\,dW_d(r) = \sigma\Big[W_d(1) - \int_0^1 W_d(r)\,dr\Big]$,
where the last equality follows from Jonas (1983, p. 29). Also $T^{-(d+1/2)}\sum_{t=1}^T y_t \Rightarrow \sigma W_d(1)$, so that $[(T+1)/2]\sum_{t=1}^T y_t$ is also $O_p(T^{d+3/2})$. The result follows immediately. ∎

THEOREM 1: Under Assumption 1, $\hat\rho$, $\hat\rho_\mu$, $\hat\rho_\tau \to_p \rho_1$, the first-order autocorrelation of $\{y_t\}$, and $\hat\beta_\tau \to_p \beta = 0$.

Proof: $\hat\rho$ and $\hat\rho_\mu$ are the first-order sample autocorrelations using the known mean of zero and the sample mean, respectively, and are known to be consistent estimates of the population first-order autocorrelation [see, for example, Hosking (1984) and Brockwell and Davis (1991)]. So we only need to prove the consistency of $\hat\beta_\tau$ and $\hat\rho_\tau$. First consider $\hat\beta_\tau$. After some algebra,

$\hat\beta_\tau = \dfrac{\sum_t \tilde y_{t-1}^2 \sum_t \tilde t\,\tilde y_t - \sum_t \tilde t\,\tilde y_{t-1}\sum_t \tilde y_t\tilde y_{t-1}}{\sum_t \tilde t^2 \sum_t \tilde y_{t-1}^2 - \big(\sum_t \tilde t\,\tilde y_{t-1}\big)^2} = \dfrac{O_p(T)\,O_p(T^{d+3/2}) - O_p(T^{d+3/2})\,O_p(T)}{O(T^3)\,O_p(T) - O_p(T^{d+3/2})^2}$,

since the consistency of the sample autocovariances provided by Hosking (1984) implies that $\sum_t \tilde y_t\tilde y_{t-1}$ and $\sum_t \tilde y_{t-1}^2$ are $O_p(T)$; by Lemma 1, $\sum_t \tilde t\,\tilde y_t$ and $\sum_t \tilde t\,\tilde y_{t-1}$ are $O_p(T^{d+3/2})$; and $\sum_t \tilde t^2$ is $O(T^3)$. Finally, using the facts that $O_p(T^\alpha)O_p(T^\beta) = O_p(T^{\alpha+\beta})$ and $O_p(T^\alpha) + O_p(T^\beta) = O_p(T^{\gamma})$ with $\gamma = \max(\alpha,\beta)$, for any real numbers $\alpha$ and $\beta$:
)2 ' Then T3 2,11 —-> 1/12, T—‘Z‘y.§'.-., T421934 —”—-> 7., 70 respectively, and T2255}, , T4259“, —"—) 0 since thy, and 259,. are 0,,(T‘H3’2) by Lemma 1. Therefore ,3 ..(1/12>(7.>_ " Tammy“ ' Note that even though Theorem 1 tells us that the OLS estimates B, B“, and B, are consistent for the one-period population autocorrelation, they are not guaranteed to have asymptotic normal distributions. From Hosking (1984) it is known that 15 and Ba are f- consistent and asymptotically normal for d < 0.25 but not for d 2 0.25. For (1 = 0.25, the asymptotic distribution is normal, but the asymptotic variance is of order (lnT)/T instead of UT. For d > 0.25, the asymptotic variance is of order 1.41.2.1) LEMMA 2: Let denote S2, 5,2 and 5,2 be the usual error variance estimates from A the regressions that yield B, B.l and B,, respectively. Then S2 S 2 s 2 —p" 70(1-912) 5,151: 44 Proof: The proof for S2 is straightforward, as follows. ~2_ 1 .. 2 5 T_12.(yt pyt-l) - T]:i(ZY.2 462w.-. +52%” —£—> y.) - 2pm + p .zy.) = 70(1 - p12), by Theorem 1 and the consistency of the sample autocovariances. The proofs for 5,3 and 5,2 are essentially the same. I THEOREM 2: Under Assumption 1 all of the DF test statistics [T(B -1), T(Bu-l), T(Bt-l), T, T and it] —> -oo as T —> 00. ”2 Proof: Consider T(B - 1) = T(B - p.) + T(p. - 1). Clearly T(p. - 1) is O(T) and —) -oo as T -) oo. Now we want to claim the first term [T(B - p.)] in the expression is dominated by the second term [T(p. - 1)] as T -—> 00 so that the whole expression T(B - 1) —-) -oo as T ——> 00. Consider T(B - p.). If-0.5 < d < 0.25, (B - p.) is 0,,(Tm) and T(B - p.) is o,(T”2). Ifd = 0.25, W(r) - p.) —2-> a normal distribution, thus (.3 - p.) is o,,(,/(1n_T)_/T'), and T(B - p.) is 04m ). Finally if0.25 < d < 0.5, 1.1-2.1“, - p.) —"——> a non-nonnal limiting distribution, so T(B - p.) is OP(T2d). Therefore for d e (-0.5,0.5), in the expression of T(B - 1) the second term [T(p. - 1)] always dominates the first term [T(B - p.)] as T —> oo. For the other cases of the coefficient tests, T(Bu-l) and T(B,-1), the proofs are basically the same. 45 For the t-statistic-type tests T, 1:”, and 1:,first consider — i. “:11... _ P ; p1—1 = pl—l , TOO-pi) JU-pi) / , —-——-—- Thaw“ (— :Zy.-)l J r. 1 since S2 —L-> 70(1-1312) by Lemma 1, (B - 1) —p—> (pl - 1) by Theorem 1, and $2,111 —"—> y.) by the consistency of the sample autocovariances given in Hosking (1984). So 1 . —1 —‘C —P—> __p,__ < 0. Thus “Ac-too, as T '3 0°. The prooffor i: is the same as J7 ./(1 pi) " the proof for T, after replacing F3 52 and yt_1 with 13., $3 and S".-. respectively. Considering in after some algebra, ~ z 13:1 1 :‘rr . ~2 S1:2 1 ~ Ext ".172,th Yt-lz——(2th—l)2 ‘31-] J4 at}; 25.3%(239.-. )2 “TEN ._. P. -1 , since (2?pr2 is o,(T2‘*3) and J's): /[(:::2,Si._.2)-0p(1)1 2,? is 003). thus (2, ‘t'y.-. )2 MIX?) = 0p(T2‘"3)/[0(T) 0.06)] = o,(T2‘+3)/o,(T‘) = o,(TZd-‘) = 0,0) Therefore similarly to the proof for the t, r.’ - de 511’ $1»). 41 46 l . p . p1 "1 p, -I , __ Tc . = —— < 0, agaln by Theorem 1, Lemma 1 ,fi JIM-pi) ,/(1—pf) Yo and consistency of the autocovarince. So as T —> 00, it “'9 -oo. I The Theorem 2 is intuitively natural, because the value of d is one under the unit root hypothesis, and it is less than one under the I(d) alternatives of this chapter. From Theorem 2, both lower tail tests and two tail tests are consistent. 4. 
Power in Finite Samples In this section we provide some evidence on the power in finite samples of the the DF coeflicient type tests [T(B -1), T(Bn-l), T(B,-1)] and the t-statistic-type tests (‘3 , in, it) against I(d) alternatives with d e (-0.5,1.5). This evidence is based on simulations, using the normal random number generator GASDEV/RAN3 of Press, Flannery, Teukolsky and Vetterling (1989). Observations on the I(d) process {e.}, t=1,2,..., for d e [-0.5,0.5) were generated using the recursion algorithm given by Levinson (1947), Durbin (1960), and Whittle (1963). For d e [0.5,].5), the observations were generated by cumulating observations from an I(d-l) process. The observed series {y.} was generated according to the DGP (4) with p. = 0 and B = 0, so that y. E e. and the parameter "d" is the degree of fractional integration of the observed series y.. Diebold and Rudebusch (1991a) performed a similar though less extensive set of simulations. They generated I(d) series using the Cholesky decomposition of the error covariance matrix. Our results agree quite closely with their results (Tablel, p. 158) for those parameter values that are common to both experiments. an 111: 01'] 0"" for 47 Tables 3-1 and 3-2 give the powers of 5% two tailed tests against alternatives with d = 0.4, ..., 1.499. The critical value of the tests were taken from Fuller (1976) for T=50, 100, 250, 500. The results are based on 5,000 replications except for d = 0.4, 0.45, 0.499, 1.4, 1,45 and 1.499 where the results were based on 10,000 replications. We did simulations for d = 0.0, 0.1, 0.2 and 0.3, in which the power of the tests is so close to one that we did not report these cases in the tables. Note that we consider only positive values of (1, including the case where d 2 0.5, since we are primarily interested in positively autocorrelated series. There are several important results in Tables 3-1 and 3-2. First, with d constant, the power of the tests increases. This is certainly not surprising for those tests that are known to be consistent. (Recall that all of the tests are consistent against (1 < 1, while only the t-statistic based tests are consistent against d e (1 0,15); furthermore, for in and T, consistency against (1 6 (10,15) has been conjectured but not formally proved.) In some cases power grows rather slowly as T increases. See in particular the B and [3,, tests for d > 1. Second, the power functions of all of the tests are generally monotonic, so that power grows as d diverges from unity. An interesting and possibly important exception is that the power functions of the B and ‘3 tests are discontinuous from the left at d = 0.5; see the low powers of these tests against (1 = 0.499 for all sample sizes. This discontinuity does not occur for the [3,, ‘73,, B, or ‘1, tests. A similar discontinuity was found in Chapter 9 2 for the tests of the stationarity hypothesis, for statistics not involving correction for mean or trend (and in the absence of mean or trend). The power of the B and 511 tests 48 also falls as d increases to 1.499, so that it is natural to suspect a discontinuity of the power function from the left at d = 1.5. However, we did not consider values of d 2 1.5 so we cannot confirm such a discontinuity. In the case of the discontinuity at d = 0.5, we should note that the asymptotic distributions of the statistics for d < 0.5, derived in this chapter, are naturally different from the asymptotic distributions for d > 0.5, derived by Sowell. 
Also the asymptotic distributions for d = 0.5 are unknown. From this perspective a discontinuity of the power filnction at d = 0.5 is not surprising. What is surprising is that it occurs for some but not all of the tests. Third, it is worth stressing that the power of unit root tests against stationary long memory processes [(1 e (-0.5,0.5)] is quite high, except for the B test with d very close to 0.5. Previous papers, such as Diebold and Rudebusch (1991a), have tended to stress the low power of unit root tests against fractionally integrated alternatives, but this is because they have not focused on d in the stationary range. It is true that power is not high against d in the range (0.5,1.0), especially for (1 close to unity, and it is even lower against (1 in the range (1 0,15). However, power against stationag. long-memory processes is quite good. Fourth, the power of coefficient-based tests is quite similar to the power of the corresponding t-statistic based tests for d < 1.0. However, the t-statistic based tests are generally more powerful for 1.0 < d < 1.5. Finally, we can compare the power of the tests that do not make mean or trend corrections (B and i) to those that make a mean correction ([3,, and in) or to those that make both mean and trend corrections (B, and ’c,). The tests that do not make mean or trend corrections are generally less powerfill than those that do, for d < 1.0, and this is perhaps surprising given that no mean or trend is present. We might suppose that the 49 flexibility to allow for mean or trend would cost power, but it does not. The same pattern is true for the coefficient-based tests for d > 1. However, for the t-statistic based tests for d > 1, the pattern is reversed, and the I test is more powerful than the ‘13., or 12', tests. There is no apparent explanation for these interesting results. We also did some experiments to compare the power of Dickey-Fuller tests against I(d) alternatives to their power against stationary short-memory alternatives. Specifically, we consider power against AR(1) alternatives. In each case, we considered 5% two tailed tests. We consider AR(1) coefficients p = 0.8, 0.9, 0.95 and 0.98. We consider I(d) processes with values of d that imply the same one-period correlation as these values of p; that is, we choose d = 0.8/1.8 (= 0.444), 0.9/1.9 (= 0.474), 0.95/1.95 (= 0.487), and 098/198 (= 0.495). The results are given in Tables 3-3 and 3-4, based on simulations with 10,000 replications. Comparing parameter values that imply equal one-period autocorrelations (e. g., p = 0.8 versus d = 0.8/1.8), the power of all tests against the I(d) process is almost always higher than the power of the same test against the corresponding AR(1) process. These differences in power are often substantial. The few exceptions that we find to this general rule are not substantial, and occur when power is high. The higher power of the tests against long-memory alternatives than against short- memory alternatives is perhaps surprising. Although we have picked values of p and d that equate the one-period autocorrelation, the I(d) processes are much more persistent, and their high—order autocorrelations are much larger than the corresponding high-order autocorrelations for the AR(1) processes. In terms of the pattern of autocorrelations exhibited over moderate to long periods, an I(d) process with d = 0.444 is much more 50 similar to a unit root process than is an AR(1) process with p = 0.8, for example. 
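The point about persistence can be made concrete with a small calculation. For the I(d) process the lag-one autocorrelation is rho_1 = d/(1-d), so choosing d = p/(1+p) matches an AR(1) with parameter p at lag one; the higher-order autocorrelations then diverge sharply, with hyperbolic decay for the I(d) process against geometric decay for the AR(1). The helper functions below are written for this comparison only.

def rho_ar1(p, k):
    # AR(1) autocorrelation at lag k: geometric decay
    return p ** k

def rho_id(d, k):
    # I(d) autocorrelation at lag k via the recursion rho_k = rho_{k-1}(k-1+d)/(k-d)
    r = 1.0
    for j in range(1, k + 1):
        r *= (j - 1.0 + d) / (j - d)
    return r

p = 0.8
d = p / (1.0 + p)        # = 0.444..., so both processes have rho_1 = 0.8
for k in (1, 5, 20, 100):
    print(k, round(rho_ar1(p, k), 4), round(rho_id(d, k), 4))

At lag 100 the AR(1) autocorrelation is essentially zero, while the I(d) autocorrelation is still roughly one half, which is the sense in which the I(d) process with d = 0.444 looks far more like a unit root process over long horizons.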
Why unit root tests should be more powerful against the I(d) process with d = 0.444 than against the AR(1) process with p = 0.8 is certainly not clear. It may simply reflect the fact that unit root tests, at least in the forms we consider them (with no corrections for autocorrelation), basically rely on the one-period autocorrelation. With corrections for autocorrelation, especially with data-driven choices of lag lengths, these results might well reverse. For example, if we considered the augmented Dickey-Fuller test with a data- driven rule for choosing the number of augmentations, the higher persistence of the I(d) process would likely lead to a larger number of augmentations than would occur for the corresponding AR(1) process. Since more augmentations lead to lower power, the power of the augmented test against the I(d) process would quite possibly be lower than against the AR(1) process with equal one-period autocorrelation. This is an interesting topic for further research. 5. Conclusion In this chapter we show that the DF unit root tests can be used to distinguish an 1(1) process from a stationary I(d) process. We prove the consistency of the tests against I(d) alternatives for d e (-0.5,0.5), and the finite sample performance of the tests is investigated in a Monte Carlo simulation. The BF tests are quite powerful against stationary I(d) alternatives, even in moderate sized samples. They are less powerful against I(d) alternatives with d > 0.5, as has also been shown by Diebold and Rudebusch (1991a). 51 We usually found apparent continuity in the power filnction between the tests against stationary I(d) alternatives and the tests against nonstationary I(d) alternatives. However we also found somewhat strange discountinuities in the power function for some tests when the value of d approaches 0.5 from the lefi or 1.5 from the left. These discontinuities were related to the treatment of unknown mean and deterministic trend, in ways that are not at present understandable. We also compared the power of the DF tests against stationary I(d) alternatives to the power against stationary AR(1) processes, picking the values of d and of the autoregressive parameter p so as to imply the same one-period autocorrelation. A surprising result is that the DF tests usually had higher power against the I(d) process than against the corresponding AR(1) process. We conjecture that this result might be reversed by considering DF tests with autocorrelation corrections, such as the augmented DF tests or the Phillips-Perron corrected versions of the tests. In fact, the asymptotic and finite sample properties of augmented and Phillips- Perron corrected DF tests in the presence of I(d) data are a very important and natural topic for filrther research. In a recent unpublished paper, Hassler and Wolters (1993) argue that the augmented DF test is inconsistent against I(d) alternatives if the number of augmentations grows with sample size, and they support their argument with some limited simulations. These simulations show much higher powers for the Phillips-Perron corrected tests than for the augmented DF tests, and this is true for both stationary and nonstationary1(d) alternatives. However, they do not present rigorous asymptotics for either type of test, nor do they consider data-driven choices of the lag length in either type of correction. 
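To make the conjecture about augmentation concrete, the sketch below sets up the augmented DF regression with k lagged differences and one simple data-driven rule (BIC) for choosing k. It is offered only as an illustration of what a data-driven choice of the number of augmentations might look like, not as an implementation studied in this dissertation; the helper names adf_tstat and adf_bic are hypothetical, and for simplicity the effective sample is allowed to vary with k.

import numpy as np

def adf_tstat(y, k):
    # t-statistic on the level y_{t-1} in the augmented DF regression
    # Delta y_t = mu + phi*y_{t-1} + sum_{j=1}^{k} c_j * Delta y_{t-j} + e_t.
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    T = len(dy)
    X = np.column_stack(
        [np.ones(T - k), y[k:-1]] + [dy[k - j:T - j] for j in range(1, k + 1)])
    yy = dy[k:]
    b, *_ = np.linalg.lstsq(X, yy, rcond=None)
    e = yy - X @ b
    s2 = e @ e / (len(yy) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return b[1] / se, len(yy), e

def adf_bic(y, kmax=8):
    # One simple data-driven rule: pick k by BIC over 0,...,kmax.
    best = None
    for k in range(kmax + 1):
        t, n, e = adf_tstat(y, k)
        bic = n * np.log(e @ e / n) + (k + 2) * np.log(n)
        if best is None or bic < best[0]:
            best = (bic, k, t)
    return best[1], best[2]    # chosen lag length and the ADF t-statistic

Under the mechanism described above, a rule of this kind would tend to select more augmentations for a persistent I(d) series than for an AR(1) series with the same lag-one autocorrelation, which is the channel through which the power ranking might reverse.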
52 TABLE 3-1 POWER OF COEFFICIENT TYPE DF UNIT ROOT TESTS AGAINST I(d) ALTERNATIVES TESTS T(B-1) .87 T(Bu-l) .98 T(B,-1) .94 T(B-1) .99 T(p,-1)1.0 T(B,-l)1.0 T(B-1) 1.0 T(p,-1)1.0 T(B,-1)1.0 T(B-1) 1.0 T(B,-1)1.0 T(B,-1)1.0 0.4 0.4 O l .63 . .95 . .88 . .86 1.0 1.0 .99 1.0 1.0 1.0 1.0 1.0 .13 1.0 .99 .23 1.0 1.0 .32 1.0 1.0 .85 1.0 1.0 .98 1.0 1.0 1.0 1.0 1.0 .71 .92 .91 .90 1.0 1.0 .98 1.0 1.0 .31 .41 .34 .16 .08 .84 .51 .98 .72 .23 1.0 .79 .26 VALUE OF (1 T=50 .16 .08 .19 .08 T=100 .47 .25 .10 .66 .32 .11 .64 .33 .11 T=250 .69 .40 .14 .92 .55 .94 .60 .19 .17 T=500 .18 0.7 0.8 0.9 1.0 .05 .05 . .05 .05 .05 .05 .05 .05 .05 .05 .06 .05 .08 .09 .10 .10 .11 .13 .10 .12 .17 .15 .17 .20 .17 .21 .32 .21 .24 .39 .22 .25 .33 .25 .30 .50 .27 .32 .60 .28 .30 .47 .31 .33 .62 .31 .34 .71 .28 .29 .51 .31 .32 .66 .32 .32 .74 .08 .07 .42 .10 .08 .56 .13 .09 .69 .13 .10 .75 TESTS (4) fl) d) (Q) (Q) d) (I) fl) (Q) d) r!) d) 53 TABLE 3-2 POWER OF T-STATISTIC TYPE DF UNIT ROOT TESTS AGAINST I(d) ALTERNATIVES .88 .95 . .83 . .90 .99 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 .88 1.0 1.0 .99 1.0 1.0 1.0 1.0 1.0 .14 .99 .99 .24 1.0 1.0 .33 1.0 1.0 .85 .99 .99 .98 1.0 1.0 1.0 1.0 1.0 .71 .87 .88 .90 1.0 1.0 .98 1.0 1.0 VALUE OF (1 0.7 0.8 0.9 T=50 .31 .16 .08 . .30 .13 .06 . .29 .13 .06 . T=100 .47 .25 .10 .55 .25 .08 .58 .27 .09 T=250 .69 .40 .14 .86 .46 .13 .91 .54 .16 T=500 .83 .51 .17 .97 .64 .18 .99 .74 .22 .05 .05 .05 .06 .05 .05 .05 .05 .05 .14 .11 .09 .17 .13 .11 .20 .15 .14 .33 .22 .18 .39 .29 .25 .47 .34 .30 .54 .37 .28 .62 .47 .40 .68 .53 .46 .74 .51 .38 .80 .62 .49 .83 .68 .58 .83 .58 .42 .88 .68 .54 .89 .75 .63 .97 .61 .36 .98 .69 .46 .98 .78 .59 .99 .83 .67 54 TABLE 3-3 POWER OF COEFFICIENT TYPE DF TESTS AGAINST STATIONARY AR(1) ALTERNATIVES AND AGAINST STATIONARY I(d) ALTERNATIVES IAR.1 lax) TESTS T T(tS-l) Tun-1) T(B-1) T(IS-I) Tux-1) T(B-1) p=08 d=0wr8 50 .58 .30 .13 .67 .95 .89 100 .99 .87 .57 .89 1.0 1.0 250 1.0 1,0 1.0 1.0 1.0 1.0 p=09 d=0Wl9 50. .17 .10 .05 .45 .92 .84 100 .57 .29 .13 .68 1.0 1.0 250 L0 .97 .78 .91 LO LO p = 0.95 d = 0.95/1.95 50 .06 .05 .04 .30 .90 .82 100 .16 .09 .05 .49 1.0 1.0 250 .75 .44 .21 .73 1.0 1.0 p = 0.98 d = 0.98/1.98 50 .02 .04 .05 .19 .89 .81 100 .04 .05 .04 .30 1.0 .99 250 .16 .09 .06 .50 1.0 1.0 55 TABLE 3-4 POWER OF T-STATISTIC TYPE DF TESTS AGAINST STATIONARY AR(1) ALTERNATIVES AND AGAINST STATIONARY I(d) ALTERNATIVES AR 1 I(d) TESTS T r I, t, ‘C 1,, T, p=08 d=0.8/1.8 50 .61 .20 .10 .68 .90 .84 100 .99 .74 .48 .90 1.0 1.0 250 1.0 1.0 1.0 1.0 1.0 1.0 p=09 d=0.9/1.9 50 .22 .07 .05 .46 .85 .78 100 .61 .20 .10 .69 1.0 .99 250 1.0 .91 .69 .92 1.0 1.0 p = 0.95 d = 0.95/1.95 50 .09 .04 .04 .31 .83 .76 100 .21 .06 .05 .50 .99 .99 250 .78 .31 .16 .74 1.0 1.0 p = 0.98 d = 0.98/1.98 50 .05 .04 .05 .20 .81 .75 100 .08 .04 .04 .31 .99 99 250 .21 .07 .05 .51 1.0 1.0 CHAPTER 4 FINITE SAMPLE PERFORMANCE OF THE MINIMUM DISTANCE ESTIMATOR IN THE F RACTIONALLY-INTEGRATED MODEL 56 57 1. Introduction In this chapter we will consider the finite sample properties of several estimators of the differencing parameter in the autoregressive fractional integrated moving average (ARFIMA) process of Granger (1980), Granger and Joyeux (1980) and Hosking (1981). A time series {y.} is said to be an autoregressive fractionally integrated moving average process of order p,d,q or ARFIMA(p,d,q) if (1) (ML) (1 - L)“ (y. - u) = 9018. where L is the lag operator, (1 - L)‘I is defined by the binomial series; (2) (1 - L)“ = §(§]<—L)‘. 
<1>(L) is a polynomial in L of order p containing the autoregressive parameters, O(L) is a polynomial in L of order q containing the moving average parameters, (1 is the differencing parameter, [I is the mean of the process, and e. is a white noise process. Furthermore all the roots of <1>(L) and O(L) lie outside of the unit circle, and <1>(L) and O(L) contain no common roots. When p=q=0, the ARFIMA(p,d,q) process becomes a fractionally integrated process of order (1, or I(d) process. In this model, the differencing parameter, (1, is of special interest, because the long run properties of the process only depend only on the value of (1, while the AR and MA parameters capture the short run dynamics. The value of d can be any real number, but most of literature focuses on d in the range between -1/2 and 1/2. The series is stationary for d < 1/2 and is invertible for d > -1/2, and it is common to assume that the series has 58 been differenced or cumulated sufficiently that d is in this range. For 0 < d < 1/2, the series is so strongly positively autocorrelated that the sum of the autocorrelations diverges, which is why this kind of process is called a “long-memory process” in the literature. If -1/2 < d < O the series is so strongly negatively autocorrelated that the sum of autocorrelations goes to zero in the limit. So as long as d is not an integer value, the usual ARIMA models are not suitable for these kinds of series. In the recent literature, basically two types of estimates are proposed for the fractionally integrated model. The first type is a two step procedure in which the differencing parameter d is estimated consistently in the first step, and the other parameters of the model are estimated in the second step using the consistent estimate of the differencing parameter. The best known example of this kind of estimator is Geweke and Porter-Hudak (1983). They proposed a least squares estimation method for the differencing parameter in the first stage, followed by usual methods for ARIMA models applied to the series filtered by (1 - L)". These procedures are computationally simple, but they are not efficient asymptotically, and their finite-sample properties are poor in the presence of significant short-run dynamics. In the simulation study of Agiakloglou, Newbold, and Wohar (1992) it is shown that the Geweke and Porter-Hudak estimate of the differencing parameter has a severe bias in finite samples. The second type of procedures for estimating the parameters in the long memory model are the methods in which all the parameters are estimated simultaneously, except sometimes the mean [.1 which can be estimated by the sample mean. The typical example for this case is the maximum likelihood estimator (MLE), called the exact MLE. Assuming normality, the log likelihood filnction is the following: 59 (3) 1nL = -T/2 ln(21t) - 1/2 1an - 1/2 (YT'E'lYT), where YT is the T X 1 vector of demeaned data series, so YT = [(yt-tt) (yz-Ll) ()TlUl'r 2'. is the covariance matrix of YT, and T is the sample size. The covariance matrix 2 depends on d and on the ARMA parameters. Often 11 is replaced by the sample mean Y. The exact MLE is intuitively appealing, but it has some shortcomings. Specifically, the calculation of the MLE is time-consuming and demanding because of the need to calculate and invert the T X T covariance matrix 2. So several approximate MLES have been proposed, which do not require the inversion of Z. 
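As a point of reference for the computational burden just described, the sketch below evaluates the exact Gaussian log likelihood of equation (3) for the pure I(d) case, building the full T x T Toeplitz covariance matrix from the I(d) autocovariances; the O(T^3) determinant and solve are exactly the steps that the approximate MLEs discussed next try to avoid. This is a minimal illustration under the assumption p = q = 0, with the illustrative name loglik_exact_Id, and it is not the code used for the estimates reported later in the chapter.

import numpy as np
from math import gamma

def loglik_exact_Id(d, sigma2_u, y, demean=True):
    # Exact Gaussian log likelihood of equation (3) for an I(d) series.
    y = np.asarray(y, dtype=float)
    T = len(y)
    x = y - y.mean() if demean else y
    # I(d) autocovariances gamma_0,...,gamma_{T-1} (Hosking 1981)
    rho = np.ones(T)
    for k in range(1, T):
        rho[k] = rho[k - 1] * (k - 1.0 + d) / (k - d)
    gam = sigma2_u * gamma(1.0 - 2.0 * d) / gamma(1.0 - d) ** 2 * rho
    idx = np.arange(T)
    Sigma = gam[np.abs(idx[:, None] - idx[None, :])]   # Toeplitz covariance matrix
    _, logdet = np.linalg.slogdet(Sigma)
    quad = x @ np.linalg.solve(Sigma, x)
    return -0.5 * (T * np.log(2.0 * np.pi) + logdet + quad)

Maximizing this function over d and sigma2_u (and over the ARMA parameters in the general case) gives the exact MLE; the approximations surveyed next replace the determinant and the quadratic form by quantities that are cheaper to compute.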
The first approximate MLE is the conditional sum of squares estimator (CSS) which was proposed by Li and McLeod (1986). It truncates the infinite sum in the definition of (1 - L)d to a finite sum, and estimate the parameters ignoring the truncated parts which are negligible when the sample sizes is large enough. The second method avoiding the inversion of the covariance matrix is to use an approximation to the sum of squares in the likelihood function using the formula suggested by Whittle (1951), based on the spectral density. These kinds of MLES are called approximate MLEs in the literature. Fox and Taqqu (1986) used the Whittle approximation on only the sum of squares YT' 2'1Y-r in Equation (3). Dahlhaus (1989) and Hauser (1992) used the Whittle approximation for |2| as well as the sum of squares in Equation (3). Several other methods have been proposed in the literature, based on different principles than MLE. They estimate all parameters at once except possibly [1. Dueker and Startz (1992) utilized the GMM principle to estimate the parameters in the long memory 60 model using the orthogonality conditions of E(y.-. e.) for i = 1, 2, 3,.... Tieslau, Schmidt and Baillie (1994) proposed a minimum distance estimator (MDE) for the I(d) process, minimizing the difference between population and sample autocorrelations. These estimators require relatively weak assumptions compared to MLE, and under some specific conditions they may be asymptotically equivalent to MLE. In this chapter we will compare the finite sample performance of the MDE and various types of MLE. In our study we focus on the differencing parameter in the I(d) model, because it is natural starting point for comparison and presumably, it gives some general idea about the performance of the different estimators in more general cases. Also in this chapter we will provide a detailed comparison of several version of MLE. Several authors investigated the finite sample performance of some types of MLE. Chung and Baillie (1994) studied the finite sample properties of the CSS estimator. Sowell (1992a) compared the exact MLE with known mean to the Fox and Taqqu approximate MLE and the Geweke and Porter-Hudak estimate. Cheung and Diebold (1994) showed that the finite sample performance of the Fox and Taqqu approximate MLE compares favorably to that of the exact MLE when the mean of the process is unknown. Hauser (1992) constructed an approximate likelihood which is similar to the Fox and Taqqu approximation, but more accurate in finite samples, and showed that MLE based on his likelihood has smaller bias and similar variance, compared to the other MLE, when the mean is unknown. The scheme of this Chapter is as follows. In the next section we discuss the MDE, while the following section discusses various MLES. Then we report the finite sample properties of these estimates of d and finally we add concluding remarks. 61 2. The MDE and the Asymptotic Properties of the Estimate The MDE in the ARFIMA model is based on the consistency of the sample autocorrelations under relatively weak assumptions on the process. The idea of the MDE in the ARFIMA(p,d,q) model is to find the value of the parameter which minimizes the distance between the true autocorrelation fiJl’lCthl’l and its sample counterpart. The following discussion is a brief summary of Tieslau, Schmidt and Baillie (1994) for the MDE, but in the general ARFIMA model. Let p. be the 1th order autocorrelation of the y. 
process of Equation (1), and Bi be the iu' order sample autocorrelation function in the usual way as T—i T (4) 9. =20. mo... —r)/Z(y. -r):. where Y is the sample mean. Obviously p. is a function of the differencing parameter (1 and the p+q AR and MA parameters. Thus we write p. as p.(0), where 0 is the p+q+1 vector of parameters to be estimated. Sowell (1992a) derived the closed form of the autocovariance filnction for the ARFIMA process in terms of the hypergeometric filnction, so it is not too difficult to calculate p.(0). Now define vectors of the first n population and the sample autocorrelations as (5) P(O)=lpt(9) 92(9)Pn(9)1'. (3:113: 132 13.1'. where n 2 p+q+1. Then the MDE estimator, 6, of 0 is the value of 0 which minimizes the criterion function, 62 (6) 8(9) = [13 - 9(9)]' W [13 - 9(9)]. where W is an n X n symmetric, positive-definite weighting matrix. The asymptotically optimal choice for W is the inverse of the covariance matrix of B. The asymptotic properties of MDE depend on the asymptotic properties of the sample autocorrelation filnction of the ARFIMA process. These were derived by Hosking (1984) and Brockwell and Davis (1991), and we summarize them in the following lemma. LEMMA 1: Let y. follow the ARFIMA(p,d,q) process of Equation (1), where the white noise process satisfies either (a) iid(o,o’) with finite 4th moment; or (b) iid N(0,02). Then (i) for d e (-1/2, 1/4), ,5 [B - p(0)] —d—* N(0, V.) under condition (a); (ii) for d = 1/4, W [B - p(0)] —d—) N(0, V2) under condition (b); (iii) for d e (1/4,1/2), T‘”‘”[B - p(0)] 4% non-nonnal distribution with zero mean and covariance matrix V3, under condition (b); where T is the sample size, V., V2, V3, are the n X n covarlance matrlces of the llmltlng dlstrlbutlons, and -—> means convergence in distribution. Specifically ij“I element of V. defined as (7) V... =2 {0... (e)+p.-.(e) —2p.(e)p.(e)} {9...- (e)+p.-. (91-29.- (610. (9)1- k=l Now consider the asymptotic properties of the MDE 6. The following theorem summarizes the consistency and asymptotic normality of the estimate. THEOREM 1: Let y. follow the ARFIMA(p,d,q) process of Equation (1), and satisfy the conditions in Lemma 1. Let 6 be MDE of 0. Then 63 (i) for d 6 (-1/2, 1/4), fl [6 - 0] —£—> N(0, C.) under condition (a); (ii) for d = 1/4, W) [6 — 0] —"—> N(0, C.) under condition (b); (iii) for d e (1/4,1/2), T“'2d’[é - 0] —d) non-normal distribution with zero mean and covariance matrix C3, under condition (b). Here T is the sample size and C., C2, C3, are the (p+q+1) X (p+q+1) covariance matrices of the limiting distributions. Specifically they are of following form: (8) C. = (D' w D)" D' w vi w D (D' w D)", i = 1,2,3, where W is the weighting matrix in Equation (6), V. is the covariance matrix of the limiting distribution of B defined in Lemma 1, and D is n X (p+q+1) derivative matrix of p(0) with respect to 0, so D(0) = a p(0)/a 0‘. Proof: Since 6 is the value at which the criterion function 8(0) is minimized, at 0 = 6 (9) a S(0)/a 0: -2D' w [p - p(0)] = 0 (10) a 2S(e)/a ea 0' = 2D' W D - 2(a ma 0) w [p - p(0)] = 2D' w D + o,(1), so the second derivative matrix is asymptotically positive definite. Taking the Taylor expansion of the first derivative of S(0) around the true value of 0, say 00, (11) a 8(0)/a 01,; = a 8(0)/a 9190 + (a 28(0)/a ea e'|9.)(é - 0.), where 0' is between 0.. and 0. So provided a 2S(0)/a 00 016. 
is nonsingular, after substituting the second derivative in Equation (11) into Equation (10), we get 64 6 - or = [9(9'1' w 9(6) + 0r(1)]">< [D(9o)' w (13 - 9(9o)l. fl (0 - e.) = [13(6) w W) + o.(1>l">< [D(90)‘ wfi (13 - P(Go)1 = 1thwa 9(9‘) l"[D(9o)' WJT (13 - 9090)] +o.( 1). Finally, since Bconverges in probability to p(00), 6 and 0' converge in probability to 00, and we get the following equation: (12) JT<é - e) = [9(61' w D(9) 1"[D(9)‘ Wins - 9(0)] +0.0). where we drop the subscript from 00 for simplicity. From Equation (12) it is clear that as long as B is consistent, and has the asymptotic normal distribution for d e (-1/2,1/4) given by Lemma 1, 6 does also, and the covariance of the limiting distribution for 6 is as given in Equation (8). For other ranges for d, after replace the normalizing factor J? with the proper one, we have the same type of result. I From Theorem 1, it is clear that the optimal weighting matrix is the inverse of the covariance matrix of the limiting distribution of B, which is either V., V2 or V3, according to the range of d. If we choose the optimal weighting matrix, the covariance of the limiting distribution for 0 is (13) C. = [D(e)' V."D(0) 1", i=1, 2, 3. A few points should be made about the implementation of MDE. First, because the criterion function S(0) is nonlinear in 0, it is generally not possible to have a closed form solution for the estimator, and we have to use numerical optimization to get the estimate. For the initial value for 0 in the numerical optimization, one possible suggestion 65 is the Geweke and Porter-Hudak (1983) estimate. However, if there are no AR and MA terms in the process, so that p=q=0 and d is the only parameter to be estimated, we can get a simple consistent estimate for d from the one-period autocorrelation. If y. follows the I(d) process and if we use only the one-period autocorrelation, the MDE of d is given by (14) d=0./(1+p.). Therefore we can use this estimate as an initial value of d for more general cases which use more than one autocorrelation. Second, to calculate S(0), we need to construct the weighting matrix first. In general it involves an infinite sum or integral. If we consider only the case for d e (-1/2,1/4), where the MDE is .Ff -consistent, the optimal weighting matrix is the inverse of the covariance matrix of the sample autocorrelations as given by Equation (7). The expression involves an infinite sum. When (1 is less than zero, there is little persistence in the autocorrelations and the infinite sum can be approximated with less than 100 terms. But if d > 0, especially d > 0.1, the infinite sum cannot be approximated very well even though we allow more than 1000 terms in Equation (7). Third, for the estimate of the asymptotic covariance of the MDE 6, if we know the closed form of the covariance matrix, we can evaluate it at 6. However even if we do not know the closed form of the covariance matrix, we can estimate it consistently through the numerical second derivatives of the criterion function. Since [a 2S(())/.') 03 0' ]/2 converges in probability to D(0)' Vi"D(0) in Equation (10) when we use the optimal 66 weighting matrix, a consistent estimate of the covariance matrix of the limiting distribution of the 6 is given by (15) C. = 2[a 2S(0)/a ea 0' 1" evaluated at 6, which can be provided by the numerical optimization procedure in most computer software. 3. 
The Exact MLE, the Approximate MLE and Their Asymptotic Properties The exact MLE for the model given by Equation (1) is the value of the parameters at which the likelihood filnction of Equation (3) is maximized. In calculating the MLE we can substitute the sample mean for the population mean it if we are only interested in the differencing parameter and p+q ARMA parameters, or we can estimate the mean [.1 together with the other parameters, in which case the MLE of u is the GLS estimate with the covariance matrix 2. Because the MLE of It is asymptotically independent of the other estimates, the choice of the estimate of [.1 does not affect the asymptotic properties of the estimates of the other parameters. An alternative form of the likelihood function for the exact MLE, which is mentioned in Yajima (1985) and formally suggested in Brockwell and Davis (1991), is numerically more convenient, since it reduce the number of calculations. It is numerically equivalent to the exact MLE likelihood function of Equation (3). It is of the form: (16) In L = -T/21n(21t) -1/2 :1n(v,2) - 1/2 [:(x, — 12,)2 /v,2 ], t=l t=1 67 where x. is the demeaned data series (so if we know the mean it is y. - it, and if we do not know the mean it is y. - Y), x, is the one step predictor x, = E[x. | x.-., x.-2, x.], t= 1, 2, 3, T, and vf is the variance of the 12,. The formula for ft, and V? are provided in Brockwell and Davis (1991, Proposition 5.2.2 in p.172). An approximate MLE based on the frequency domain can be defined as the value of the parameters at which the following filnction is minimized. (17) L. = iln[f0»,)1+ imp/mp, i=1 where 3.,- = 27tj/m, is the j‘h Fourier frequency for j = l, 2,..., m; m is the largest integer in (T - 1)/2; f0...) is the spectral density at 71,-; and 101,-) is the periodogram at 715,. An asymptotically equivalent form of the Fox and Taqqu approximate MLE is the value of the parameters which minimizes following: (18) L2 = iI(A,)/f0t,) i=1 Several authors have provided the asymptotic theory for the MLE in the long memory model, using the likelihood filnctions or objective filnctions for minimization based on (3), (16), (17) and (18) or equivalent ones. Yajima (1985) considered the exact MLE and the approximate MLE of Fox and Taqqu form based on the I(d) model. He called the second estimator a "least squares estimator" but the objective filnction of the minimization is the same as the likelihood firnction used by Fox and Taqqu (1986). For the exact MLE, he showed the J”? -consistency and asymptotic normality of the MLE d for d e (0, 1/2). For the Fox and Taqqu approximate MLE, he proved fi-consistency 68 holds only for d e (0, 1/4); for d = 1/4, ((1 - (1..) ~ Op[(1/I lnT)”2], and for d 6 (1/4, 1/2), ((1 - (1..) ~ OP(T2‘“), where do is the true value of d. These results were extended by Yajima (1988) to a regression setup in which u is replaced by a regression function z.'B, where z. are non stochastic regressors and B is the vector of coefficients. Fox and Taqqu (1986) studied the approximate MLE based on two type of long memory processes, one of which is the ARFIMA process. They proved fl -consistency and asymptotic normality of the approximate MLE for d e (0, 1/2), which appears to contradict Yajima‘s result. Dahlhaus (1989) improved the Fox and Taqqu (1986) results for the exact MLE and the approximate MLE based on the self-similar process, which is a generalization of the long memory process. He confirmed the J? 
-consistency and asymptotic normality of the two estimates for d e (0, 1/2) and proved the efficiency of the MLE. MOhring (1990) extended these results to the case that d < 0. He proved the J"? -consistency and asymptotic normality of the exact MLE and the approximate MLE for d e (-1/2,0). 4. The Sample Mean, Sample Autocovariances and Sample Autocorrelations In this section, we consider the finite sample properties of the sample mean, sample autocovariances and sample autocorrelations for the I(d) process. For the autocovariances and autocorrelations, we consider both the case in which the mean is known and the case in which it is unknown. This is of interest because the sampling properties of the MDE largely depend on those of the sample autocorrelations, and in general if the unknown mean is replaced with the sample mean the properties of the sample autocorrelations are quite different. 69 All the results in this section are based on Monte Carlo simulations with 10,000 replications. The I(d) data series are generated by the Durbin-Levinson algorithm with p=q=0, [1:0 in Equation (1), using the normal random number generator GASDEV/RAN3 of Press, Flannery, Teukolsky and Vetterlimg (1986) in FORTRAN. See Chapter 2 for details. We considered d = -.49, -.4, -.3, -.2, -.1, 0, .1, .2, .24, .25, .3, .4, .45, .49, and sample size T = 50, 100, 250. We considered the sample autocovariances and sample autocorrelations up to 5‘" order. Tables 4-1, 4-2 and 4-3 show the simulation results for the sample mean, sample autocovariances and sample autocorrelations. First consider the results for the sample mean y, as given in Table 4-1. The Table gives the mean of the sample mean, and its variance multiplied by Tu'z‘”. From Hosking ( 1984) it is known that T(m'd) (y - B) has an asymptotic distribution, so that var(')7) is asymptotically of order Tad"). Therefore 1.0-2.1) times the variance of 37 should approach a limiting value as T ——> oo. This limiting value, given by Hosking (1984), is presented in Table 4-1 where it is called the “asymptotic variance”. The asymptotic theory for the sample mean seems to be a good approximation for sample sizes 50, 100 and 250, except for the cases where d is close to -.5 or .5. Except the case for d = .49, the mean (of the sample mean) is close to the true value of zero. The variances (of the sample mean), normalized by 1.1-2.1), are also close to the asymptotic variances except the case for d = -.4 and d = -.49 where the normalized variances of the sample mean are smaller than the asymptotic variances. For example, for d = -.49, the theoretical variance in the limiting distribution is 32.195 but when T = 50, 100 and 250, the normalized finite sample variances are 3.600, 3.942 and 4.483 respectively, as given in Table 4-1. 70 We next consider the results for the sample autocovariances, which are given in Table 4-2. We begin with the zero-period autocovariance ‘yp, the population variance, for which the estimate .7. is just the sample variance. These results are given in Table 4-2-0, made up of two pages. Table 4-2-0(a) gives the mean, the normalized variance, the finite sample and the theoretical asymptotic bias, and the mean squared error (MSE) of y.. The normalized variance is the finite sample variance, multiplied by T. For (1 < .25, the variance of y. is asymptotically of order T'l, so T times the variance of .7. should approach a limiting value. 
For d 2 .25 this is not the appropriate normalization to approach an asymptotic limit, but it is used to avoid the confusion that could result from two different norrnalizations in the same table. In Table 4-2-0(a) it is assumed that y. is calculated using the sample mean, as it would be when the mean is unknown. Table 4-2- 0(b) gives the same information as Table 4-2-0(a) (mean, normalized variances, bias and MSE) for "y.. calculated using the true mean of the series. Finally, Table 4-2-1 through 4- 2-5 give the same information as Table 4-2-0, but for the autocovariances of order one through five. The results for the autocovariances at lags one through five are quite similar to those for the variance, so we will discuss only the results for ?o, as given in Table 4-2-0. The sample autocovariances using the sample mean are downward biased in general, except the zero-period autocovariance which is upward biased for d < 0, and downward biased for d > 0, while the sample autocovariances using the true mean are not biased systematically. When (1 < 0, the variances of the sample autocovariances are also quite similar whether the sample mean or the true mean is used, and the normalized variances do not change much with T. Thus it appears that, for d < 0, the finite sample behavior of the 71 sample autocovariances is similar to what would be expected from the asymptotics. However, for d > 0, and especially for (1 close to .5, things are rather different. When the mean is unknown and the sample mean is used, the sample autocovariances have a severe bias. This bias grows quickly as d approaches .5. For example, for d = .45, y.. = 3.642 but the mean of .7. is only 1.308 for T = 50, 1.456 for T = 100 and 1.635 for T = 250. The bias disappears very slowly as T grows; however, this is as predicted by the asymptotic theory in Hosking (1984) as reported in Table 4-2-0(a). When the mean of the process is known and the sample autocovariances are calculated using the true mean, however, this situation exactly reverses: for (1 close to .5, the bias is much smaller than when the mean is unknown, but the variance is much larger. For example, with d = .49 and T = 50 (and y.. = 16.36), 9.. has a mean of 1.401 and variance of 1 1533/50 = .231 when the mean of the process is unknown, and a mean of 16.415 and variance of 23,154.981/50 = 463.1 when the true mean is known. Mean square error is of comparable magnitude in the two cases (224 versus 463) but the division of mean square error into squared bias and variance is strikingly different. The results for sample autocorrelations are given Table 4-3, which is similar in format to Table 4-2. We will discuss the results for the one-period autocorrelation p., as given in Table 4-3-1, but the results for higher-order autocorrelations, given in Table 4-3- 2 through 4-3-5, are very similar. For (1 < 0 the properties of the sample autocorrelations are more or less the same whether the mean is known or unknown. The sample autocorrelations are only very slightly biased, and their variance is essentially the same whether the true mean or the sample mean is used. When d > 0, the bias is larger, especially when the sample mean is 72 used. When the mean is known, the bias of the sample autocorrelations is not very large, and it tends to disappear fairly quickly as T grows. Furthermore, the variances of the sample autocorrelations with known mean are of reasonable magnitude. 
This is strikingly different than the situation for the sample autocovariances with known mean, whose variances became very large as d approached .5. The sample autocorrelations based on the sample mean are downward biased. Especially when d is close to .5, the bias is large, and this bias largely persists as T increases from 50 to 250. For (1 > 0, the sample autocorrelations based on the sample mean usually have smaller variances than the sample autocorrelations based on the true mean. 5. The Finite Sample Properties of the MDE and MLE in the I(d) Model The main purpose of this section is to investigate the adequacy of the asymptotic theory provided in the previous sections for the I(d) process. First, we want to know how reliable the asymptotic theory for the MDE is in finite samples. Second, we want to compare the MDE to the MLE, which is asymptotically efficient. Note that a comparison of the asymptotic efficiency of the MDE and the MLE is presented in Tieslau, Schmidt and Baillie (1994). They showed numerically that for values of d 6 (-1/2, 1/4), the variance of the estimate approaches the variance of the MLE, 6/1t2, as the number of autocorrelations in the criterion function increases. So we believe that for d 6 (-1/2, 1/4), so that the MDE and MLE are fl ~consistent and have asymptotic normal distributions, those two estimates are asymptotically equivalent when we use enough autocorrelations in computing the MDE. However, we now ask whether this is approximately true in finite samples. 73 We begin our simulations of the MDE in the I(d) model with the simplest case in which we use only the first-order autocorrelation. Then (i. = B./(1+B.) is the MDE, as given above in Equation (14). Tables 4-4-1 and Table 4-4-2 show the results for these simulations based on 10,000 replications using the same data series as for Table 4-1, 4-2 and 4-3, for the cases that B. is based on the sample mean and on the true mean respectively. For (1 S 0, the bias of cl. is small, and it goes to zero quickly as T increases. The asymptotic theory is reliable in the sense that the finite sample variance of d. is close to the asymptotic variance, especially for T 2 100. For positive values of d, the bias of d. is small when the mean is known, but when the mean is unknown, the bias of d. is larger and goes to zero more slowly than it did when d < 0. Unsurprisingly, the bias becomes worse as d approaches .5. The asymptotic variance becomes a less and less accurate guide to the finite sample variances of (1., especially for d 2 .2, whether the mean is known or not. For d 2 .25, d is not fi-consistent and the asymptotic distribution is of a different form, and Tieslau, Schmidt and Baillie do not provide asymptotic variances. However, because (1. converges to d more slowly for d 2 .25, normalized variance defined as T times finite sample variance should increase with T. This does not appear to happen in Table 4-4-1 (case of mean unknown), though it does in Table 4-4-2 (case of mean known). Thus an overall summary of these results is that the asymptotic theory seems quite reliable in moderate sized samples for d < .2 but not for larger values of d. We next consider the MDE based on a larger number of autocorrelations, and compare the results to those for various forms of the MLE. The results for these 74 simulations are given in Table 4-5 for T = 50, 100 and 250, and for values of d between -.4 and .4. More specifically, in Table 4-5, MLE. 
denotes the time-domain exact MLE when the population mean is known; MLEy represents the exact MLE when the data are demeaned using the sample mean; F&T represents the F ox-Taqqu approximate MLE; and WL represents the approximate MLE based on the Whittle likelihood. For i = 1, 2, ..., 5, both MDE. and MDE; represent the MDE based on (p., , p.); that is, on the first i autocorrelations. They differ in how the weighting matrix is evaluated. Both MDE. and MDE; use the optimal weighting matrix as given in Equation (7), but WE. evaluates the weighting matrix at d = d. (the consistent estimate base on B.) whereas WE; evaluates the weighting matrix at the true value of d. This does not matter for i = 1, since (1. is the MDE without specification of the weighting matrix, but it matters for i 2 2. Clearly MDE; is not a feasible estimator in practice, but we include it to understand the extent to which any poor performance of the MDE might be due only to the use of d. in evaluating the weighting matrix. It should be noted that the form of the weighting matrix given in Equation (7) is optimal for d < .25. For (1 2 .25, and in particular for d= .3 and .4 in our simulations, this is not necessarily the optimal weighting matrix, and some other version of the MDE might be better. Also, Equation (7) contains an infinite sum and this sum converges very slowly for d > 0. For d = .2, .3 and .4, our evaluation of Equation (7) was accurate only to about 102; however, this did not seem to matter much in the simulations. The results in Table 4-5 are based on 1,000 replications, using the GAUSS random number generator. For the numerical optimizations, we used the GAUSS maximization procedure with a convergence tolerance of 10'5 for the gradient. In most cases we used 75 the Davidon, Fletcher and Powell (DFP) algorithm. In a few cases in which DFP could not find the optimum, we used the Broyden, Fletcher, Goldfard and Shanno (BFGS) algorithm and/or the Newton-Raphson algorithm provided by GAUSS. We used (1. as the starting value for all optimizations, except for a few replications of the exact MLE in which we could not find the maximum starting from (1., and so we used the true value of d or the true value of d i .05 as a starting value. Except for the exact MLE we did not have any particular problems in the numerical optimizations. However, for the exact MLE we faced a problem which is worth noting. As long as d is less than .5, we can evaluate the likelihood function - either the original one given by Equation (3) or the alternative form given by Equation (16). However, if the value of (1 equals or exceeds .5 during the search for the optimum, we cannot evaluate the likelihood function directly, because the covariance matrix 2 in Equation (3) fails to be positive definite. If the estimate tries to go above the value of .5, we can still evaluate the likelihood by differencing the data and then evaluating the likelihood based on the differenced data and the value (d-l) for the differencing parameter. This is legitimate if we assume the unobservable past observations are fixed at the mean of the series. When we did this, we found a small jump in the likelihood function around (1 = .5. Table 4-6 shows the number of irregular replications in the exact MLE - both MLE. and MLEy. In a quite large number of replications, the final estimates are not in the range of (-1/2, 1/2). This happens particularly for d = -.4 or .4 and T = 50. 
However, comparing our results for the exact MLE with similar simulations given by Sowell (1992a), Cheung and Diebold (1994), Hauser ( 1992) and Smith, Sowell and Zin (1993), 76 we cannot find any significant differences for the cases where they use the same value of d and sample size. The MDE using the weighting matrix evaluated at d. (MDE., i = 1, 2, ..., 5 in Tables 4-5) is generally biased downward. For each sample size we considered, for d < 0 the bias of the estimate is small and decreases quickly as the number of autocorrelations increases from one to five, while for d > 0 the bias of the estimate is large in general, increases as d approaches to .5, and does not decrease quickly as the number of autocorrelations increases. For a given value of d, as the sample size increases from 50 to 250, the bias of the estimate is reduced substantially. Therefore when T = 250 and the number of autocorrelations is four or five, the bias of the MDE is of reasonable size. For example in Table 4-5-3 (a), with T = 250 and five autocorrelations used (MDEs), the absolute bias of the MDE using the weighting matrix evaluated at d. is less than .01 for d < 0 and less than .025 for d > 0. The variance and the mean squared error (MSE) of the MDE using the weighting matrix evaluated at d. are given in Tables 4-5-1(b), 4-5-2(b) and 4-5-3(b) for T = 50, 100, 250, respectively. To see how well asymptotic theory works in finite samples we reported T times finite sample variance and T times finite sample MSE as “Normalized Variance” and “Normalized MSE” respectively in these tables. As we mentioned in the previous section, for d e (-1/2,1/4) the MDE is fi-consistent and has an asymptotic normal distribution, while for d e [1/4,1/2) the MDE is consistent but the convergence rate is slower than for d e (-1/2,1/4). Thus if the asymptotic distribution theory is applicable to the finite samples sizes we considered, the normalized variance or normalized 77 MSE should be stable for d e (-l/2, 1/4) and they should be increasing with T for d 6 [1/4, 1/2). Also we note that for d e (-1/2,1/4) we use the optimal weighting matrix for the criterion function, but for d e [l/4,1/2) the weighting matrix is not optimal. Therefore the variance of the MDE for d 6 (-1/2, 1/4) is not compatible to that of the WE for d e [1/4,1/2), not only because of the difference between the two limiting distributions but also because of the difference between the two weighting matrices in the criterion functions. In Tables 4-5-1(b), 4-5-2(b) and 4-5-3(b), for the variance and MSE of the MDE using the weighting matrix evaluated at (1., there seem to be two patterns according to the values of d: one is for d e (-l/2,1/4), the other is for d e [1/4,1/2), reflecting the different asymptotic distributions, for these two ranges of (1. First, for a given number of autocorrelations and sample size, the variance of the MDE using the weighting matrix evaluated at (1. is decreasing as (1 increases fi'om -.4 to .2. For a given sample size and value of d, as the number of autocorrelations increases, the variance of the estimate decreases rapidly for d < 0, and tends to be stabilized or increases a little for d > 0. The MSE has a similar pattern to the variance. In the theoretical variance of the limiting distribution for the MDE as given by Tieslau, Schmidt and Baillie (1994) for d 6 (-1/2, 1/4) a similar pattern was found. 
A similar pattern appears in the theoretical variance of the limiting distribution of the MDE as given by Tieslau, Schmidt and Baillie (1994) for d ∈ (-1/2, 1/4): as the value of d increases from -.5 to .25 the asymptotic variance decreases, and as the number of autocorrelations increases the asymptotic variance decreases quickly for d < 0 but only slowly for d > 0.

Comparing the magnitude of the normalized finite sample variance or MSE of the estimate to the asymptotic variance, in all cases the normalized finite sample variance or MSE is slightly bigger than the asymptotic variance. However, as the sample size increases, the difference between the normalized variance or MSE and the asymptotic variance decreases, as expected. For example, when the number of autocorrelations is five and d = -.4, -.2, 0 and .2, the theoretical variance in the limiting distribution is 1.137, .892, .683 and .676, respectively; the normalized variance of MDE5 for T = 250 is 1.192, .953, .771 and .733, respectively; and the normalized MSE of MDE5 for T = 250 is 1.191, .953, .784 and .833, respectively.

Second, for d = .3 or .4, the normalized variance or MSE of the MDE using the weighting matrix evaluated at d̂₁ is generally smaller than that of the same estimator for d ≤ .2. For a given sample size, as the number of autocorrelations increases, the variance of the estimate increases for d = .3 and generally decreases for d = .4; the MSE of the estimate generally decreases for both d = .3 and .4 as the number of autocorrelations increases. For a given number of autocorrelations, as the sample size increases from 50 to 250, the normalized variance (finite sample variance × T) of the estimate increases for both d = .3 and .4, except for MDE1. But the normalized MSE (finite sample MSE × T) of the estimate does not change much as the sample size increases. If the asymptotics are relevant for these sample sizes, T times the finite sample variance should increase as T increases.

The MDE using the weighting matrix evaluated at the true value of d (MDE_i°, i = 1, 2, ..., 5 in Tables 4-5) is also biased downward, and the bias increases as d increases. In contrast to the MDE using the weighting matrix evaluated at d̂₁, the bias, variance and MSE of the MDE using the weighting matrix evaluated at the true value of d do not generally decrease as the number of autocorrelations increases. Comparing the MDE using the weighting matrix evaluated at d̂₁ (MDE_i) to the MDE using the weighting matrix evaluated at the true value of d (MDE_i°), it is surprising that the MDE using the weighting matrix evaluated at d̂₁ is better in terms of bias and variance in most of the cases we considered. However, as the sample size increases from 50 to 250, the difference between the two estimates decreases, as we would expect, and for T = 250 it makes little difference whether the weighting matrix is evaluated at d̂₁ or at the true value of d.

We next consider the properties of the various exact and approximate MLEs in our experiments. With the exception of WL (the MLE based on the Whittle likelihood), the MLEs are all biased downward. The absolute bias decreases as T increases, as would be expected. MLE_μ (the exact MLE with μ known) has smaller absolute bias than MLE_ȳ (the exact MLE using ȳ), F&T (the Fox and Taqqu approximate MLE) or WL (the MLE based on the Whittle likelihood). MLE_μ clearly has the smallest variance, though its variance is not much smaller than that of MLE_ȳ. In terms of MSE, MLE_μ is clearly best, and WL is generally best among the estimators that do not assume knowledge of μ. It is worth noting that the exact MLEs (MLE_μ
and MLE_ȳ) are biased downward as d approaches .5, even though the range of d is not restricted in our numerical maximization. Thus the arguments of Smith, Sowell and Zin (1993) for the source of this bias (slow convergence of the sample mean, and restriction of d to the range d < .5 in maximization) are not supported by our results. Even for T = 250, the normalized variances of the MLEs are not very close to the asymptotic variance of 6/π², except in the case d = .4. The convergence to the asymptotic distribution is obviously fairly slow.

We next compare the properties of the MDE to the various MLEs. We will consider only MDE5, the MDE using five moments and the weighting matrix evaluated at d̂₁, which is generally the best of the MDEs in our experiments. In terms of absolute bias, MDE5 is generally worse than WL, and sometimes better and sometimes worse than MLE_μ, but better than any of the other MLEs. The variance of MDE5 is larger than the variance of the MLEs for d < 0, but it is generally smaller than the variance of any of the MLEs except MLE_μ for d > 0, and worse than WL for d < 0. As a general statement, the MDE is dominated by the exact MLE based on the true value of μ (MLE_μ). However, it compares favorably with the exact and approximate MLEs based on ȳ. Since convergence to the asymptotic distribution is slow for all of these estimators, further simulations with T > 250 are really needed to say more about the comparisons of these estimators.

6. Concluding remarks

In this chapter we discussed the asymptotic theory for the MDE in a general ARFIMA setup, and we also surveyed the asymptotic theory for the exact MLE and approximate MLEs. To see the finite sample behavior of the MDE and MLE, we performed simulations for the MDE and MLE, as well as for the sample mean, sample autocovariances and sample autocorrelations. All of these simulations were in the context of the simple I(d) model. In our simulations for the autocovariances and the autocorrelations, we found strange behavior of the sample autocovariances and of the sample autocorrelations using the sample mean when the value of d is close to .5. In our simulations for the MLEs and MDE, we found that the MDE is comparable to the MLEs if we use more than two or three autocorrelations in the criterion function of the MDE.

Our results could profitably be extended in at least two ways. First, the largest sample size that we considered is only T = 250. For series with a strong degree of persistence, convergence to asymptotic results is slow, and larger sample sizes may be relevant. Second, many of the estimates have a substantial and persistent finite sample bias, which makes it hard to compare the finite sample and asymptotic results. It would be desirable to develop higher-order asymptotic approximations which would provide asymptotic expressions for the bias as well as for the variance of the estimates. This is an important topic for further research.
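Since all of the simulations in this chapter are in the context of the simple I(d) model, the following self-contained sketch shows one exact way to draw a Gaussian I(d) sample: multiply the Cholesky factor of the Toeplitz autocovariance matrix by i.i.d. standard normal draws. This is offered only as an illustration in Python; it is not necessarily the data-generating scheme used for the tables (which were produced with the GAUSS random number generator), the innovation variance is set to one, and the function names are hypothetical.

import numpy as np
from math import gamma
from scipy.linalg import toeplitz, cholesky

def simulate_id(d, T, mu=0.0, seed=0):
    """Draw one length-T Gaussian ARFIMA(0,d,0) path with mean mu and unit
    innovation variance, |d| < 1/2, via the Cholesky factor of the exact
    Toeplitz autocovariance matrix."""
    rho = np.ones(T)
    for k in range(1, T):
        rho[k] = rho[k - 1] * (k - 1.0 + d) / (k - d)
    gamma0 = gamma(1.0 - 2.0 * d) / gamma(1.0 - d) ** 2
    L = cholesky(toeplitz(gamma0 * rho), lower=True)
    rng = np.random.default_rng(seed)
    return mu + L @ rng.standard_normal(T)

# Example: normalized variance of the sample mean for d = .4, T = 250,
# from 1,000 replications (compare the corresponding entry of Table 4-1).
means = np.array([simulate_id(0.4, 250, seed=s).mean() for s in range(1000)])
print(means.var() * 250 ** (1.0 - 2.0 * 0.4))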
TABLE 4-1
THE SAMPLE MEAN OF THE I(d) PROCESS AND ITS NORMALIZED VARIANCE

            T=50                  T=100                 T=250
   d      Mean   Normalized     Mean   Normalized     Mean   Normalized    Asymptotic
                 Variance              Variance              Variance      Variance
 -.49     -.001     3.600       .000      3.942       .000      4.483        32.195
 -.40      .000     2.459       .000      2.568       .000      2.719         3.525
 -.30      .000     1.766       .000      1.790       .000      1.806         1.918
 -.20      .000     1.357       .001      1.363       .000      1.366         1.383
 -.10     -.001     1.135      -.001      1.137       .000      1.140         1.129
  .00     -.001      .999       .000      1.012       .002      1.010         1.000
  .10      .005      .946      -.002       .961       .000       .938          .954
  .20      .004     1.002       .002      1.000       .000       .991          .995
  .24     -.006     1.065      -.005      1.024      -.002      1.048         1.047
  .25     -.004     1.088      -.001      1.069       .003      1.078         1.064
  .30     -.007     1.185       .000      1.212       .002      1.209         1.190
  .40      .000     1.966      -.008      1.939      -.015      1.919         1.930
  .45      .003     3.509      -.004      3.492      -.014      3.419         3.498
  .49      .027    16.267      -.032     15.874      -.042     16.275        16.213

Note: "Normalized variance" denotes (finite sample variance of Ȳ) × T^(1-2d). "Asymptotic variance" denotes the theoretical variance in the limiting distribution of the sample mean, based on Hosking (1984).
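For an I(d) process with unit innovation variance, the limit reported in the "Asymptotic variance" column has the standard closed form Γ(1-2d)/[(1+2d)Γ(1+d)Γ(1-d)], which reduces to 1 at d = 0. The short Python snippet below is an illustrative check of that expression at a few of the tabulated values of d; it is not taken from the dissertation.

from math import gamma

def asy_var_sample_mean(d):
    """Limiting value of T**(1-2d) * Var(sample mean) for an I(d) process
    with unit innovation variance."""
    return gamma(1.0 - 2.0 * d) / ((1.0 + 2.0 * d) * gamma(1.0 + d) * gamma(1.0 - d))

for d in (-0.4, -0.2, 0.0, 0.2, 0.4):
    print(f"d = {d:4.1f}: {asy_var_sample_mean(d):6.3f}")  # 3.525, 1.383, 1.000, .995, 1.930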
[Tables 4-2 through 4-5 appear here in the original. Tables 4-2 through 4-4 report the simulated mean, normalized variance and MSE of the sample autocovariances and sample autocorrelations of the I(d) process for T = 50, 100 and 250, and Tables 4-5-1 through 4-5-3, panels (a) and (b), report the bias, variance, normalized variance, MSE and normalized MSE of the MDE and of the exact and approximate MLEs of d. The scanned pages are too badly garbled to reproduce the numerical entries.]
TABLE 4-6
IRREGULAR REPLICATIONS IN THE EXACT MLE FOR THE I(d) MODEL

               MLE_μ                        MLE_ȳ
        T=50     T=100    T=250      T=50     T=100    T=250
   d    lt  ge   lt  ge   lt  ge     lt  ge   lt  ge   lt  ge
  -.4  249   0  183   0   51   0    341   0  233   0   62   0
  -.3   81   0   26   0    2   0    154   0   44   0    3   0
  -.2   30   0    7   0    0   0     63   0   10   0    0   0
  -.1    8   0    0   0    0   0     27   0    1   0    0   0
   .0    1   0    0   0    0   0      6   0    0   0    0   0
   .1    0   0    0   0    0   0      2   0    0   0    0   0
   .2    0   3    0   1    0   0      0   3    0   0    0   0
   .3    0  14    0   7    0   0      0  16    0   6    0   0
   .4    0  81    0  66    0  14      0 117    0  70    0  13

Note: The numbers in the "lt" columns show the number of replications in which the estimate is less than -.5; the numbers in the "ge" columns show the number of replications in which the estimate is greater than or equal to .5. The total number of replications is 1,000 for each parameter value (d, T pair).

CHAPTER 5

CONCLUSION

This dissertation considered a stationary long-memory process for economic time series.
In a long-memory process, the autocorrelations of the process are so persistent that the sum of the autocorrelations does not converge to a finite non-zero constant. In the literature it is shown that a fractional value of the differencing parameter in an ARIMA(p,d,q) process implies a long-memory process for some range of the differencing parameter. To distinguish this process from the standard ARIMA(p,d,q) series, the long-memory ARIMA process is called the autoregressive fractionally integrated moving average process of order p, d, q, or ARFIMA(p,d,q). The simplest form of the ARFIMA(p,d,q) process is the ARFIMA(0,d,0) or I(d) process. If d ∈ (-1/2, 1/2) it is stationary and invertible. For 0 < d < 1/2 the autocorrelations of the I(d) process are positive for all lags, and they decrease so slowly that the sum of the autocorrelations is infinite in the limit, while for -1/2 < d < 0 all autocorrelations are negative, and the sum of the autocorrelations goes to zero in the limit. Therefore the spectral density at zero frequency is infinite for d > 0 and is zero for d < 0.

In the dissertation we considered the behavior of a stationarity test and a unit root test when the series is a stationary I(d) process. We found that the KPSS stationarity test is consistent against stationary I(d) alternatives with d ∈ (-1/2, 1/2). However, we found using simulations that to distinguish a stationary autocorrelated I(0) process, such as an AR(1) process with coefficient close to unity, from a stationary I(d) process with d ∈ (-1/2, 1/2), the sample size must be very large. We also found that the power of the KPSS test against a stationary I(d) process is comparable to the power of the modified rescaled range test, which is also a test of stationarity against an I(d) alternative.

We considered the Dickey-Fuller unit root tests, and showed that they are also consistent against a stationary I(d) alternative with d ∈ (-1/2, 1/2). We can use either the coefficient-type test or the t-statistic-type test to distinguish an I(1) process from a stationary I(d) process. Our simulation study showed that the powers of the tests are high even in relatively small samples. However, this might not be true if we used test statistics which allow for autocorrelation in the error process, such as augmented Dickey-Fuller tests or the Phillips-Perron versions of the Dickey-Fuller tests.

In the simulations for the KPSS test and the Dickey-Fuller tests against I(d) alternatives, we chose values of d that allowed for nonstationary cases as well as stationary cases. In the KPSS case we chose values of d from the range [0, 1], and in the Dickey-Fuller case we chose values of d from the range (0, 1.5). Note that if 0.5 ≤ d ≤ 1.5 the I(d) process is nonstationary; the consistency of the KPSS test against an I(d) process with d in this range is not proved, and the consistency of the Dickey-Fuller tests against an I(d) process with d in this range is not guaranteed for all of the tests. The power function of some tests is seen to be continuous over the whole range of d we considered, while for other tests it is discontinuous from the left at d = 1/2. We conjecture that these phenomena are caused by the peculiar behavior of the autocorrelation function of the I(d) process when d is close to 1/2.

Our consistency results for stationarity and unit root tests were shown for I(d) alternatives.
We believe that consistency of the stationarity and unit root tests would hold against general stationary ARFIMA alternatives, because the fractional functional limit theorem holds for the general ARFIMA process and this theorem is a main building block for the asymptotic distribution theory for the test statistics. But the finite sample behavior of the tests against ARFIMA alternatives might be substantially different than against I(d) alternatives. The usefulness of stationarity tests and unit root tests to identify a general long-memory process (including a possibly nonstationary fractionally integrated process) is a quite interesting and challenging topic for further research.

We also compared the finite sample properties of different estimates of the differencing parameter in the I(d) model. In particular, we compared the minimum distance estimate of d to various forms of the MLE of d. We found that the minimum distance estimate of d compares favorably with the MLE when the mean of the process is not known, even though for d ≥ 1/4 the minimum distance estimate is slow to converge compared to the MLE. In addition, we confirmed previous findings that the approximate MLE based on the Whittle likelihood function is better than the exact MLE in terms of MSE and bias when the mean is unknown.

Finally, we note that even though the estimates of d we considered are consistent, they are biased in finite samples. The bias is usually negative. It is quite persistent as the sample size increases, and is a serious practical problem even for fairly large sample sizes such as T = 500. A distribution theory that would explain the size of the bias in the I(d) model or the more general ARFIMA model is another important area for further research.

LIST OF REFERENCES

Agiakloglou, C., P. Newbold and M. Wohar (1992), "Bias in an Estimator of the Fractional Difference Parameter", Journal of Time Series Analysis, 14, 235-246.

Andrews, D.W.K. (1991), "Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimation", Econometrica, 59, 817-858.

Baillie, R.T., C.F. Chung and M. Tieslau (1992), "The Long Memory and Variability of Inflation: A Reappraisal of the Friedman Hypothesis", unpublished manuscript.

Baillie, R.T. and T. Bollerslev (1993), "The Long Memory of the Forward Premium", unpublished manuscript.

Baillie, R.T., T. Bollerslev and Hans-Ole Mikkelsen (1993), "Fractionally Integrated Generalized Autoregressive Conditional Heteroskedasticity", unpublished manuscript.

Baillie, R.T., C.F. Chung and M. Tieslau (1992), "Analyzing Industrialized Countries' Inflation by the Fractionally Integrated ARFIMA-GARCH Model", unpublished manuscript.

Beveridge, S. and C. Nelson (1981), "A New Approach to Decomposition of Economic Time Series into Permanent and Transitory Components with Particular Attention to Measurement of the Business Cycle", Journal of Monetary Economics, 7, 151-174.

Box, G.E.P. and G.M. Jenkins (1976), Time Series Analysis: Forecasting and Control, Revised Edition, Holden-Day, San Francisco.

Brockwell, P.J. and R.A. Davis (1991), Time Series: Theory and Methods, Second Edition, Springer-Verlag, New York.

Campbell, J.Y. and N.G. Mankiw (1987), "Are Output Fluctuations Transitory?", Quarterly Journal of Economics, 102, 857-880.

Campbell, J.Y. and P. Perron (1991), "Pitfalls and Opportunities: What Macroeconomists Should Know about Unit Roots", unpublished manuscript.

Cheung, Y. and F.X.
Finally, we note that even though the estimates of d we considered are consistent, they are biased in finite samples. The bias is usually negative, it is quite persistent as the sample size increases, and it is a serious practical problem even for fairly large sample sizes such as T = 500. A distribution theory that would explain the size of the bias in the I(d) model or the more general ARFIMA model is another important area for further research.

LIST OF REFERENCES

Agiakloglou, C., P. Newbold and M. Wohar (1992), “Bias in an Estimator of the Fractional Difference Parameter”, Journal of Time Series Analysis, 14, 235-246.

Andrews, D.W.K. (1991), “Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimation”, Econometrica, 59, 817-858.

Baillie, R.T., C.F. Chung and M. Tieslau (1992), “The Long Memory and Variability of Inflation: A Reappraisal of the Friedman Hypothesis”, unpublished manuscript.

Baillie, R.T. and T. Bollerslev (1993), “The Long Memory of the Forward Premium”, unpublished manuscript.

Baillie, R.T., T. Bollerslev and H.O. Mikkelsen (1993), “Fractionally Integrated Generalized Autoregressive Conditional Heteroskedasticity”, unpublished manuscript.

Baillie, R.T., C.F. Chung and M. Tieslau (1992), “Analyzing Industrialized Countries' Inflation by the Fractionally Integrated ARFIMA-GARCH Model”, unpublished manuscript.

Beveridge, S. and C. Nelson (1981), “A New Approach to Decomposition of Economic Time Series into Permanent and Transitory Components with Particular Attention to Measurement of the Business Cycle”, Journal of Monetary Economics, 7, 151-174.

Box, G.E.P. and G.M. Jenkins (1976), Time Series Analysis: Forecasting and Control, Revised Edition, Holden-Day, San Francisco.

Brockwell, P.J. and R.A. Davis (1991), Time Series: Theory and Methods, Second Edition, Springer-Verlag, New York.

Campbell, J.Y. and N.G. Mankiw (1987), “Are Output Fluctuations Transitory?”, Quarterly Journal of Economics, 102, 857-880.

Campbell, J.Y. and P. Perron (1991), “Pitfalls and Opportunities: What Macroeconomists Should Know about Unit Roots”, unpublished manuscript.

Cheung, Y. and F.X. Diebold (1994), “On Maximum Likelihood Estimation of the Differencing Parameter of Fractionally Integrated Noise with Unknown Mean”, Journal of Econometrics, forthcoming.

Choi, S. and M. Wohar (1991), “The Performance of the GPH Estimator of the Fractional Difference Parameter: Simulation Results”, unpublished manuscript.

Chung, C.F. (1993), “Estimating a Generalized Long Memory Process”, unpublished manuscript.

Chung, C.F. and R.T. Baillie (1994), “Small Sample Bias in Conditional Sum-of-Squares Estimators of Fractionally Integrated ARMA Models”, Empirical Economics, forthcoming.

Dahlhaus, R. (1989), “Efficient Parameter Estimation for Self-Similar Processes”, The Annals of Statistics, 17, 4, 1749-1766.

Davies, R.B. and D.S. Harte (1987), “Tests for Hurst Effect”, Biometrika, 74, 1, 95-101.

Davydov, Yu.A. (1970), “The Invariance Principle for Stationary Processes”, Theory of Probability and Its Applications, 15, 3, 487-489.

Dickey, D.A. and W.A. Fuller (1979), “Distribution of the Estimators for Autoregressive Time Series with a Unit Root”, Journal of the American Statistical Association, 74, 366, 427-431.

Dickey, D.A. and W.A. Fuller (1981), “Likelihood Ratio Statistics for Autoregressive Time Series with a Unit Root”, Econometrica, 49, 4, 1057-1072.

Diebold, F.X. and G.D. Rudebusch (1989), “Long Memory and Persistence in Aggregate Output”, Journal of Monetary Economics, 24, 189-209.

Diebold, F.X. and M. Nerlove (1990), “Unit Roots in Economic Time Series: A Selective Survey”, Advances in Econometrics, 8, 3-69.

Diebold, F.X. and G.D. Rudebusch (1991a), “On the Power of Dickey-Fuller Tests against Fractional Alternatives”, Economics Letters, 35, 155-160.

Diebold, F.X. and G.D. Rudebusch (1991b), “Is Consumption Too Smooth? Long Memory and the Deaton Paradox”, The Review of Economics and Statistics, 73, 1, 1-9.

Dueker, M. and R. Startz (1992), “On Fractional Integration and Cointegration”, unpublished manuscript.

Durbin, J. (1960), “The Fitting of Time Series Models”, Review of the International Statistical Institute, 28, 3, 233-243.

Evans, G.B.A. and N.E. Savin (1981), “Testing for Unit Roots: 1”, Econometrica, 49, 3, 753-779.

Evans, G.B.A. and N.E. Savin (1984), “Testing for Unit Roots: 2”, Econometrica, 52, 5, 1241-1269.

Fox, R. and M.S. Taqqu (1986), “Large-Sample Properties of Parameter Estimates for Strongly Dependent Stationary Gaussian Time Series”, The Annals of Statistics, 14, 517-532.

Fuller, W.A. (1976), Introduction to Statistical Time Series, Wiley, New York.

Geweke, J. and S. Porter-Hudak (1983), “The Estimation and Application of Long Memory Time Series Models”, Journal of Time Series Analysis, 4, 221-238.

Granger, C.W.J. (1966), “The Typical Spectral Shape of an Economic Variable”, Econometrica, 34, 1, 150-161.

Granger, C.W.J. and P. Newbold (1974), “Spurious Regressions in Econometrics”, Journal of Econometrics, 2, 111-120.

Granger, C.W.J. (1980), “Long Memory Relationships and the Aggregation of Dynamic Models”, Journal of Econometrics, 14, 227-238.

Granger, C.W.J. and R. Joyeux (1980), “An Introduction to Long-Memory Time Series Models and Fractional Differencing”, Journal of Time Series Analysis, 1, 1, 15-29.

Hassler, U. and J. Wolters (1993), “On the Power of Unit Root Tests Against Fractional Alternatives”, Discussion Paper No. 2, Fachbereich Wirtschaftswissenschaft, Free University of Berlin.

Hauser, M.A. (1992), “Long Range Dependence in International Output Series: A Reexamination”, unpublished manuscript, University of Economics and Business Administration, Vienna, Austria.
Hosking, J.R.M. (1981), “Fractional Differencing”, Biometrika, 68, 1, 165-176.

Hosking, J.R.M. (1984), “Asymptotic Distributions of the Sample Mean, Autocovariances and Autocorrelations of Long-Memory Time Series”, Mathematics Research Center Technical Summary Report #2752, University of Wisconsin-Madison.

Jonas, A.B. (1983), “Persistent Memory Random Processes”, unpublished Ph.D. dissertation, Department of Statistics, Harvard University.

Kwiatkowski, D., P.C.B. Phillips, P. Schmidt and Y. Shin (1992), “Testing the Null Hypothesis of Stationarity against the Alternative of a Unit Root: How Sure Are We That Economic Time Series Have a Unit Root?”, Journal of Econometrics, 54, 159-178.

Levinson, N. (1947), “The Wiener RMS Criterion in Filter Design and Prediction”, Journal of Mathematics and Physics, 25, 261-278.

Li, W.K. and A.I. McLeod (1986), “Fractional Time Series Modelling”, Biometrika, 73, 217-221.

Lo, A.W. (1991), “Long-Term Memory in Stock Market Prices”, Econometrica, 59, 5, 1279-1313.

MacNeill, I. (1978), “Properties of Sequences of Partial Sums of Polynomial Regression Residuals with Applications to Tests for Change of Regression at Unknown Times”, Annals of Statistics, 6, 422-433.

Mandelbrot, B.B. and J.W. Van Ness (1968), “Fractional Brownian Motions, Fractional Noises and Applications”, SIAM Review, 10, 4, 422-437.

Mohring, R. (1990), “Parameter Estimation in Gaussian Intermediate-Memory Time Series”, unpublished manuscript, Universität Hamburg, Germany.

Nelson, C.R. and C.I. Plosser (1982), “Trends and Random Walks in Macroeconomic Time Series: Some Evidence and Implications”, Journal of Monetary Economics, 10, 139-162.

Newey, W.K. and K.D. West (1987), “A Simple, Positive Semi-Definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix”, Econometrica, 55, 703-708.

Park, J.Y. and P.C.B. Phillips (1988), “Statistical Inference in Regressions with Integrated Processes: Part I”, Econometric Theory, 4, 468-498.

Phillips, P.C.B. (1986), “Understanding Spurious Regressions in Econometrics”, Journal of Econometrics, 33, 311-340.

Phillips, P.C.B. (1987), “Time Series Regression with a Unit Root”, Econometrica, 55, 2, 277-301.

Phillips, P.C.B. and P. Perron (1988), “Testing for a Unit Root in Time Series Regression”, Biometrika, 75, 2, 335-346.

Press, W.H., B.P. Flannery, S.A. Teukolsky and W.T. Vetterling (1989), Numerical Recipes: The Art of Scientific Computing, Cambridge University Press, Cambridge.

Robinson, P.M. (1993), “Time Series with Strong Dependence”, invited paper for the 1990 World Congress of the Econometric Society, forthcoming in Advances in Econometrics: Sixth World Congress, Cambridge University Press.

Said, S.E. and D.A. Dickey (1984), “Testing for Unit Roots in Autoregressive-Moving Average Models of Unknown Order”, Biometrika, 71, 3, 599-607.

Said, S.E. and D.A. Dickey (1985), “Hypothesis Testing in ARIMA(p,1,q) Models”, Journal of the American Statistical Association, 80, 390, 369-374.

Shin, Y. and P. Schmidt (1992), “The KPSS Stationarity Test as a Unit Root Test”, Economics Letters, 38, 387-392.

Schmidt, P. and P.C.B. Phillips (1992), “LM Tests for a Unit Root in the Presence of Deterministic Trends”, Oxford Bulletin of Economics and Statistics, 54, 257-287.

Schwert, G.W. (1989), “Tests for Unit Roots: A Monte Carlo Investigation”, Journal of Business & Economic Statistics, 7, 2, 147-159.

Smith, A.A., F. Sowell and S.E. Zin (1993), “Fractional Integration with Drift: Estimation in Small Samples”, unpublished manuscript.
Sowell, F. (1990), “The Fractional Unit Root Distribution”, Econometrica, 58, 2, 495-505.

Sowell, F. (1992a), “Maximum Likelihood Estimation of Stationary Univariate Fractionally Integrated Time Series Models”, Journal of Econometrics, 53, 165-188.

Sowell, F. (1992b), “Modeling Long-Run Behavior with the Fractional ARIMA Model”, Journal of Monetary Economics, 29, 277-302.

Taqqu, M. (1975), “Weak Convergence to Fractional Brownian Motion and to the Rosenblatt Process”, Z. Wahrscheinlichkeitstheorie verw. Gebiete, 31, 287-302.

Tieslau, M.A., P. Schmidt and R.T. Baillie (1994), “A Minimum Distance Estimator for Long-Memory Processes”, unpublished manuscript, Michigan State University.

White, H. (1984), Asymptotic Theory for Econometricians, Academic Press, Inc., New York.

Whittle, P. (1951), Hypothesis Testing in Time Series Analysis, Hafner, New York.

Whittle, P. (1963), “On the Fitting of Multivariate Autoregressions, and the Approximate Canonical Factorization of a Spectral Density Matrix”, Biometrika, 50, 1 and 2, 129-134.

Wooldridge, J.M. (1993), “Estimation and Inference for Dependent Processes”, Econometrics and Economic Theory Paper No. 9201, Department of Economics, Michigan State University.

Yajima, Y. (1985), “On Estimation of Long-Memory Time Series Models”, Australian Journal of Statistics, 27, 3, 303-320.

Yajima, Y. (1988), “On Estimation of a Regression Model with Long-Memory Stationary Errors”, The Annals of Statistics, 16, 2, 791-807.