were allowed to be considered as diagonal matrices or general matrices with non-zero elements in the off-diagonals as well. Taking Σ_w and Σ_b separately, it can readily be noted that each model corresponds to those considered by Jöreskog (1967). The primary distinction is that the models, each of which represents a set of simultaneous equations, are themselves intended to be simultaneously operative. Thus, the procedure adopted for estimating the parameters in a particular application must be capable of estimating parameters at both levels simultaneously. Schmidt's method of choice for this was the method of maximum likelihood, and key to its application was his formulation of the likelihood function for hierarchical data. He began with a basic situation in which p measures were available for each of n subjects within each of m groups.
Based on the work of Tiao and Tan (1965) he reconceptualized this situation as one in which each of the m groups was composed of np observations. Thus the data were treated as m independent observations drawn from an np-dimensional multivariate normal distribution. This distribution had a mean of 1 ⊗ μ and a covariance matrix Σ_np with the following structure:

    Σ_np = 11' ⊗ Σ_b + I ⊗ Σ_w.   (14)

In this equation, Σ_w represents the covariance between observations within each group while Σ_b represents the between-groups covariance. Given the assumption of a multivariate normal distribution, Schmidt derived the following as an expression for the likelihood function:

    L = (2π)^{-mnp/2} |Σ_np|^{-m/2} exp{ -½ Σ_{i=1}^{m} (y_i - 1 ⊗ μ)' Σ_np^{-1} (y_i - 1 ⊗ μ) }.   (15)

Substituting the previously defined expression for Σ_np in terms of Σ_w and Σ_b into the above expression and simplifying, making use of several matrix algebra theorems, Schmidt obtained the following as an expression for the likelihood function in terms of Σ_w and Σ_b rather than Σ_np:

    L = (2π)^{-mnp/2} |Σ_w|^{(m-mn)/2} |Σ_w + nΣ_b|^{-m/2}
        × exp{ -½ [ mn tr{Σ_w^{-1} S_w} + m tr{(Σ_w + nΣ_b)^{-1} S_b}
        + mn tr{(Σ_w + nΣ_b)^{-1} (ȳ - μ)(ȳ - μ)'} ] }   (16)

where

    S_w = (1/mn) Σ_{i=1}^{m} Σ_{j=1}^{n} (y_ij - ȳ_i·)(y_ij - ȳ_i·)'   (17)

    S_b = (n/m) Σ_{i=1}^{m} (ȳ_i· - ȳ··)(ȳ_i· - ȳ··)'   (18)

and y_ij is a vector of length p for the j-th subject in the i-th group. The values of Σ_b and Σ_w which cause this function to attain its maximum are the maximum likelihood estimates of Σ_b and Σ_w. Since one of the properties of this form of estimation is that the same estimates for Σ_b and Σ_w will be obtained by maximizing any monotonic function of L, the usual approach is to maximize a slightly simpler function of L, namely its logarithm. Schmidt derived the following as an expression for the logarithm of the likelihood function:

    log L = -(mnp/2) log(2π) + ((m-mn)/2) log|Σ_w| - (m/2) log|Σ_w + nΣ_b|
            - ½ [ mn tr{Σ_w^{-1} S_w} + m tr{(Σ_w + nΣ_b)^{-1} S_b}
            + mn tr{(Σ_w + nΣ_b)^{-1} (ȳ - μ)(ȳ - μ)'} ].   (19)

Since the maximum likelihood estimator of μ is ȳ, the last term in the above expression is zero. The first term, being constant, can be ignored with no impact on any results. This yields the following as Schmidt's expression for the effective part of the log likelihood function:

    log L = ((m-mn)/2) log|Σ_w| - (m/2) log|Σ_w + nΣ_b| - (mn/2) tr{Σ_w^{-1} S_w} - (m/2) tr{(Σ_w + nΣ_b)^{-1} S_b}.   (20)

When a particular parameterization of Σ_b and Σ_w is substituted into this expression, maximum likelihood estimates for the parameters can be obtained by setting the first partial derivatives of log L with respect to each parameter equal to zero and solving for the unknowns of interest. Because the approach adopted by the present author makes use of the chain rule for obtaining these derivatives, the first partial derivatives of log L with respect to Σ_b and Σ_w are necessary. Schmidt's expressions for these are set out below:

    ∂log L/∂Σ_w = { m(1-n)Σ_w^{-1} - m(Σ_w + nΣ_b)^{-1} + mn Σ_w^{-1} S_w Σ_w^{-1}
                  + m(Σ_w + nΣ_b)^{-1} S_b (Σ_w + nΣ_b)^{-1} }
                  - ½ diag{ m(1-n)Σ_w^{-1} - m(Σ_w + nΣ_b)^{-1} + mn Σ_w^{-1} S_w Σ_w^{-1}
                  + m(Σ_w + nΣ_b)^{-1} S_b (Σ_w + nΣ_b)^{-1} }   (21)

    ∂log L/∂Σ_b = { mn(Σ_w + nΣ_b)^{-1} S_b (Σ_w + nΣ_b)^{-1} - mn(Σ_w + nΣ_b)^{-1} }
                  - ½ diag{ mn(Σ_w + nΣ_b)^{-1} S_b (Σ_w + nΣ_b)^{-1} - mn(Σ_w + nΣ_b)^{-1} }.   (22)

Jöreskog's Linear Structural Equation System Model

Two of the distinctive features of Jöreskog's most recent structural equation model are, first, that the structural relationships may be expressed in terms of latent variables and, second, that such variables are allowed to be fallibly measured. This implies that the overall model is expressible as two components, a measurement model and a structural model. The following presentation of Jöreskog's formulation is based on the ideas set forth in Jöreskog and Van Thillo (1972), Jöreskog (1973) and Jöreskog (1977), with notational modifications allowing for a ready comparison of Jöreskog's model with its extension to the hierarchical data situation to be presented subsequently.
Measurement Model

    y = μ + Λη + ε   (23)
    x = ν + Γξ + ω   (24)

In this component of the overall model, the vector of y's represents the set of observed endogenous measures, which have as their expected value μ and error ε. The vector of η's stands for the latent or "true" endogenous variables, while Λ is a coefficient matrix relating η to y. Likewise x embodies the observed exogenous variables with expected value ν and error ω. The true exogenous variables are represented by ξ, which is related to the observed variables by the coefficient matrix Γ.

Structural Model

    η = Aη + Bξ + θ   (25)

The definitions for η and ξ remain as before. The vector of θ's contains the equation errors, while A and B are the structural coefficient matrices relating the true endogenous and exogenous variables to the true endogenous variables.

To fully understand the importance of each of these components we must examine the structure of the variance-covariance matrix of y and x. In connection with this effort we must define the following additional parameter matrices:

    Σ = the variance-covariance matrix of y and x, composed of Σ_y, Σ_x, and Σ_xy;
    Σ_ξ = the variance-covariance matrix of the latent exogenous variables;
    Σ_θ = the variance-covariance matrix of the errors in equations;
    Ψ_ε = the variance-covariance matrix of the measurement errors associated with y;
    Ψ_ω = the variance-covariance matrix of the measurement errors associated with x.

In addition to these definitions, several assumptions are made. The measurement errors, ε and ω, are assumed to be uncorrelated with each other and with the latent variables, η and ξ. Finally, the residual errors, θ, are uncorrelated with the true exogenous variables, ξ.

For convenience we may express the variance-covariance matrix of y and x in partitioned form:

    Σ = [ Σ_y    Σ_yx
          Σ_xy   Σ_x  ]

where Σ_yx = Σ_xy'.
Given the preceding definitions and assumptions, each of the components of Σ may be expressed in terms of the parameters discussed as follows:

    Σ_y = Λ[(I-A)^{-1}(BΣ_ξB' + Σ_θ)(I-A)^{-t}]Λ' + Ψ_ε   (26)

    Σ_x = ΓΣ_ξΓ' + Ψ_ω   (27)

    Σ_xy = ΓΣ_ξB'(I-A)^{-t}Λ'.   (28)

Estimation

If we assume that the composite vector (y', x') is distributed multivariate normal with a variance-covariance matrix as expressed above, the various parameters of the overall model may be estimated via the maximum likelihood method. The values of the parameters which maximize the effective part of the log likelihood function,

    log L = -[(N-1)/2][log|Σ| + tr(SΣ^{-1})],   (29)

are the maximum likelihood estimators. They may be found by taking the partial derivatives of the log likelihood function with respect to each of the parameters in the model, equating them to zero, and simultaneously solving the resulting equations. Since an explicit solution is obtainable only for a few restricted versions of the general model, some numerical solution must be employed in actual practice. The particular approach taken by Jöreskog and Van Thillo (1972) employs two numerical methods, the method of steepest descent and the Davidon-Fletcher-Powell method. The first approach is used to generate an approximate solution for the parameters in the neighborhood of the actual solution, while the second produces the final solution.

Chapter 4
General Structural Equation Model for Hierarchical Data

Following the schema established previously with the presentation of Jöreskog's Linear Structural Equation System Model, the structural equation model for hierarchical data is set out below. To facilitate the presentation, a notational convention has been adopted whereby a variable or parameter associated with the subjects' within-groups level stands alone, and the corresponding parameter at the between-groups level is subscripted with a lower case b.
This should serve to preserve the conceptual similarities between Jöreskog's model and this one while highlighting their differences. As with Jöreskog's model, the new model may be set forth as two related components: the measurement model and the structural model.

Measurement Model

    y = μ + Λη + Λ_b η_b + ε + ε_b   (30)
    x = ν + Γξ + Γ_b ξ_b + ω + ω_b   (31)

The μ and ν vectors are simply the expected values of y and x respectively and are conceptually the same as the corresponding terms in Jöreskog's model. The matrix Λ contains coefficients relating the latent endogenous within-groups variables η to the observed variables, y. Likewise, Λ_b serves to relate the true endogenous between-groups variables, η_b, to the observed y. The vectors ε and ε_b represent the errors of measurement associated with the within- and between-groups levels respectively. The coefficient matrices, Γ and Γ_b, and the vectors ξ, ξ_b, ω and ω_b bear similar relationships to the observed x vector.

Structural Model                             Reduced Form of Structural Model

    η = Aη + Bξ + θ            (32)          (I-A)η = Bξ + θ              (32')
    η_b = A_b η_b + B_b ξ_b + θ_b   (33)     (I-A_b)η_b = B_b ξ_b + θ_b   (33')

The first equation stipulates that the latent within-groups endogenous variables are expressible as linear functions of themselves (as determined by the coefficients in the A matrix) and the latent within-groups exogenous variables (as determined by the coefficients in the B matrix). Finally, we have the vector θ containing the errors in equations associated with this part of the structural model. The second equation is composed of parallel constructs dealing with the expression of the between-groups latent endogenous variables. As with Jöreskog's model there are a number of variance-covariance terms associated with these vector-valued variables.
They are as follows:

    Σ_ξ = the variance-covariance matrix of the latent within-groups exogenous variables, ξ;
    Σ_θ = the variance-covariance matrix of the within-groups errors in equations, θ;
    Ψ_ε = the variance-covariance matrix of the within-groups measurement error associated with the observed endogenous variables, y;
    Ψ_ω = the variance-covariance matrix of the within-groups measurement error associated with the observed exogenous variables, x;
    Σ_ξb = the variance-covariance matrix of the latent between-groups exogenous variables, ξ_b;
    Σ_θb = the variance-covariance matrix of the between-groups errors in equations, θ_b;
    Ψ_εb = the variance-covariance matrix of the between-groups measurement error associated with the observed endogenous variables;
    Ψ_ωb = the variance-covariance matrix of the between-groups measurement error associated with the observed exogenous variables.

If, as before, we assume the measurement errors to be uncorrelated with each other and with the latent variables, that the residual errors are uncorrelated with the true exogenous variables, and that all variables at one level are uncorrelated with those at another, the variance-covariance matrix of the observed variables can be expressed as a function of the parameters defined above. Let the combined vector of observed scores for an individual be represented by the vector z, where z' = (x', y'). The variance-covariance matrix among these observed variables, V(z), can then be represented as Σ_z, and we have

    Σ_z = Σ + Σ_b.   (34)

The parametric composition of Σ and Σ_b is then:

    Σ = [ ΓΣ_ξΓ' + Ψ_ω          ΓΣ_ξB'(I-A)^{-t}Λ'
          Λ(I-A)^{-1}BΣ_ξΓ'     Λ[(I-A)^{-1}(BΣ_ξB' + Σ_θ)(I-A)^{-t}]Λ' + Ψ_ε ]   (35)

    Σ_b = [ Γ_b Σ_ξb Γ_b' + Ψ_ωb          Γ_b Σ_ξb B_b'(I-A_b)^{-t}Λ_b'
            Λ_b(I-A_b)^{-1}B_b Σ_ξb Γ_b'  Λ_b[(I-A_b)^{-1}(B_b Σ_ξb B_b' + Σ_θb)(I-A_b)^{-t}]Λ_b' + Ψ_εb ]   (36)

Parameter Estimation: General Considerations

If we assume the overall vector, z, to be multivariate normally distributed, the parameters in the measurement and structural components may be estimated by use of the maximum likelihood principle. We have, from Schmidt, the effective part of the log likelihood function for the hierarchical situation:

    F = ((m-mn)/2) log|Σ| - (m/2) log|Σ + nΣ_b| - (mn/2) tr{Σ^{-1}S} - (m/2) tr{(Σ + nΣ_b)^{-1}S_b}.   (37)

After replacing Σ and Σ_b with the expressions set forth above, we need but to choose values for the parameters which maximize F. This may be accomplished by, first, taking the partial derivatives of F with respect to each of the elements in the parameter matrices. The resulting first derivatives are set equal to zero and simultaneous solutions found for the parameters. Unfortunately, these equations are even more complicated than those discussed previously; therefore, some numerical solution is called for. The first derivatives are fully set forth in Appendix A. In general, they were found by, first, taking the partial derivatives of F with respect to Σ and Σ_b, and the partial derivatives of Σ and Σ_b with respect to their respective parameter matrices. The application of the chain rule for matrix derivatives completed the process.

To carry out the numerical solution for the maximum likelihood estimates, the same procedure used by Jöreskog was employed. The method of steepest descent serves as a first stage in the estimation procedure until the estimated values for the solution approach a reasonable neighborhood of the actual solution. The Fletcher-Powell method is then used to accomplish the final maximization of the likelihood function.
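The estimation machinery just described reduces, at bottom, to repeated evaluation of F. As a minimal sketch (not the author's program, whose listing appears in Appendix E), equation (37) can be coded directly; the function name and the toy inputs in the usage below are illustrative only:

```python
import numpy as np

def hier_log_lik(Sigma, Sigma_b, S, S_b, m, n):
    """Effective part of the hierarchical log likelihood, equation (37):
    F = ((m - mn)/2) log|Sigma| - (m/2) log|Sigma + n*Sigma_b|
        - (mn/2) tr(Sigma^{-1} S) - (m/2) tr((Sigma + n*Sigma_b)^{-1} S_b)."""
    T = Sigma + n * Sigma_b
    _, logdet_w = np.linalg.slogdet(Sigma)   # log-determinants via a stable route
    _, logdet_t = np.linalg.slogdet(T)
    return ((m - m * n) / 2.0) * logdet_w - (m / 2.0) * logdet_t \
        - (m * n / 2.0) * np.trace(np.linalg.solve(Sigma, S)) \
        - (m / 2.0) * np.trace(np.linalg.solve(T, S_b))

# Toy evaluation with 2 x 2 matrices, m = 10 groups of n = 2 subjects
F = hier_log_lik(np.eye(2), np.eye(2), np.eye(2), 3 * np.eye(2), m=10, n=2)
```

A numerical optimizer would call such a routine (and its gradient) at each trial value of the parameters in Σ and Σ_b.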
Numerical Solutions for Parameter Estimates

The most satisfying approach to generating parameter estimates would be to find simple analytical expressions for the parameters using the "normal" equations arrived at by setting the first derivatives of the log likelihood function equal to zero. The complexity of these expressions, however, is such that straightforward solutions are possible only in the case of very simplified models. Instead, we must turn to the use of numerical techniques whereby parameter estimates are individually generated for each model and each set of data.

A quick perusal of any text that touches on non-linear programming (for instance, Luenberger [1965]) reveals a wealth of techniques whereby solutions may be obtained for systems of equations such as those in the present instance. The key criteria in the selection of one or more of these techniques for a particular application seem to be global convergence and rate of convergence. The first of these criteria refers to the ability of an algorithm to arrive at a "true" solution irrespective of the point at which the algorithm starts. The second has to do with the number of iterations required to arrive at a solution. While the first constitutes a necessary condition for the choice of a particular method, the second determines the efficiency of the estimation routine.

The general approach adopted by most numerical solution algorithms involves the series of steps outlined below:

1) Choose an initial value for the solution, X_0.
2) Determine the direction in which the solution is to be modified.
3) Choose as the new value for the solution, X_1, the point at which the function is minimized in the direction determined by step 2.
4) Compare f(X_0) with f(X_1) to see if a significant change has been made.
5) If nothing has appreciably changed, the solution has been found.
6) If changes have been made, return to step 2 and continue.
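The steps above can be sketched as a small driver routine; the function names and tolerance below are illustrative rather than part of any actual estimation program:

```python
def iterative_minimize(f, step, x0, rel_tol=1e-8, max_iter=500):
    """Generic form of steps 1-6: repeatedly pick a new trial solution via
    `step` (which encapsulates steps 2-3, the choice of direction and the
    line minimization) until f stops changing appreciably (steps 4-5)."""
    x, fx = x0, f(x0)
    for _ in range(max_iter):
        x_new = step(x)
        f_new = f(x_new)
        if abs(fx - f_new) <= rel_tol * max(1.0, abs(fx)):
            return x_new                     # step 5: converged
        x, fx = x_new, f_new                 # step 6: go around again
    return x

# Illustration on f(x) = (x - 3)^2, using a fixed-step gradient direction
sol = iterative_minimize(lambda x: (x - 3.0) ** 2,
                         lambda x: x - 0.25 * 2.0 * (x - 3.0),
                         x0=0.0)
```

The methods discussed next differ only in what `step` does internally.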
The primary difference which characterizes the various methods is the way in which the direction of modification is determined.

In the present instance the technique actually implemented involves a combination of two fairly widely used approaches, the method of steepest descent and the Davidon-Fletcher-Powell method. According to Jöreskog (1969) the first of these approaches offers rapid advances toward the immediate neighborhood of the solution, followed by relatively slower convergence upon the solution. The second method, on the other hand, is relatively slow in arriving at the neighborhood of the solution but fast thereafter. The algorithm employed relies upon a number of steepest descent iterations, ceasing when the change in the value of the function is less than five percent from one iteration to the next. These are then followed by the application of the Davidon-Fletcher-Powell method to arrive at a fully-converged solution. The operation of each is set forth below, along with that of Newton's method, on which Davidon-Fletcher-Powell is based.

Steepest Descent

If we designate f as the function which we wish to minimize, having continuous first partial derivatives ∂f, then the method of steepest descent directs us to take as the (k+1)-th value for our parameter estimates the following:

    X_{k+1} = X_k - α_k ∂f(X_k)   (38)

where α_k is a positive number which minimizes f(X_k - α_k ∂f(X_k)). Repeated application of this method will yield values for X which correspond to the solution sought.

Newton's Method

In its pure form, Newton's method involves a change in the iterative approach involved in the steepest descent technique whereby the direction for solution modification is found.
The net result is the use of the following expression for X_{k+1}:

    X_{k+1} = X_k - F(X_k)^{-1} ∂f(X_k)   (39)

where F(X_k) is the matrix of second derivatives of f evaluated at the point X_k. Since global convergence cannot be assured for this method, its typical operationalization is usually of the form

    X_{k+1} = X_k - α_k F(X_k)^{-1} ∂f(X_k)   (40)

which has global convergence properties. In addition, use of this method yields convergence requiring fewer cycles than does the method of steepest descent, assuming X_0 is chosen sufficiently close to the actual solution. The only drawback to applying this method is the need to constantly reevaluate F(X_k)^{-1}, a process which can be quite time consuming.

Davidon-Fletcher-Powell

This approach belongs to a class of quasi-Newton methods, all of which are characterized by the use of approximations to the inverse of the matrix of second partial derivatives. This particular method involves starting the minimization procedure with both an initial estimate for the solution, X_0, and an initial estimate of the inverse of the matrix of second derivatives, S_0. Successive approximations to the solution are found by employing the following equation:

    X_{k+1} = X_k - α_k S_k ∂f(X_k)   (41)

where, as with the method of steepest descent and Newton's modified method, α_k is a positive number which minimizes f(X_k - α_k S_k ∂f(X_k)). Successive approximations to the inverse of the matrix of second derivatives are found through the use of the following relationship:

    S_{k+1} = S_k + (p_k p_k')/(p_k' q_k) - (S_k q_k q_k' S_k)/(q_k' S_k q_k)   (42)

where p_k designates the difference between X_{k+1} and X_k, and q_k is equal to the difference between ∂f(X_{k+1}) and ∂f(X_k).

Both of the latter two methods may fail to converge to the appropriate solution given initial values of X_0 which depart too much from the actual solution. This absence of guaranteed global convergence motivates the chaining of the method of steepest descent with that of Davidon-Fletcher-Powell.
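The chained procedure can be sketched compactly; a central-difference gradient and a crude grid line search below stand in for the analytic derivatives and exact line minimization of an actual implementation, and the five-percent stopping rule for the steepest descent stage follows the text:

```python
import numpy as np

def line_search(f, x, d, alphas=np.logspace(-8, 1, 40)):
    """Crude line search: pick the positive step a minimizing f(x + a*d)."""
    vals = [f(x + a * d) for a in alphas]
    return alphas[int(np.argmin(vals))]

def num_grad(f, x, h=1e-6):
    """Central-difference gradient, standing in for analytic derivatives."""
    g = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def steepest_descent(f, x, rel_change=0.05, max_iter=100):
    """Equation (38); stop once the function value changes by less than
    five percent from one iteration to the next."""
    fx = f(x)
    for _ in range(max_iter):
        d = -num_grad(f, x)
        x = x + line_search(f, x, d) * d
        f_new = f(x)
        if abs(fx - f_new) < rel_change * abs(fx):
            break
        fx = f_new
    return x

def dfp(f, x, tol=1e-10, max_iter=200):
    """Equations (41)-(42): quasi-Newton updates of S_k, the running
    approximation to the inverse of the matrix of second derivatives."""
    S = np.eye(x.size)
    g = num_grad(f, x)
    for _ in range(max_iter):
        d = -S @ g
        x_new = x + line_search(f, x, d) * d
        g_new = num_grad(f, x_new)
        p, q = x_new - x, g_new - g
        if abs(p @ q) < tol or np.linalg.norm(g_new) < tol:
            return x_new
        S = S + np.outer(p, p) / (p @ q) \
              - (S @ np.outer(q, q) @ S) / (q @ S @ q)   # equation (42)
        x, g = x_new, g_new
    return x

# Chained use, as in the text: steepest descent first, then DFP
f = lambda v: (v[0] - 1.0) ** 2 + 10.0 * (v[1] + 2.0) ** 2
x = dfp(f, steepest_descent(f, np.array([0.0, 0.0])))
```

On this toy quadratic the minimum lies at (1, -2); the two-stage scheme reaches it even though the steepest descent stage alone is terminated early by the five-percent rule.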
Identifiability

For a particular model to be of any real use, we must be able to estimate the parameters associated with that model. For the parameters to be estimable from a particular set of data, two conditions must be met. The first is that Σ and Σ_b must be of full rank. This will be the case if enough observations are taken within each unit and if enough units are observed. One must also avoid the inclusion of variables which are linearly dependent upon other variables. In practice, the invertibility of the unrestricted maximum likelihood estimates for Σ and Σ_b guarantees that this condition is met.

The other, and far more difficult to determine, condition for the estimability of a set of parameters is that the model be identified. The identifiability of a model basically means that, for two distinct models to give rise to the same Σ and Σ_b, their parameters must be identical in all respects. In other words, the parameters of an identified model must be unique. It must be emphasized that this condition is on the model in question and has nothing to do with a particular set of data.

When dealing with regression models, the only way in which a model may be under-identified is if one or more of the predictor variables is linearly dependent upon the others. This situation is readily noted due to the fact that the X'X matrix has no unique inverse even though more observations were taken than the number of predictor variables. While being a condition easily detected, the remedy may not be so easy without considerable thought on the definitions of the predictor variables. With more complex models such as the ones addressed in this paper, determining if a specific model is identifiable may be much more difficult.
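The X'X symptom just described is easy to exhibit numerically; the data below are fabricated purely for illustration:

```python
import numpy as np

t = np.arange(10.0)
# Ten observations on three predictors, the third an exact linear
# combination of the first two: x3 = x1 + 2*x2
X = np.column_stack([t, t ** 2, t + 2 * t ** 2])

XtX = X.T @ X
rank = np.linalg.matrix_rank(XtX)   # 2, not 3: X'X is singular
```

Even with more observations (10) than predictors (3), the linear dependence leaves X'X without a unique inverse, and the regression coefficients are under-identified.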
Econometricians have addressed this problem extensively and, for a variety of models more complex than the simple regression model, have formulated some mathematical rules for identifying necessary and sufficient conditions for model identifiability (see Fisher [1970 and 1966], for instance). The work that comes closest to addressing model identifiability in a situation similar to that currently considered is represented by Geraci (1977). In this paper, he provides an algorithm by which the identifiability of a particular uni-level model might be established. Although he restricted the model considered to have no measurement error or complex factor structure, establishing the criterion for model identifiability involved the solution of a set of equations hardly less formidable than those involved in the system whose identifiability was of interest.

Jöreskog (1977) suggests as a necessary condition for the identifiability of a particular model that the number of unknown elements be less than ½(p + q)(p + q + 1). While this must be true for a unique solution to exist, it by no means guarantees that one does. Given the complexity of the current model in the face of the rather complicated necessary and sufficient condition advanced by Geraci when dealing with a much simpler model, it is no real surprise that a straightforward test for the identifiability of a particular model of the sort considered here is difficult to achieve.

Wiley (1973), in considering the identification problem in a uni-level model of the same form as the one considered here, offers a very useful suggestion. If a program were available to compute a numerical estimate of the information matrix for the parameters, and if some reasonable estimates for the parameters were inserted into the model, then model identifiability could be reasonably assumed if the information matrix was of full rank.
The benefit from adopting this approach is that the identifiability of a particular model could be reasonably assured prior to the estimation of its free parameters.

Estimating Parameters in the General Model

While there exist a variety of methods whereby estimates of the parameters of the general model might be derived, including both unweighted and generalized least squares, the most straightforward in terms of estimation, tests of fit, and producing asymptotic standard errors of the estimates is the method of maximum likelihood. Parameter estimates produced by this method are those values for the parameters which maximize the likelihood of the observed data given an assumed underlying distribution, where the likelihood of a particular set of observations is their joint probability given the parameter values.

Given an expression for the joint probability of a sample of observed values, values for the parameters may be found by first taking the derivatives of the log likelihood function with respect to the parameters themselves, equating these to zero, and finally, solving the resulting set of simultaneous equations. The key to the entire process is the formulation of the likelihood function.

Nearly all of the literature reviewed which dealt with the maximum likelihood estimation of structural equations or analysis of covariance structures addressed itself to the situation where a single sample of observations was drawn from a presumed multivariate normal distribution. Under those circumstances, the effective part of the logarithm of the likelihood function has the general form

    M = -tr(Σ^{-1}S) - log|Σ|.   (43)

Only the work carried out by Schmidt considered the situation involving a two-stage sampling process where observations were sampled from primary sampling units which themselves were sampled.
Under the assumption of doubly multivariate normally distributed observations, where variables for each individual observation are multivariate normally distributed and observations themselves are similarly distributed within groups, Schmidt derived the following expression for the logarithm of the effective part of the likelihood function:

    log L = ((m-mn)/2) log|Σ| - (m/2) log|Σ + nΣ_b| - (mn/2) tr{Σ^{-1}S} - (m/2) tr{(Σ + nΣ_b)^{-1}S_b}   (44)

where m is the number of groups, n the number of secondary units within each of the m primary units, S and S_b are the within- and between-groups observed variance-covariance matrices respectively, and Σ and Σ_b are the underlying variance-covariance matrices for the within- and between-groups levels respectively. This expression served as the basis for the estimation procedure implemented here.

The next step in producing the maximum likelihood estimates for the parameters in the general model calls for expressions for the first derivatives of the log likelihood function with respect to each of the parameter matrices in the general model. The simplest way to arrive at such expressions is through the use of the chain rule for derivatives involving matrices. According to McDonald and Swaminathan (undated), if the elements of a matrix Z are functions of the elements of a matrix Y, which are themselves functions of another matrix X, the partial derivative of Z with respect to X can be expressed as:

    ∂Z/∂X = (∂Y/∂X)(∂Z/∂Y).

This is also true if Z is some scalar function of X. Since the log likelihood function is a function of but two matrices, Σ and Σ_b, each of which is a function of a subset of the general parameters of the model, the partial derivative of the log likelihood function with respect to any one parameter matrix, say C, can be conveniently expressed as follows:

    ∂log L/∂C = (∂Σ/∂C)(∂log L/∂Σ), if C is a parameter at the within-groups level;

    ∂log L/∂C = (∂Σ_b/∂C)(∂log L/∂Σ_b), if C is a parameter at the between-groups level.
Schmidt has derived expressions for the rightmost partial derivatives of each equation. These expressions are as follows:

    ∂log L/∂Σ = { m(1-n)Σ^{-1} - m(Σ + nΣ_b)^{-1} + mn Σ^{-1}SΣ^{-1}
                + m(Σ + nΣ_b)^{-1}S_b(Σ + nΣ_b)^{-1} }
                - ½ diag{ m(1-n)Σ^{-1} - m(Σ + nΣ_b)^{-1} + mn Σ^{-1}SΣ^{-1}
                + m(Σ + nΣ_b)^{-1}S_b(Σ + nΣ_b)^{-1} }   (45)

    ∂log L/∂Σ_b = { mn(Σ + nΣ_b)^{-1}S_b(Σ + nΣ_b)^{-1} - mn(Σ + nΣ_b)^{-1} }
                  - ½ diag{ mn(Σ + nΣ_b)^{-1}S_b(Σ + nΣ_b)^{-1} - mn(Σ + nΣ_b)^{-1} }   (46)

where S corresponds to the pooled within-groups observed variance-covariance matrix and S_b corresponds to n times the between-groups variance-covariance matrix.

The remaining components necessary to complete the expressions for the partial derivatives of the log likelihood function with respect to the parameter matrices in the general model are the partial derivatives of Σ and Σ_b with respect to the parameter matrices involved in each. These expressions have been derived through the use of matrix calculus and are set forth fully in Appendix A.

Standard Error Estimation and Test of Fit of the Estimated Model

Just as the maximum likelihood principle leads directly to the estimation of model parameters through the use of first order partial derivatives, the second order partial derivatives assist in the estimation of standard errors for the parameters involved. It has been shown that the negative inverse of the expected value of the matrix of second partial derivatives is equal to the asymptotic covariance matrix of the maximum likelihood estimators. This may be simply expressed as follows:

    V(θ̂) = -{ E[ ∂²log L / ∂θ_i ∂θ_j ] }^{-1}.   (47)

The square roots of the diagonal values of this matrix yield estimates for the standard errors of their associated parameters.
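Equation (47) translates directly into code once the expected second-derivative matrix has been assembled; the two-parameter matrix below is wholly hypothetical, chosen only to show the mechanics:

```python
import numpy as np

# Hypothetical expected matrix of second partials, E[d2 logL / dtheta_i dtheta_j]
info = np.array([[-40.0,  -4.0],
                 [ -4.0, -25.0]])

acov = -np.linalg.inv(info)   # asymptotic covariance of the estimators, eq. (47)
se = np.sqrt(np.diag(acov))   # standard errors of the two parameters
```

An approximate 95 percent confidence interval for the i-th parameter is then its estimate plus or minus 1.96 times se[i].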
Since maximum likelihood estimators are, for sufficiently large numbers of observations, normally distributed, the estimated standard errors may be used to establish confidence intervals about the parameters estimated and thus provide statistical tests of the parameter values against any particular null hypothesis of interest. While the tests would not be strictly independent of one another for a given model and set of data, they will yield useful information toward the refinement of a particular model.

Schmidt (1969) has shown that, for the hierarchical situation of interest in the current investigation, the expected value of the matrix of second partial derivatives of the log likelihood function is a function of both the first and second derivatives. Furthermore, it can be expressed by the following formula for the general ij-th element:

    E[ ∂²log L / ∂θ_i ∂θ_j ] = -(m/2) tr{ (Σ + nΣ_b)^{-1} [∂(Σ + nΣ_b)/∂θ_i] (Σ + nΣ_b)^{-1} [∂(Σ + nΣ_b)/∂θ_j] }
                               + (m(1-n)/2) tr{ Σ^{-1} (∂Σ/∂θ_i) Σ^{-1} (∂Σ/∂θ_j) }.   (48)

Expressions for the first and second derivatives of Σ and Σ_b with respect to individual elements of the parameter matrices in the general model have been derived and are set forth in Appendices B and C. So as to conserve space, only the nonredundant expressions are shown. Since the order in which the partial derivatives are taken has no effect upon their value, only the unique formulae are shown.

When all of the various elements involved in a given parameterization have been calculated and assembled in matrix form, the negative inverse of this matrix estimates the covariance matrix of the estimators. The documentation for a computer program implementing this procedure is included in Appendix D and its listing is included in Appendix E.

While the foregoing provides a means whereby confidence intervals may be established about individual parameter estimates in a particular model, it does not actually enable the testing of a model as a complete entity.
To this end, we must turn to yet another construct derived from the maximum likelihood principle, the likelihood ratio.

To generate parameter estimates for a particular model and set of data of interest, we choose as our estimates those values of the parameters in the model which yield the largest value of the likelihood function given the data at hand. Under some other parameterization of the model, both the estimates and the value of the likelihood function would likely differ when employing the same set of data. In particular, we can posit as our alternative model one which is least restrictive in that it will yield the largest value for the likelihood function. This model simply asserts that the data arise from a multivariate normal distribution with parameters Σ and Σ_b, with no further parameterization placed on these two matrices. Thus our estimates of Σ and Σ_b are unrestricted by any constraints placed upon them, and the value of the likelihood function so obtained can be referred to as the maximum value of the likelihood function over the unrestricted parameter space.

Under any other particular parameterization of Σ and Σ_b furnished by our model, the maximum of the likelihood function can be referred to as the maximum over the restricted parameter space and cannot be larger than the maximum over the unrestricted space. This implies that the ratio of the former to the latter has as its maximum value 1 and, since neither term can take on anything other than non-negative values, as its minimum 0. This quantity is known as the likelihood ratio and provides a means whereby the fit of a particular model (i.e., the ability of a model to replicate Σ and Σ_b) may be evaluated.

Since the likelihood ratio is based upon two random variables (the maximum of the likelihood function over the restricted and unrestricted parameter spaces), it too is a random variable.
In addition, for large sample sizes, the negative value of twice the logarithm of the likelihood ratio has approximately the chi-square distribution. Thus, we have as our test statistic the following:

    χ² = -2 log(L_restricted / L_unrestricted)     (49)

This is readily seen to be equivalent to the more convenient expression:

    χ² = 2(log L_unrestricted - log L_restricted)     (50)

The degrees of freedom associated with it are equal to the difference between the number of unique elements in Σ and Σ_b and the number of unique parameters estimated in the restricted model. Larger values of the test statistic, which lie far to the right on the reference distribution, are unlikely under the assumption that the model fits the data. Thus, likelihood ratio statistics of low probability under the assumption of model fit point to overall weaknesses in the model, the particulars of which should be addressed through inspection of the asymptotic standard errors and the discrepancies between the unrestricted and restricted estimates for Σ and Σ_b.

Chapter 5

Applications

Analysis of Artificial Data: Testing the Estimation Procedure Using a Simple Model

As a part of the work carried out by Schmidt (1969), several sets of data with a predetermined structure were generated. These data sets were then analyzed using four different parameterizations, one of which reflected the true structure of the data. As one test of the estimation routine currently implemented, one of these data sets was reanalyzed making use of the same parameterizations employed by Schmidt. The S and S_b matrices used as input to the estimation routine are displayed in Figure 1. Because the model considered by Schmidt did not explicitly allow for the presence of exogenous variables, only the portion of the current model dealing with the interrelationships among endogenous variables could be examined.
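The test statistic and its degrees of freedom can be computed directly from equations (49) and (50). The Python sketch below uses illustrative rather than actual log-likelihood values; the parameter count for the restricted model in the usage line corresponds to Model 1 of the Schmidt reanalysis (seven free parameters at each level for four observed variables), which yields the six degrees of freedom reported in Table 2.

```python
def lr_chi_square(loglik_unrestricted, loglik_restricted):
    """Equation (50): chi-square = 2(log L_unrestricted - log L_restricted)."""
    return 2.0 * (loglik_unrestricted - loglik_restricted)

def lr_degrees_of_freedom(p, n_free_restricted):
    """Unique elements of Sigma and Sigma_b (p(p+1)/2 each) minus the
    number of free parameters estimated in the restricted model."""
    return 2 * (p * (p + 1) // 2) - n_free_restricted

# Model 1 estimates 7 free parameters per level (3 diagonal elements of
# Sigma_zeta plus 4 of Psi_epsilon) for p = 4 observed variables:
df = lr_degrees_of_freedom(4, 14)   # 2*10 - 14 = 6
```

The resulting statistic is then referred to a chi-square table with the computed degrees of freedom.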
This restricted model parameterizes the within- and between-groups variance-covariance matrices as follows:

    Σ = Λ Σ_ζ Λ′ + Ψ_ε     (51)

    Σ_b = Λ_b Σ_ζb Λ_b′ + Ψ_εb     (52)

Thus, the matrices associated with exogenous variables (Γ, Γ_b, B, B_b, Σ_δ, Σ_δb, Ψ_ω, and Ψ_ωb) were omitted from the model. Additionally, the elements of Λ and Λ_b were treated as fixed rather than free: both were equated to the following design matrix:

    Λ = Λ_b = | 1.0   .5   .5 |
              | 1.0   .5  -.5 |
              | 1.0  -.5   .5 |
              | 1.0  -.5  -.5 |

[Figure 1. The S and S_b matrices from Schmidt (1969) used as input to the estimation routine.]

The resulting model was, therefore, a function of but four parameter matrices: Σ_ζ, Σ_ζb, Ψ_ε, and Ψ_εb. The various forms of these matrices for which parameters were estimated are presented in Table 1. The true model, that which actually gave rise to the data in question, is Model 1, for which Σ_ζ and Σ_ζb are diagonal matrices while the diagonal elements of Ψ_ε and Ψ_εb are heterogeneous. The results obtained from the estimation routine using the first parameterization of the model, where the Σ_ζ matrices were constrained to be diagonal and the diagonal elements of the Ψ_ε matrices were allowed to be heterogeneous, are set forth in Table 2. Corresponding estimates obtained from Schmidt's work are presented alongside those from the new estimation routine. The associated asymptotic standard errors obtained from the implementation of the procedure for estimating standard errors are also presented in the same table, as are the chi-square value and degrees of freedom associated with the model. A comparison of the estimates obtained from the current program and that developed by Schmidt reveals that the results are identical to at least two decimal places. Differences beyond this point are attributable to the accuracy of the calculations required to obtain S and S_b from Schmidt's work.
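Equations (51) and (52) can be made concrete in a few lines. The Python sketch below (a construction under the stated parameterization, not the estimation program itself) builds the model-implied within-groups matrix from the fixed design matrix and the Model 1 point estimates reported in Table 2.

```python
import numpy as np

# The fixed design matrix Lambda = Lambda_b from the text.
LAM = np.array([[1.0,  .5,  .5],
                [1.0,  .5, -.5],
                [1.0, -.5,  .5],
                [1.0, -.5, -.5]])

def implied_sigma(lam, sigma_zeta_diag, psi_eps_diag):
    """Equation (51): Sigma = Lambda Sigma_zeta Lambda' + Psi_epsilon,
    with Sigma_zeta and Psi_epsilon diagonal as in Model 1."""
    return lam @ np.diag(sigma_zeta_diag) @ lam.T + np.diag(psi_eps_diag)

# Within-groups matrix implied by the Model 1 estimates of Table 2:
sigma_w = implied_sigma(LAM, [4.875, 4.075, 6.401],
                        [6.959, 6.569, 7.129, 9.376])
```

The between-groups matrix Σ_b is built identically from Σ_ζb and Ψ_εb, so a single routine serves both levels.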
Table 1
Parameterizations Employed in Analyzing Artificial Data from Schmidt

    Model    Σ_ζ and Σ_ζb    Ψ_ε and Ψ_εb
    1        diagonal        heterogeneous
    2        diagonal        homogeneous
    3        general         heterogeneous
    4        general         homogeneous

Table 2
Parameter Estimates, Standard Errors, and Test of Fit for the Analysis of Schmidt's Data Using Model 1

    Parameter   Estimate From      Estimate From     Asymptotic
                Current Program    Schmidt's Work    Error Variance
    Σ_ζ11        4.875              4.875             .211
    Σ_ζ21        --                 --                --
    Σ_ζ22        4.075              4.075             .821
    Σ_ζ31        --                 --                --
    Σ_ζ32        --                 --                --
    Σ_ζ33        6.401              6.403             .984
    Ψ_ε11        6.959              6.957             .613
    Ψ_ε22        6.569              6.570             .632
    Ψ_ε33        7.129              7.127             .648
    Ψ_ε44        9.376              9.377             .699
    Σ_ζb11       7.013              7.014             3.353
    Σ_ζb21       --                 --                --
    Σ_ζb22       6.840              6.842             10.923
    Σ_ζb31       --                 --                --
    Σ_ζb32       --                 --                --
    Σ_ζb33       .000               .000              5.258
    Ψ_εb11       3.694              3.693             4.588
    Ψ_εb22       6.713              6.713             5.327
    Ψ_εb33       11.082             11.086            8.679
    Ψ_εb44       7.662              7.661             8.202
    χ²           17.4778            17.5
    df           6                  6

The obtained chi-square values for the test of fit of the model are identical within rounding error. Additionally, the non-zero parameter estimates all differ from zero by more than one standard error, as would be hoped for, given that the form of the estimated model corresponds to that employed in generating Schmidt's data. Parallel results with respect to parameter estimates, asymptotic standard errors, and chi-square statistics for the tests of fit of the remaining three models are contained in Tables 3 through 5. The parameter estimates obtained from the implementation of the present, more general model are nearly identical to those reported by Schmidt, as are the chi-square statistics for each model.
Since the asymptotic standard errors reported by Schmidt were obtained as a by-product of the Fletcher-Powell algorithm, and not from the evaluation of the expected value of the matrix of second derivatives, they are not reported here; however, where comparable values were computed, the standard errors were of similar magnitude. The results of these analyses offer evidence that the currently implemented estimation procedures perform accurately with models of at least the complexity of those considered earlier by Schmidt. A more comprehensive test of the accuracy of the estimation procedure required data arising from a model with a more complex structure. The next section presents results from the analysis of data with such a structure.

Table 3
Parameter Estimates, Standard Errors, and Test of Fit for the Analysis of Schmidt's Data Using Model 2

    Parameter   Estimate From      Estimate From     Asymptotic
                Current Program    Schmidt's Work    Error Variance
    Σ_ζ11        4.965              4.965             .216
    Σ_ζ21        --                 --                --
    Σ_ζ22        4.329              4.330             .835
    Σ_ζ31        --                 --                --
    Σ_ζ32        --                 --                --
    Σ_ζ33        6.430              6.433             .990
    Ψ_ε11        7.398              7.396             .139
    Ψ_ε22        7.398              7.396             .139
    Ψ_ε33        7.398              7.396             .139
    Ψ_ε44        7.398              7.396             .139
    Σ_ζb11       6.259              6.259             3.309
    Σ_ζb21       --                 --                --
    Σ_ζb22       5.833              5.834             11.431
    Σ_ζb31       --                 --                --
    Σ_ζb32       --                 --                --
    Σ_ζb33       .000               .000              6.315
    Ψ_εb11       7.717              7.717             1.880
    Ψ_εb22       7.717              7.717             1.880
    Ψ_εb33       7.717              7.717             1.880
    Ψ_εb44       7.717              7.717             1.880
    χ²           23.9121            23.9
    df           12                 12

Table 4
Parameter Estimates, Standard Errors, and Test of Fit for the Analysis of Schmidt's Data Using Model 3

    Parameter   Estimate From      Estimate From     Asymptotic
                Current Program    Schmidt's Work    Error Variance
    Σ_ζ11        4.964              4.964             .222
    Σ_ζ21        -1.541             -1.542            .216
    Σ_ζ22        4.324              4.326             .850
    Σ_ζ31        -.356              -.357             .235
    Σ_ζ32        .755               .755              .412
    Σ_ζ33        6.417              6.418             1.053
    Ψ_ε11        7.328              7.329             .908
    Ψ_ε22        7.508              7.504             .796
    Ψ_ε33        6.744              6.747             .810
    Ψ_ε44        8.020              8.019             1.323
    Σ_ζb11       6.342              6.342             3.364
    Σ_ζb21       4.084              4.083             2.644
    Σ_ζb22       6.142              6.145             11.792
    Σ_ζb31       -.997              -.997             1.913
    Σ_ζb32       -1.745             -1.747            5.410
    Σ_ζb33       .503               .504              5.579
    Ψ_εb11       4.402              4.399             8.317
    Ψ_εb22       4.567              4.565             8.550
    Ψ_εb33       11.089             11.084            10.278
    Ψ_εb44       9.741              9.738             12.960
    χ²           .2635              .26
    df           0                  0

Table 5
Parameter Estimates, Standard Errors, and Test of Fit for the Analysis of Schmidt's Data Using Model 4

    Parameter   Estimate From      Estimate From     Asymptotic
                Current Program    Schmidt's Work    Error Variance
    Σ_ζ11        4.964              4.964             .222
    Σ_ζ21        -1.533             -1.533            .169
    Σ_ζ22        4.325              4.325             .850
    Σ_ζ31        -.538              -.538             .187
    Σ_ζ32        1.029              1.029             .235
    Σ_ζ33        6.418              6.418             1.032
    Ψ_ε11        7.398              7.400             .198
    Ψ_ε22        7.398              7.400             .198
    Ψ_ε33        7.398              7.400             .198
    Ψ_ε44        7.398              7.400             .198
    Σ_ζb11       6.349              6.349             3.244
    Σ_ζb21       2.618              2.618             2.016
    Σ_ζb22       6.258              6.258             11.391
    Σ_ζb31       -.935              -.935             1.618
    Σ_ζb32       -2.119             -2.119            2.670
    Σ_ζb33       .718               .718              6.290
    Ψ_εb11       7.360              7.358             2.286
    Ψ_εb22       7.360              7.358             2.286
    Ψ_εb33       7.360              7.358             2.286
    Ψ_εb44       7.360              7.358             2.286
    χ²           7.3587             7.36
    df           6                  6

Analysis of Artificial Data: Testing the Estimation Procedure Using a Complex Model

The first step in testing the program through the analysis of artificial data was to generate an S and S_b arising from a more complex structure. The underlying principle employed was to assign arbitrary values to the parameters in the general model and, from these values, produce within- and between-groups variance-covariance matrices. At least two methods are available for carrying this out.
One method involves a two-stage process: first, generating observations from a multivariate normal distribution with the appropriate characteristics and, second, calculating the within- and between-groups variance-covariance matrices based on the artificial random observations from the first stage. This method is ideally suited to studies of the empirical distribution of the parameter estimates over repeated analyses of data with the same underlying structure using different samples of observations. Since this was not a goal of the present investigation, it was determined that such an approach would be excessively laborious and time consuming. Instead, an alternative was adopted that more readily yielded analyzable data with a known structure but did not rely on any stochastic processes. Arbitrary values were assigned to the parameters in the general model, and the resulting matrices were mathematically combined according to the model to yield artificial underlying matrices Σ and Σ_b. These were then placed in the equations for the unrestricted maximum likelihood solutions, and values for S and S_b were obtained. Analysis of such data using the correctly specified model should result in parameter estimates that exactly match the original arbitrary values.

So as to simplify the calculation of the artificial data, the structure and parameters at both levels were defined to be identical. Since the model to be estimated did not explicitly constrain the corresponding within- and between-groups parameters to be equal, no unfair advantage was accorded to the program through the use of this convention. In the absence of such specified constraints, the program must still independently estimate the parameters at both the within- and between-groups levels. The values assigned to each of the sixteen parameter matrices associated with the model are found in Figure 2.
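Both generation strategies can be sketched compactly. In the Python below (a sketch with arbitrary illustrative matrices, not the actual generating program), `simulate_two_stage` follows the stochastic route, while `deterministic_s` follows the non-stochastic route actually adopted, treating the relationships E(S) = ((n-1)/n)Σ and E(S_b) = nΣ_b + Σ from Schmidt (1969) as equalities.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_two_stage(sigma_w, sigma_b, m, n):
    """First method: draw m group means from N(0, Sigma_b), then n
    within-group deviations per group from N(0, Sigma_w), and compute
    the pooled within-groups and between-groups matrices."""
    p = sigma_w.shape[0]
    means = rng.multivariate_normal(np.zeros(p), sigma_b, size=m)
    y = means[:, None, :] + rng.multivariate_normal(
        np.zeros(p), sigma_w, size=(m, n))
    gmeans = y.mean(axis=1)
    dev_w = y - gmeans[:, None, :]
    s_w = np.einsum('gip,giq->pq', dev_w, dev_w) / (m * (n - 1))
    dev_b = gmeans - gmeans.mean(axis=0)
    s_b = n * dev_b.T @ dev_b / (m - 1)
    return s_w, s_b

def deterministic_s(sigma_w, sigma_b, n):
    """Second (adopted) method: treat the expectations of S and S_b as
    equalities, S = ((n-1)/n) Sigma and S_b = n Sigma_b + Sigma."""
    return (n - 1) / n * sigma_w, n * sigma_b + sigma_w
```

The deterministic route is exact, so a correctly specified model analyzed against its output should reproduce the generating parameter values; the stochastic route only approaches them as m and n grow.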
To additionally simplify calculating S and S_b, the number of observations within each group and the number of groups (n and m, respectively) were set to 100. The resulting values for Σ and Σ_b are set forth in Figure 3. These two matrices gave rise to the generated observed matrices S and S_b using the following formulae from Schmidt (1969):

    E(S) = [(n-1)/n] Σ     (53)

    E(S_b) = n Σ_b + Σ     (54)

For the purpose considered herein, the expectation operations can be disregarded and the relationships treated as simple equalities. The resulting values for S and S_b are presented in Figure 4. When these data were analyzed using the programs operationalizing the previously described estimation procedure, 4 steepest descent iterations took place before the stopping criterion was reached, after which 26 Fletcher-Powell iterations followed. The parameter estimates are displayed in Table 6 and duplicate the original generating values to at least three decimal places. The matrices S and S_b were duplicated with at least the same level of accuracy, yielding a value for the chi-square test of fit of 0.00 with 16 degrees of freedom.

[Figure 2. Parameter Values Used to Generate Variance-Covariance Matrices for Example II.]

[Figure 3. Values of Σ and Σ_b for Artificial Data Example II.]

[Figure 4. Values of S and S_b for Artificial Data Example II.]

Table 6
Estimated Values for Parameters Treated as Not Fixed in Example II

    Parameter   Estimate   Asymptotic        Parameter   Estimate   Asymptotic
                           Error Variance                           Error Variance
    Λ31         .500       <.001             Λb31        .500       <.001
    Λ21         -.500      <.001             Λb21        -.500      <.001
    B11         .200       <.001             Bb11        .200       <.001
    B12         .400       <.001             Bb12        .400       <.001
    B21         .300       <.001             Bb21        .300       <.001
    B22         .800       <.001             Bb22        .800       <.001
    Γ31         1.000      <.001             Γb31        1.000      <.001
    Σ_ζ11       1.000      <.001             Σ_ζb11      1.000      <.001
    Σ_ζ22       .200       <.001             Σ_ζb22      .200       <.001
    Σ_δ11       .030       <.001             Σ_δb11      .030       <.001
    Σ_δ22       .100       <.001             Σ_δb22      .100       <.001
    Ψ_ε33       .500       <.001             Ψ_εb33      .500       <.001
    Ψ_ω33       .100       <.001             Ψ_ωb33      .100       <.001

The results of the analyses carried out thus far indicated that the estimation routine provides maximum likelihood estimates for parameters in all components of the model considered here. In addition, chi-square statistics for the test of fit of the model agreed with those independently arrived at by Schmidt for the cases where his data were reanalyzed. The chi-square value resulting from estimating parameters in the correctly specified model for the new set of artificial data was, as would be expected, quite close to zero. The asymptotic standard errors, while no strictly comparable values were available, appeared to generally agree with their approximations in Schmidt's work in instances where comparisons could reasonably be made. In the analysis of the new data these values were, as expected, quite close to zero. Based on these results, it was concluded that the estimation routine performed satisfactorily and could be used in conjunction with a real set of data. It is to the results of this effort that we now turn.

Analysis of Data Drawn from the National Longitudinal Study of the High School Class of 1972

As an attempt to illustrate the applicability of the model developed herein, it was applied to a real set of data drawn from the National Longitudinal Study of the High School Class of 1972.
Sponsored by the National Center for Education Statistics, the NLS is an ongoing large-scale survey project whose primary purpose is the observation of the educational and vocational activities, plans, aspirations, and attitudes of young people after they leave high school. The Educational Testing Service began full-scale base year data collection in the spring of 1972. Data from over 18,000 seniors from a national probability sample of more than 1,000 high schools were collected. Beginning in the fall of the following year, Research Triangle Institute initiated the first of four follow-up surveys of these same subjects. As of the end of the third follow-up, over 13,000 subjects had responded to all of the instruments administered. The particular variables drawn from this data base included sex, ethnicity, father's educational level, mother's educational level, hours of English and foreign language coursework, and reading and vocabulary scores. To facilitate the analysis, sex and ethnicity were recoded to binary-valued variables. For sex, zero represented female and one represented male; for ethnicity, zero represented Black and one represented white. Within- and between-school variance-covariance matrices were obtained involving these variables. These matrices are found in Figures 5 and 6, respectively. The model to be estimated treated sex, ethnicity, father's educational level, and mother's educational level as observed exogenous variables. Hours of English and foreign language, together with reading and vocabulary scores, constituted the observed endogenous variables. Father's and mother's educational levels provided observed measures of a latent variable of socio-educational status. The hours of English and foreign language were seen as observed measures of a verbal-skill coursework variable. Likewise, reading and vocabulary scores were construed as measures of a verbal achievement trait.
The latent exogenous variables of sex, ethnicity, and socio-educational status were hypothesized to have a causal relationship with both verbal-skill coursework and verbal aptitude. Similar models were assumed to operate at both the within- and between-schools levels. The non-error-related components of the model are diagrammatically presented in Figure 7, while the general parameterizations of the components of the variance-covariance matrices are set forth in Figure 8.

[Figure 5. Triangular Elements of the Observed Within-School Variance-Covariance Matrix from NLS Data.]

[Figure 6. Triangular Elements of the Observed Between-School Variance-Covariance Matrix from NLS Data.]

These variance-covariance matrices, together with the within- and between-group sample sizes, served as the input to the estimation routine. In spite of the expectation that a solution would be readily produced, even after 500 iterations the values of the derivatives of the non-fixed parameters had not converged on the criteria for the termination of the iterative estimation procedure.
Examination of the intermediate estimates and the values of their first derivatives indicated that, while changes of considerable magnitude continued to take place at each iteration with respect to the parameter estimates, little improvement could be discerned in terms of their derivatives approaching zero.

Analysis of a Final Set of Artificial Data

As a final step in confirming that the problems experienced in estimating parameters for the model employing the NLS data were not simply due to some undetected flaw in the estimation routine, one additional set of artificial data was generated. These data were based upon a model of similar complexity to that used to produce the second artificial data set. The most pronounced difference lay in the fact that the elements of Σ_ζ and Σ_δ were no longer restricted to be relatively similar in magnitude. The values of the parameter matrices used to generate this data set are presented in Figure 9. Once again the same underlying structure prevailed at both the within- and between-groups levels.

[Figure 7. General Diagrammatic Structure of Estimated Model Using NLS Data.]

[Figure 8. General Parameterization of the Components of the Model Variance-Covariance Matrices.]
[Figure 9. Parameter Values Used to Generate Variance-Covariance Matrices for Example IV.]

The resulting values for Σ and Σ_b are displayed in Figure 10. Based on values for n and m of 100, the final S and S_b matrices were derived and are set forth in Figure 11. These data were then analyzed treating as free the parameters included in Figure 12. As with the NLS data, the estimation routine failed to provide stable parameter estimates even after more than 250 Fletcher-Powell iterations. In addition, correct values for the parameters were used as starting points for the estimation procedure. The first derivatives of the parameters evaluated at this point proved to be quite close to zero, as would be the case assuming the formulae were correct. Since the specific parameters being estimated and the form of the model itself are quite similar to those successfully treated in the analyses of the second artificial data set, the problem is not with the program itself, which has successfully implemented the steepest descent and Fletcher-Powell methods. Rather, the problem seems to lie in their application to data sets which are difficult to analyze.

Attempted Solutions for the Estimation Problem

In analyzing the second set of artificial data, for which parameters at both levels were defined to be equal, it was noted that, given identical starting values, the derivatives associated with the parameters at the within-groups level were larger than those for the comparable parameters at the between-groups level by a factor associated with the number of subjects within each group.
Since the steepest descent method alters estimated values for parameters in proportion to the size of their first derivatives, changes in estimated parameter values first took place with respect to the within-groups parameters.

[Figure 10. Values of Σ and Σ_b for Example IV.]

[Figure 11. Lower Triangular Elements of the S and S_b Matrices for Example IV.]

[Figure 12. General Model Underlying Example IV.]

Initial values for the Fletcher-Powell algorithm were, as a result, closer to the actual values for the within-groups parameters than for the between-groups parameters. Initial efforts of the Fletcher-Powell algorithm were also directed toward changes in the values of the within-groups parameters, since Fletcher-Powell departs from steepest descent only as the successive corrections to the initial estimate of the information matrix (an identity matrix) make themselves felt. As a result, the values of the between-groups parameters tended to lag behind those of their corresponding within-groups parameters. Considerable oscillation in their values continued to be observed even after the values of the within-groups parameters had substantially stabilized.
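The dynamic described here (early iterations indistinguishable from steepest descent, with curvature information accumulating only as rank-two corrections to an initial identity matrix take hold) can be seen in a minimal Davidon-Fletcher-Powell sketch. This is a generic fixed-step illustration, not the dissertation's program:

```python
import numpy as np

def dfp_minimize(grad, x0, steps=3000, lr=1e-2):
    """Minimal Davidon-Fletcher-Powell iteration.  H begins as the
    identity, so the first steps are (scaled) steepest descent; the
    rank-two updates gradually approximate the inverse Hessian."""
    x = x0.astype(float)
    H = np.eye(len(x0))
    g = grad(x)
    for _ in range(steps):
        x_new = x - lr * H @ g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy, yHy = s @ y, y @ H @ y
        if sy > 1e-12 and yHy > 1e-12:        # skip degenerate updates
            H = H + np.outer(s, s) / sy - (H @ np.outer(y, y) @ H) / yHy
        x, g = x_new, g_new
    return x
```

On a quadratic whose curvatures differ by a factor of 100, mimicking the within-/between-groups derivative gap noted above, the steeply scaled coordinate is corrected as soon as H absorbs its first update, while the weakly scaled coordinate continues to creep at the steepest-descent rate for many iterations, exactly the lag behavior described in the text.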
One set of attempts to improve the behavior of the estimation procedure involved forcing the steepest descent segment to perform additional iterations. In retrospect, it was not terribly surprising that the marginal improvements in reducing the number of Fletcher-Powell iterations for the simpler problems did not translate into a successful approach to the more difficult ones. As an alternative, the steepest descent segment was modified so as to alternate between changing the values of all parameters and concentrating solely on changes in the values of the between-groups parameters while holding the values of the within-groups parameters fixed. When this modified approach was tried with the data sets that had demonstrated convergence previously, small reductions in the number of Fletcher-Powell iterations required to arrive at solutions were observed. Once again, however, the modified estimation procedure failed to generate correct, fully converged estimates for the parameters associated with the more difficult data. When the third set of artificial data was used as input to the estimation routine, a potentially revealing result was obtained. Once again, the derivatives displayed the same pattern vis-à-vis the different levels of the parameters. In addition, however, several specific parameters were observed to have values for their derivatives that far exceeded those of the other parameters at the same level. The steepest descent phase terminated after making some modifications to the values of these parameters while having little impact on the remaining parameters. Fletcher-Powell proceeded in the same vein until from 30 to 50 iterations had taken place. At that time, considerable changes were observed to take place in the values of nearly all parameters.
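The alternating modification described above can be sketched as a masked gradient step. The Python below is a schematic illustration with hypothetical index bookkeeping, not the modified program itself:

```python
import numpy as np

def alternating_descent(grad, x0, between_idx, steps=200, lr=1e-3):
    """Alternate full steepest-descent steps with steps that update only
    the between-groups parameters (between_idx), holding the
    within-groups values fixed."""
    x = x0.astype(float)
    between = np.zeros(len(x0))
    between[between_idx] = 1.0
    for k in range(steps):
        g = grad(x)
        if k % 2 == 1:          # odd iterations: between-groups block only
            g = g * between
        x = x - lr * g
    return x
```

Because the masked steps cannot enlarge the within-groups derivatives, this gives the between-groups parameters additional opportunities to catch up, which is consistent with the small reductions in Fletcher-Powell iterations observed for the convergent data sets.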
This would appear to reflect the attainment of an approximate information matrix that departed substantially from the identity matrix. Despite the substantial changes in the values of all the parameters, convergence proved to be elusive. Many of the parameters were observed to take on relatively stable values while several continued to slowly oscillate. Inspection of the relatively stable parameter values revealed them to be within 10 or 20 percent of the generating values. Those for the unstable parameters proved to be nearly unrelated to those of their progenitors.

An Illustrative Interpretation of the NLS Results

At this point it may be useful to illustrate how interpretation of the results from an application of the model might be performed. While the estimation procedure failed to yield fully converged estimates for the parameters in the model dealing with the NLS data, the intermediate values at the point at which the estimation routine stopped provide a reasonable set of results for this purpose. These intermediate results are presented in Table 7. The following interpretation of these results is based upon the assumptions that the model had acceptable fit to the data and that all of the point estimates differed from zero by more than two standard errors. For ease in interpreting the parameter estimates associated with the linkages in the model (with the exception of those associated with measurement error), the estimates have been included in the diagrammatic form of the model presented in Figure 13. The estimates arising at the within-schools level appear alone, while those at the between-schools level are contained within parentheses. Parameter values that were fixed are underlined to distinguish them from those which were unconstrained during the estimation process. With respect to both the between- and within-schools levels there are two general sets of results that might be of interest.
The first involves the measurement aspect of the model, while the second is associated with the interrelationships among the latent variables themselves. To keep the discussion at a more substantive level, the measurement-related results are not addressed in any great detail except to note that the very large values associated with θ19 and θ22 at both levels point to a serious problem in the definition of a common verbal coursework variable. This is clearly a function of the low degree of association between the two variables. Were the model to be reformulated based on these results, it would be preferable to posit independent latent variables associated with both of these variables rather than one common latent variable.

Table 7
Intermediate Parameter Estimates for NLS Data

    General      Within-School     Between-School
    Parameter    Level Estimate    Level Estimate
    θ1           .762              .643
    θ2           .259              .014
    θ3           .108              .062
    θ4           .084              .093
    θ5           2.352             .919
    θ6           1.910             .008
    θ7           1.590             .051
    θ8           -5.042            -.775
    θ9           2.323             -.378
    θ10          4.680             .304
    θ11          .868              -.633
    θ12          2.053             3.267
    θ13          1.085             .908
    θ14          .209              .809
    θ15          .823              .903
    θ16          6.107             179.138
    θ17          166.412           .152
    θ18          5.155             .379
    θ19          6973.252          9284.864
    θ20          8.803             .343
    θ21          5.429             .234
    θ22          18297.432         -.002

[Figure 13. General Diagrammatic Structure of Estimated Model Using NLS Data with Selected Parameter Estimates.]

Turning to the estimated relationships among the latent exogenous and endogenous variables set forth in Figure 13, we now consider the within- and between-schools results in turn. At the within-schools level several interesting results may be noted.
Sex, ethnicity, and socio-educational status (SEdS) are all related to verbal skill coursework in ways that might be expected. The positive coefficients for ethnicity and SEdS simply indicate that, at the within-schools level, those who are white and those who come from families with parents having high levels of education tend to take more units of verbal-related courses. The negative coefficient associated with sex indicates that males are less inclined to take such courses, all else being equal. With respect to verbal achievement, the results at the within-schools level are not entirely anticipated. Sex has a coefficient with a positive value while SEdS has a negative, albeit small, value. As would be expected, ethnicity and verbal skill coursework are both related to verbal achievement with positive coefficients. It would appear from these results that the typical univariate relationships observed among sex, parents' education, and verbal achievement are explained more by the indirect effects of these two background variables through verbal skill coursework than through direct effects on verbal achievement itself. The results at the between-schools level differ considerably in both magnitude and direction from those at the within-schools level. The coefficients linking the school-level aggregates of sex, ethnicity, and SEdS to verbal skill coursework are considerably smaller than their counterparts at the within-schools level. In addition, at this level, ethnicity is associated with verbal skills coursework through a coefficient having a negative value, implying that, for schools of equivalent sex and SEdS, those that are not composed entirely of white students have marginally more verbal skill coursework taken by their students. With respect to the coefficients relating sex, ethnicity, SEdS, and verbal skill coursework to verbal achievement at the between-schools level, several differences may also be noted from the results at the within-schools level.
At this level, sex has a small negative coefficient while the SEdS variable has a coefficient with a small positive value. Both are opposite in sign to their within-schools counterparts. On the other hand, both ethnicity and verbal skill coursework are associated with positive coefficients, as they are at the within-schools level. In general, the exogenous school-level aggregate variables of sex, ethnicity, and SEdS have a much weaker effect on verbal skills coursework at this level than at the within-schools level. This would appear to reflect an institutional emphasis on verbal skill coursework that is considerably less sensitive to such factors as sex, ethnicity, and parents' educational level than is the behavior of the students within the schools. Unfortunately, such factors as ethnicity, parents' level of education, and verbal skill coursework are even more strongly related to verbal achievement at the between-schools level than at the within-schools level.

The results discussed in this section should be viewed as illustrative of the type of interpretation afforded through the use of this model. Since the parameter estimates used were not fully converged estimates, their values should not be used to come to any real substantive conclusions with respect to this set of variables. Furthermore, the lack of any estimated standard errors makes the exercise carried out at this point even more tentative in nature.

Chapter 6
Summary of Results, Conclusions, and New Directions

In some respects the nature of the work presented here is atypical of that necessary for most dissertations in educational psychology. Rather than being directed at answering a specific set of questions through the use of available analytic techniques, its purpose was the implementation of a relatively new analytic technique in the context of multi-level data. As such, the most desirable result would be a useful process.
The results of the work carried out thus far are not, therefore, confined simply to the set of analyses performed, but include the development of the components necessary for those analyses. These components included the statement of the model itself, the first and second derivatives of the effective part of the log likelihood function, and the computer program which makes use of them for the estimation of parameter values and asymptotic standard errors. Inasmuch as can be determined from the behavior of the computer programs on the first two sets of artificial data, where the expected and correct results were obtained, the process has been defined and implemented. The question that remains at this point has to do with its broader usefulness. The failure of the estimation routine when applied to the NLS data clearly indicates that, as things now stand, the process cannot be successfully implemented for all sets of data. In the following section, these results are reviewed and the implications for further work in this area are considered. For the convenience of the reader, where equations are referenced they appear fully, with their original equation numbers.

The Model

The model which was developed has, as its basis, the simple random effects model in the multivariate form. Thus, the overall variance-covariance matrix was seen as being composed of two additive components:

$$\Sigma = \Sigma_w + \Sigma_b \tag{34}$$

where the terms on the right-hand side of the equation arise at the within-groups and between-groups levels, respectively. Under the assumption of multivariate normality, previous authors have addressed the problem of arriving at unrestricted maximum likelihood estimates for $\Sigma_b$ and $\Sigma_w$. Work has also been carried out to permit restricted maximum likelihood estimation under the constraints of some very simple models.
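The additive decomposition in (34) can be illustrated numerically. The sketch below is illustrative only: the data are simulated, and the divisors used for the pooled within- and between-group sample matrices are the common unbiased ones, which need not match the normalizations used elsewhere in this work.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 200, 30, 3          # groups, subjects per group, variables

# Assumed within- and between-group covariance components (illustrative).
Sigma_w = np.array([[2.0, 0.5, 0.0],
                    [0.5, 1.0, 0.3],
                    [0.0, 0.3, 1.5]])
Sigma_b = np.diag([1.0, 0.5, 0.25])

# y_ij = a_i + e_ij,  a_i ~ N(0, Sigma_b),  e_ij ~ N(0, Sigma_w)
a = rng.multivariate_normal(np.zeros(p), Sigma_b, size=m)        # (m, p)
e = rng.multivariate_normal(np.zeros(p), Sigma_w, size=(m, n))   # (m, n, p)
y = a[:, None, :] + e

ybar_i = y.mean(axis=1)                                          # group means
dev_w = y - ybar_i[:, None, :]
S = np.einsum('ijk,ijl->kl', dev_w, dev_w) / (m * (n - 1))       # pooled within
dev_b = ybar_i - ybar_i.mean(axis=0)
S_b = np.einsum('ik,il->kl', dev_b, dev_b) / (m - 1)             # between means

# Cov(ybar_i) = Sigma_b + Sigma_w / n, so a moment estimate of Sigma_b
# subtracts the within component from the covariance of the group means.
Sigma_b_hat = S_b - S / n
print(np.round(S, 2))            # close to Sigma_w
print(np.round(Sigma_b_hat, 2))  # close to Sigma_b
```

With many groups, the two components are recovered separately, which is exactly the structure the maximum likelihood machinery below exploits.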
The efforts of the current author were directed at formulating a more general structural equation model applicable to this type of hierarchical data. The model developed is applicable to both the within-groups and between-groups variance-covariance matrices simultaneously, and was generally patterned after the linear structural equation model considered by Jöreskog, Wiley, and others. The full models that were developed for $\Sigma_w$ and $\Sigma_b$ are as follows:

$$
\Sigma_w =
\begin{bmatrix}
\Gamma \Sigma_\zeta \Gamma' + \Psi_w &
\Gamma \Sigma_\zeta B' (I-A)^{-t} \Lambda' \\
\Lambda (I-A)^{-1} B \Sigma_\zeta \Gamma' &
\Lambda[(I-A)^{-1} B \Sigma_\zeta B' (I-A)^{-t}]\Lambda'
+ \Lambda[(I-A)^{-1} \Sigma_\delta (I-A)^{-t}]\Lambda' + \Psi_\varepsilon
\end{bmatrix}
\tag{35}
$$

$$
\Sigma_b =
\begin{bmatrix}
\Gamma_b \Sigma_{\zeta b} \Gamma_b' + \Psi_{wb} &
\Gamma_b \Sigma_{\zeta b} B_b' (I-A_b)^{-t} \Lambda_b' \\
\Lambda_b (I-A_b)^{-1} B_b \Sigma_{\zeta b} \Gamma_b' &
\Lambda_b[(I-A_b)^{-1} B_b \Sigma_{\zeta b} B_b' (I-A_b)^{-t}]\Lambda_b'
+ \Lambda_b[(I-A_b)^{-1} \Sigma_{\delta b} (I-A_b)^{-t}]\Lambda_b' + \Psi_{\varepsilon b}
\end{bmatrix}
\tag{36}
$$

Maximum Likelihood Estimation

Under the assumption that the underlying data follow a multivariate normal distribution, Schmidt (1969) has shown that the effective part of the log likelihood function, where the parameters are simply $\Sigma_w$ and $\Sigma_b$, can be expressed as follows:

$$
\log L = -\frac{mn-m}{2}\log|\Sigma_w| - \frac{m}{2}\log|\Sigma_w + n\Sigma_b|
- \frac{mn}{2}\operatorname{tr}\{\Sigma_w^{-1} S\}
- \frac{m}{2}\operatorname{tr}\{(\Sigma_w + n\Sigma_b)^{-1} S_b\}
\tag{20}
$$

where

$$
S = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}
(y_{ij}-\bar{y}_{i\cdot})(y_{ij}-\bar{y}_{i\cdot})'
\tag{17}
$$

$$
S_b = \frac{n}{m}\sum_{i=1}^{m}
(\bar{y}_{i\cdot}-\bar{y}_{\cdot\cdot})(\bar{y}_{i\cdot}-\bar{y}_{\cdot\cdot})'
\tag{18}
$$

Substitution of the parametric expressions for $\Sigma_w$ and $\Sigma_b$ in this equation yielded the fully parameterized version of the log likelihood function. The values of the parameters which maximize this function for a given $S$ and $S_b$ are maximum likelihood estimates of the parameters.

Test of Fit and Standard Error Estimation

Given a set of maximum likelihood estimates for the parameter matrices in a particular application, it was seen possible to produce a statistical test of the fit of the model to a given set of data. This test made use of the ratio of the value of the likelihood function evaluated at the solution point for the maximum likelihood estimates to the value over the unrestricted solution space.
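The effective log likelihood of eq. (20) is straightforward to evaluate for trial values of the two covariance components. The following sketch assumes the reconstruction of (20) given above; the function name and the test values are illustrative, not taken from the original program.

```python
import numpy as np

def effective_loglik(Sigma_w, Sigma_b, S, S_b, m, n):
    # Effective part of the log likelihood, eq. (20); terms free of the
    # parameters are dropped.  S and S_b are the sample matrices of
    # eqs. (17)-(18).
    V = Sigma_w + n * Sigma_b
    sw, ldw = np.linalg.slogdet(Sigma_w)
    sv, ldv = np.linalg.slogdet(V)
    if sw <= 0 or sv <= 0:
        return -np.inf                     # outside the admissible region
    return (-(m * n - m) / 2.0 * ldw
            - m / 2.0 * ldv
            - m * n / 2.0 * np.trace(np.linalg.solve(Sigma_w, S))
            - m / 2.0 * np.trace(np.linalg.solve(V, S_b)))

# Illustrative check: with S and S_b fixed at model-implied values,
# grossly inflating Sigma_w lowers the criterion.
m, n = 50, 10
Sigma_w0 = np.array([[2.0, 0.5], [0.5, 1.0]])
Sigma_b0 = 0.5 * np.eye(2)
S, S_b = Sigma_w0, Sigma_w0 + n * Sigma_b0
print(effective_loglik(Sigma_w0, Sigma_b0, S, S_b, m, n))
print(effective_loglik(10 * Sigma_w0, Sigma_b0, S, S_b, m, n))
```

Evaluating this function at the restricted and unrestricted solutions supplies the two log likelihood values that the test of fit compares.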
The test statistic,

$$
\chi^2 = 2(\log L_{\text{unrestricted}} - \log L_{\text{restricted}}),
\tag{50}
$$

is, for large sample sizes, distributed as chi-square with degrees of freedom equal to the difference between the number of unique elements in $\Sigma_b$ and $\Sigma_w$ and the number of unique parameters estimated in the model.

It was also possible to produce asymptotic standard errors associated with the estimated parameters. The procedure considered made use of the fact that the asymptotic covariance matrix of the maximum likelihood estimators is equal to the negative inverse of the expected value of the matrix of second partial derivatives. Based on earlier work by Schmidt (1969), it was seen that this could be expressed as follows:

$$
E\left[-\frac{\partial^2 \log L}{\partial\theta_i\,\partial\theta_j}\right]
= \frac{mn-m}{2}\operatorname{tr}\left\{\Sigma_w^{-1}\frac{\partial\Sigma_w}{\partial\theta_i}\,
\Sigma_w^{-1}\frac{\partial\Sigma_w}{\partial\theta_j}\right\}
+ \frac{m}{2}\operatorname{tr}\left\{(\Sigma_w+n\Sigma_b)^{-1}
\frac{\partial(\Sigma_w+n\Sigma_b)}{\partial\theta_i}\,
(\Sigma_w+n\Sigma_b)^{-1}\frac{\partial(\Sigma_w+n\Sigma_b)}{\partial\theta_j}\right\}
\tag{48}
$$

Thus, it was necessary to obtain expressions for the first and second derivatives of $\Sigma_w$ and $\Sigma_b$ with respect to individual parameters and parameter pairs, respectively. These are set forth fully in Appendices B and C.

Obtaining the Maximum Likelihood Estimates

It was seen that obtaining the maximum likelihood estimates of the parameters in a particular application is not a simple process. In theory, the solutions for the set of equations resulting from setting the first partial derivatives of the log likelihood function equal to zero would provide estimating formulae for the various parameter estimates. The complexity of the set of simultaneous equations precluded the derivation of such a set of formulae. Alternatively, a set of numerical procedures based on the method of steepest descent and the method of Davidon-Fletcher-Powell were adopted to provide the values of the estimates for any particular application. The matrix expressions for the first partial derivatives of the log likelihood function, necessary for the application of both methods, were derived and presented in Appendix A.
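Returning to eq. (48): given expressions for the derivatives of the two covariance components, the expected information matrix and the asymptotic standard errors can be assembled directly. The sketch below is an illustration under an assumed saturated parameterization, in which θ consists simply of the unique elements of Σ_w and Σ_b; in the full model the derivative matrices would instead come from the structural expressions of eqs. (35)-(36).

```python
import numpy as np

def sym_basis(p):
    # One symmetric indicator matrix per unique element of a p x p
    # covariance matrix.
    out = []
    for i in range(p):
        for j in range(i + 1):
            E = np.zeros((p, p))
            E[i, j] = E[j, i] = 1.0
            out.append(E)
    return out

def expected_information(Sigma_w, Sigma_b, m, n):
    # Expected information from eq. (48), for the saturated
    # parameterization theta = (elements of Sigma_w, elements of Sigma_b).
    p = Sigma_w.shape[0]
    V = Sigma_w + n * Sigma_b
    Wi, Vi = np.linalg.inv(Sigma_w), np.linalg.inv(V)
    basis = sym_basis(p)
    dW = basis + [np.zeros((p, p))] * len(basis)   # dSigma_w / dtheta
    dV = basis + [n * E for E in basis]            # d(Sigma_w + n Sigma_b) / dtheta
    q = len(dW)
    info = np.empty((q, q))
    for a in range(q):
        for b in range(q):
            info[a, b] = ((m * n - m) / 2.0
                          * np.trace(Wi @ dW[a] @ Wi @ dW[b])
                          + m / 2.0
                          * np.trace(Vi @ dV[a] @ Vi @ dV[b]))
    return info

m, n = 100, 20
Sigma_w = np.array([[2.0, 0.5], [0.5, 1.0]])
Sigma_b = np.diag([0.5, 0.25])
info = expected_information(Sigma_w, Sigma_b, m, n)
se = np.sqrt(np.diag(np.linalg.inv(info)))   # asymptotic standard errors
```

Inverting the information matrix and taking the square roots of its diagonal yields the standard errors; the same recipe applies once the structural derivative matrices of Appendices B and C are substituted for the indicator matrices used here.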
The adequacy of these approaches is considered in the discussion of the results of their application.

Results of Analyses

The first set of analyses which estimated parameters in the model made use of a set of data employed by Schmidt (1969). As a result of Schmidt's work, the parameter estimates for four relatively simple models of the sort considered herein were already known. When the present estimation procedure was applied to those data to estimate the parameters in the same models, the same results were obtained in each case. It was concluded that the estimation procedures implemented by the present author were correct and satisfactory insofar as these simple models were concerned.

A new set of data was generated based on a more complex structure than that found in Schmidt's work. The model used to generate this second set of data contained non-zero values for at least one element in each parameter matrix of the full model. This set of data was then analyzed employing the correctly specified model to see if the generating values would be faithfully reproduced. Once again, the estimation procedure performed adequately and yielded the expected results. Since the estimation procedure would be highly unlikely to provide correct parameter estimates in the event that some error were present anywhere in the conceptual process, including the programming stage, it seemed reasonable to conclude that the estimation procedures had been, in fact, successfully implemented.

The estimation routine was then used in an attempt to generate parameter estimates for a model which addressed aspects of the within- and between-school variability of a set of variables drawn from the National Longitudinal Study of the High School Class of 1972. Despite the use of an excessive number of iterations, the estimation routine failed to yield fully converged estimates for the free parameters in the hypothesized model.
In an attempt to gain a better understanding of the failure of the estimation procedure in the NLS application, an additional set of artificial data was generated using a model which would yield sharply unequal variance terms within each variance-covariance matrix. When the estimation routine was applied to this set of data using the correctly specified model, a satisfactory solution was not obtained.

A variety of attempts were then carried out to improve the convergence of the estimation procedure. Initially, the procedure was modified to require the performance of a larger number of steepest descent iterations. Further modifications were directed at allowing the steepest descent method to alternate between improvements on all parameters and improvements on the between-groups parameters only. These efforts failed to produce an estimation procedure that would accurately replicate the generating parameters associated with the final artificial data set.

Conclusions

At this point, it seems safe to conclude that the original goals of the effort reported here have been met to some extent. The model for linear structural relations applicable to multi-level data was successfully derived. The conditions that must be satisfied for the attainment of maximum likelihood estimates of the parameters in the model were derived. Procedures for testing the fit of an estimated model and for producing asymptotic standard errors associated with estimated parameters were set forth. Finally, a computer program intended to yield parameter estimates through the use of iterative methods was successfully implemented.

What remains as a problem confronting the general use of this set of products is the inability of the estimation routine to provide satisfactory parameter estimates when confronted with a difficult set of data. Had it been possible to specify the conditions under which convergence could be assured, this handicap would not prove to be such a problem.
Alternatively, had an estimation procedure insensitive to such problems been found, this difficulty would have been overcome. Such was not the case. Further work in either or both of these areas is clearly necessary. While the constraints facing the present author precluded an attack on either of these fronts, experience with the currently implemented estimation routine provided some results leading to speculation on the direction in which efforts to attain the latter goal should proceed. This speculation is briefly set out below.

For the steepest descent procedure to operate effectively in making progress toward the maximum likelihood estimates associated with a particular problem, it seems necessary that the matrix of second derivatives be similar, within a constant multiple, to an identity matrix. In this situation, changes in the intermediate values of the parameter estimates would proceed relatively uniformly. The nature of the current application militates against this. Where the structure at both levels of the hierarchy is the same, changes take place at the within-groups level much more rapidly than at the between-groups level. In light of the behavior of the estimation procedure when applied to the third set of artificial data, it seems clear that this phenomenon can also take place independently of the multi-level nature of the data to which it applies, with some parameters at a given level being subject to substantial changes while others change but little.

It is also clear that the effective operation of the Davidon-Fletcher-Powell procedure is dependent upon the characteristics of the data with which it operates. Jöreskog has indicated that the intermediate parameter estimates associated with the LISREL problem must be "close" to the solution point for convergence to take place. The nature of this closeness would seem to be related to obtaining a region within which the matrix of second derivatives is relatively constant.
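The importance of the second-derivative matrix being close to a multiple of the identity can be seen on a toy quadratic objective. This is purely illustrative; the Hessian and starting point below are hypothetical and are not taken from the likelihood of this study.

```python
import numpy as np

# Hypothetical two-parameter quadratic whose Hessian is far from a
# multiple of the identity (condition number 100), mimicking the uneven
# within-/between-level curvature described above.
H = np.diag([100.0, 1.0])
grad = lambda x: H @ x                    # gradient of 0.5 * x' H x

x_sd = np.array([0.01, 1.0])              # a near-worst-case start
for _ in range(50):                       # steepest descent, exact line search
    g = grad(x_sd)
    x_sd = x_sd - ((g @ g) / (g @ H @ g)) * g

x_nt = np.array([0.01, 1.0])
x_nt = x_nt - np.linalg.solve(H, grad(x_nt))   # one step using exact curvature

print(np.linalg.norm(x_sd))   # still well away from the minimizer at 0
print(np.linalg.norm(x_nt))
```

After fifty exact-line-search descent iterations the error norm has shrunk by only a factor of roughly e, because each step is multiplied by (κ-1)/(κ+1) with κ = 100, whereas a single step that uses the exact second-derivative matrix lands on the minimizer of the quadratic directly.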
Ideally, steepest descent would terminate only after taking the intermediate estimates into such a region. In the current application, it seems as though this does not happen. At least with the two more difficult sets of data to which the currently implemented estimation procedures have been applied, the steepest descent phase terminates prior to the attainment of such a region. At these points, the Fletcher-Powell routine is incapable of providing a converged solution, probably because the matrix of second derivatives is quite variable within these regions.

Should the foregoing speculation prove to characterize the nature of the estimation problem encountered in the course of the current research, any iterative approach which will generally be capable of the attainment of maximum likelihood estimates for the type of model addressed here needs to explicitly incorporate the matrix of second derivatives and not an iterative approximation to it. Thus, it seems that the most promising approach to adopt is that of Newton's method, which makes use of expressions for both the first and second derivatives of the likelihood function with respect to the parameters of the model.

LIST OF REFERENCES

Bielby, W. T., R. M. Hauser, and D. L. Featherman. "Response Errors of Black and Nonblack Males in Models of the Intergenerational Transmission of Socioeconomic Status." American Journal of Sociology, 82 (1977), pp. 1242-1288.

Blalock, H. M., Jr. Causal Inference in Non-Experimental Research. Chapel Hill: University of North Carolina Press, 1964.

Blalock, H. M., Jr. "Multiple Indicators and the Causal Approach to Measurement Error." American Journal of Sociology, 75 (1969), pp. 264-272.

Blalock, H. M., Jr. Causal Models in the Social Sciences. Chicago: Aldine-Atherton, 1971.

Bock, R. D. "Components of Variance Analysis as a Structural and Discriminal Analysis for Psychological Tests." British Journal of Statistical Psychology, 13 (1960), pp. 151-163.
Bock, R. D. Multivariate Statistical Methods in Behavioral Research. New York: McGraw-Hill, 1965.

Bock, R. D. and R. E. Bargmann. "Analysis of Covariance Structure." Psychometrika, 31 (1966), pp. 507-534.

Brookover, Wilbur, Charles Beady, Patricia Flood, John Schweitzer, and Joe Wisenbaker. School Social Systems and Student Achievement. New York: Praeger, 1979.

Burstein, L. J. and M. D. Miller. "The Use of Within-Group Slopes as Indices of Group Outcomes." Paper presented at the meeting of the American Educational Research Association, San Francisco, 1976.

Burstein, L. J. "Assessing Classroom Effects Using Within Class Regression Curves." Paper presented at the meeting of the American Educational Research Association, San Francisco, 1979.

Burt, C. "Factor Analysis and Analysis of Variance." British Journal of Psychology, 1 (1947), pp. 3-26.

Cramer, J. S. Empirical Econometrics. Amsterdam: North Holland Publishing Company, 1969.

Creasey, M. A. "Analysis of Variance as an Alternative to Factor Analysis." Journal of the Royal Statistical Society, 19 (1957), pp. 318-325.

Cronbach, L. J. "Research on Classrooms and Schools: Formulation of Questions, Design and Analysis." An occasional paper of the Stanford Evaluation Consortium, undated.

Davidon, W. C. "Variance Algorithm for Minimization." Computer Journal, 10 (1968), pp. 406-410.

Duncan, O. D. "Path Analysis: Sociological Examples." American Journal of Sociology, 72 (1966), pp. 1-16.

Duncan, O. D. Introduction to Structural Equation Models. New York: Academic Press, 1975.

Fisher, F. M. The Identification Problem in Econometrics. New York: McGraw-Hill, 1966.

Fisher, F. M. "Generalization of the Rank and Order Condition for Identifiability." In Readings in Econometric Theory. Eds. J. M. Dowling and F. R. Glahe. Boulder: Colorado Associated University Press, 1970, pp. 331-347.

Fletcher, R. and M. J. D. Powell. "A Rapidly Convergent Descent Method for Minimization." Computer Journal, 6 (1963), pp. 163-168.
Geraci, V. J. "Identification of Structural Equation Models with Measurement Error." In Latent Variables in Socio-Economic Models. Eds. D. J. Aigner and A. S. Goldberger. Amsterdam: North Holland Publishing Company, 1977, pp. 163-185.

Glendening, Linda. "The Effects of Correlated Units of Analysis: Choosing the Appropriate Unit." Paper presented at the meeting of the American Educational Research Association, San Francisco, 1976.

Gruvaeus, G. T. and K. G. Jöreskog. A Computer Program for Minimizing a Function of Several Variables. Princeton, New Jersey: Educational Testing Service, 1970.

Hannan, M. T. "Aggregation Gain Reconsidered." Paper presented at the meeting of the American Educational Research Association, San Francisco, 1976.

Harman, H. H. Modern Factor Analysis. Chicago: University of Chicago Press, 1967.

Hemmerle, W. J. "Obtaining Maximum-Likelihood Estimates of Factor Loadings and Communalities Using an Easily Implemented Iterative Computer Procedure." Psychometrika, 30 (1965), pp. 291-301.

Jöreskog, Karl G. "Testing a Simple Structure Hypothesis in Factor Analysis." Psychometrika, 31 (1966), pp. 165-178.

Jöreskog, Karl G. UMFLA: A Computer Program for Unrestricted Maximum Likelihood Factor Analysis. Princeton, New Jersey: Educational Testing Service, 1966.

Jöreskog, Karl G. "Some Contributions to Maximum Likelihood Factor Analysis." Psychometrika, 32 (1967), pp. 443-482.

Jöreskog, Karl G. "A General Approach to Confirmatory Maximum Likelihood Factor Analysis." Psychometrika, 34 (1969), pp. 183-202.

Jöreskog, Karl G. "A General Method for Analysis of Covariance Structures." Biometrika, 57 (1970), pp. 239-251.

Jöreskog, Karl G. "Estimation and Testing of Simplex Models." British Journal of Mathematical and Statistical Psychology, 23 (1970), pp. 121-145.

Jöreskog, Karl G. "Statistical Analysis of Sets of Congeneric Tests." Psychometrika, 36 (1971), pp. 103-133.

Jöreskog, Karl G. and Marielle van Thillo.
LISREL: A General Computer Program for Estimating a Linear Structural Equation System Involving Multiple Indicators of Unmeasured Variables. Princeton, New Jersey: Educational Testing Service, 1972.

Jöreskog, Karl G. "Analysis of Covariance Structures." In Multivariate Analysis III. Ed. P. R. Krishnaiah. New York: Academic Press, 1973, pp. 263-285.

Jöreskog, Karl G. "Analyzing Psychological Data by Structural Analysis of Covariance Matrices." In Contemporary Developments in Mathematical Psychology, Volume II. Eds. R. C. Atkinson, D. H. Krantz, R. D. Luce, and P. Suppes. San Francisco: W. H. Freeman and Company, 1974, pp. 1-56.

Jöreskog, Karl G. "Structural Equation Models in the Social Sciences: Specification, Estimation and Testing." In Applications of Statistics. Ed. P. R. Krishnaiah. Amsterdam: North Holland Publishing Company, 1977, pp. 265-287.

Jöreskog, Karl G. and A. S. Goldberger. "Estimation of a Model with Multiple Indicators and Multiple Causes of a Single Latent Variable." Journal of the American Statistical Association, 70 (1973), pp. 631-639.

Jöreskog, Karl G. and G. Gruvaeus. RMFLA: A Computer Program for Restricted Maximum Likelihood Factor Analysis. Princeton, New Jersey: Educational Testing Service, 1967.

Jöreskog, Karl G. and D. N. Lawley. "New Methods in Maximum Likelihood Factor Analysis." British Journal of Mathematical and Statistical Psychology, 21 (1968), pp. 85-96.

Kendall, M. G. and A. Stuart. The Advanced Theory of Statistics, Vol. II. New York: Hafner, 1961.

Kerlinger, F. Foundations of Behavioral Research. New York: Holt, Rinehart, and Winston, 1973.

Kirk, R. E. Experimental Design: Procedures for the Behavioral Sciences. Belmont: Wadsworth, 1968.

Kohn, M. L. and C. Schooler. "The Reciprocal Effects of the Substantive Complexity of Work and Intellectual Flexibility: A Longitudinal Assessment." American Journal of Sociology, 84 (1978), pp. 24-52.

Koopmans, T. C. "Identification Problems in Economic Model Construction."
Econometrica, 17 (1949), pp. 128-143.

Koopmans, T. C. and O. Reiersøl. "The Identification of Structural Characteristics." Annals of Mathematical Statistics, 21 (1950), pp. 165-181.

Kuhn, T. S. The Structure of Scientific Revolutions. Chicago: The University of Chicago Press, 1970.

Lawley, D. N. "The Estimation of Factor Loadings by the Method of Maximum Likelihood." Proceedings of the Royal Society of Edinburgh.

Lawley, D. N. "Estimation in Factor Analysis Under Various Initial Assumptions." British Journal of Statistical Psychology, 11 (1958), pp. 1-12.

Lawley, D. N. and A. E. Maxwell. Factor Analysis as a Statistical Method. London: Butterworth's, 1963.

Lawley, D. N. and A. E. Maxwell. Factor Analysis as a Statistical Method, 2nd Ed. New York: American Elsevier, 1971.

Lord, F. M. and M. E. Novick. Statistical Theories of Mental Test Scores. Reading: Addison-Wesley, 1968.

Luenberger, D. G. Introduction to Linear and Nonlinear Programming. Reading: Addison-Wesley, 1973.

Malinvaud, E. Statistical Methods of Econometrics. Chicago: Rand McNally, 1966.

Malinvaud, E. Statistical Methods of Econometrics. 2nd Revised Ed. Amsterdam: North Holland Publishing Company, 1970.

Maw, C. E. "The Problem of Data Aggregation and Cross-Level Inference with Categorical Data." Paper presented at the meeting of the American Educational Research Association, San Francisco, 1976.

McDonald, R. P. "Testing Pattern Hypotheses for Covariance Matrices." Psychometrika, 39 (1974), pp. 189-200.

McDonald, R. P. "Testing Pattern Hypotheses for Correlation Matrices." Psychometrika, 40 (1975), pp. 253-255.

McDonald, R. P. and H. Swaminathan. "A Simple Matrix Calculus with Applications to Structural Models for Multivariate Data--Part 1: Theory." The Ontario Institute for Studies in Education, undated.

Miller, J., C. Schooler, M. L. Kohn, and K. A. Miller. "Women and Work: The Psychological Effects of Occupational Conditions." American Journal of Sociology, 85 (1979), pp. 66-94.

Morrison, D. F.
Multivariate Statistical Methods. New York: McGraw-Hill, 1967.

Mortimer, J. T. and J. Lorence. "Work Experience and Occupational Value Socialization: A Longitudinal Study." American Journal of Sociology, 84 (1979), pp. 1361-1385.

Mukherjee, B. N. "Likelihood Ratio Tests of Statistical Hypotheses Associated with Patterned Covariance Matrices in Psychology." British Journal of Mathematical and Statistical Psychology, 23 (1970), p. 120.

Mukherjee, B. N. "Techniques of Covariance Structural Analysis." Australian Journal of Statistics, 18 (1976), pp. 131-150.

Potthoff, R. F. and S. N. Roy. "A Generalized Multivariate Analysis of Variance Model Useful Especially for Growth Curve Problems." Biometrika, 51 (1964), pp. 313-316.

Rao, C. R. Advanced Statistical Methods in Biometric Research. New York: Hafner Press, 1952.

Rao, C. R. Linear Statistical Inference and Its Applications. New York: Wiley, 1965.

Rock, D. A., C. E. Werts, R. L. Linn, and K. G. Jöreskog. "A Maximum Likelihood Solution to the Errors in Variables and Errors in Equations Model." Journal of Multivariate Behavioral Research, 12 (1977), pp. 187-197.

Schmidt, W. H. Covariance Structure Analysis of the Multivariate Random Effects Model. Dissertation, University of Chicago, 1969.

Schmidt, W. H. "Structural Equation Models and Their Application to Longitudinal Data." Paper presented at the Conference on Longitudinal Statistical Analysis, Boston, 1975.

Schmidt, W. H. and D. E. Wiley. "Analytic Problems in Longitudinal Data." Paper presented at the Conference on Methodological Concerns in Evaluational Research, Chicago, 1974.

Tiao, G. C. and W. Y. Tan. "Bayesian Analysis of Random Effect Models in the Analysis of Variance, I: Posterior Distributions of Variance Components." Biometrika, 52 (1965), pp. 37-53.

Tiao, G. C. and W. Y. Tan. "Bayesian Analysis of Random Effect Models in the Analysis of Variance, II: The Effect of Auto-Correlated Errors." Biometrika, 53 (1966), pp. 477-495.

Wiley, D. E.
"The Identification Problem for Structural Equation Models with Unmeasured Variables." In Structural Equation Models in the Social Sciences. Eds. A. S. Goldberger and O. D. Duncan. New York: Seminar Press, 1973, pp. 69-83.

Wiley, D. E., W. H. Schmidt, and W. J. Bramble. "Studies of a Class of Covariance Structure Models." Journal of the American Statistical Association, 68 (1973), pp. 317-323.

Wishart, J. "The Generalized Product Moment Distribution in Samples from a Normal Multivariate Population." Biometrika, 20 (1928), pp. 32-52.

Wright, S. "On the Nature of Size Factors." Genetics, 3 (1918), pp. 367-374.

Appendix A
First Derivatives of the Log Likelihood Function With Respect to the Parameter Matrices

[The matrix expressions for the first derivatives are not legible in this reproduction.]

Appendix D

Title Card (required)
Main Parameter Card (required)
Parameter Set (1 required for each non-zero parameter matrix)
99 (required to terminate reading of parameter sets)
*EOR

Title Card

Descriptive information printed at the start of each job.

Main Parameter Card

Consists of eight fields of three digits each; right justify all information.
1. number of groups
2. number of subjects per group
3. number of observed exogenous variables
4. number of observed endogenous variables
5. number of latent between groups exogenous variables
6. number of latent within groups exogenous variables
7. number of latent between groups endogenous variables
8. number of latent within groups endogenous variables

Parameter Sets

One set of these is required for each non-zero or non-fixed parameter matrix. Each set is composed of five items:
1. A parameter identification card containing a number from 1 to 16, right justified in columns 1-2.
2. A format card describing a row of the parameter matrix.
3. One card for each row of the parameter matrix containing the estimated (or fixed) values for the elements in that row.
4. A format card describing a row of the parameter specification matrix.
5. One card for each row of the parameter specification matrix.

The correspondence between the parameter ID numbers and the specific parameter matrices is set forth in Table 1. The format card describing the rows of the parameter matrix should contain only F or E formats. The format card describing the rows of the parameter specification matrix should contain only I formats. The parameter matrix should be presented as a rectangular matrix (e.g., symmetric matrices cannot appear as lower triangular). The parameter specification matrix is obtainable from the printout of the estimation routine and should consist of only 0's (for fixed elements) and integers ranging from 1 to the total number of unique parameter estimates. Elements which have been constrained to be equal should have the same number.

Table 1
Correspondence Between Parameter ID Numbers and Parameter Matrices

Number   Matrix
1        Λ
2        B
3        A
4        Σ_δ
5        Σ_ζ
6        Ψ_ε
7        Ψ_w
8        Γ
9        Λ_b
10       B_b
11       A_b
12       Σ_δb
13       Σ_ζb
14       Ψ_εb
15       Ψ_wb
16       Γ_b

Appendix E
Listing of Standard Error Program
[The FORTRAN source listing of the standard error program is not legible in this reproduction.]
800200002 020 882 09 00 H8.00.000000802982952 020 90.008029802930209 020 H0088.5.0080290802390002 020 H0088.00.00.002980298029090002 020 H0088.00.00.002980298228002 020 H0088.00.00.0029802980290002 020 8 . 00. 00 . 0.029 . 0029. 32952 020 90.008029402930209 020 9088.00.00.32980298023002 020 H0088.00.008029802900028002 020 H8.00.00.00298.H.908200 0020 8.002020 H8.00.008029008029802 020 908.820.000.029.00029890002 020 2088.20.00.329.20.080.02 020 H20.8.8.20.00.2000.0.329890002 020 2088.20.00.3292080290002 020 20.88.00.00802900028002 020 gm E17 000.88.00.00.H029.0029.0029890002 000.0 9088.00.00.0029.02002000298002 000.0 000.88.00.00.32900290202802 000.0 882 09 00 00088.00.00.000000029090002 000.0 000.88.00.00.H029.0029.0029890002 000.0 908.800.00.00298029.00298002 000.0 000.88.00.00.329002908002 000.0 8.00.00.002980290029500. 000.0 00.000029802900209 000.0 908.800.00.229.00.2020.3298002 000.0 000.88.00.00.52980290002000; 000.0 00012040323320 88108088108809 00 Cad . c . N . >4 . VA. HEAMZO. BHZO. mZvamE AA0. MASH. ANMZO. Amihvkmz AA0. .AAAEH. NASH. AwMAAAz AAQm.mmzh.Amzhvanz. AAO. Amzhémzhvmmzmh AAA.>0.mmZB.A.AaAZBvH~ANAE AA0. AmEHNnASHuAvaAOvVAAE AA<0 A. W AAN moA oAN E20 002009.20<00..200.0H80020 0209000000 020 83 09 00 08.20.20.22008029.0290000. 00<0 020.20.029.0029800209 00<0 02088.20.20802980020.029090002 0000 020.88.20.20.02920020208002 0000 82 09 00 02088200920000.0020.029890002 00<0 020.88.20.00.02980298029002 000.0 0208800008029.000290002 00.00 000.88.00.00.0029002908002 000.0 08.20.20.22000029.0298000. 000.0 90.200029002900209 00<0 02088202000290.0020.029890002 000.0 020.88.20.20.0029802908002 000.0 ANA.o.N.NA.NA.N0AE.A..ANMA20.MA200>0A2 AA<0+ 00020<0082H09<20000700.00200032009020300300 08.20.20.002900020.0290000. 
000.0 020.20.029.02020000209 000.0 882 09 00 02088.20.00.200080020.H029890002 00<0 020.88.20.00.H029.20.00290002 000.0 02088000080290.H0298002 000.0 000.88.00.00.H029.0029.020208002 000.0 82 09 00 90.982009200080020.029890002 000.0 02088.20.00.029.20.00290002 000.0 02088.00.00.0029.00020.H0290002 000.0 000.88.00.00.029002908002 0000 82 09 00 mAN «AN NAN NAN E21 00.20.20.8002080200 0000 0020.20 0009020000 2020.20 8.Hn00000:020 020<0H8300H+0020<00820090.20830000070200000000000 0 8. 22000020032033. 0 02000200902000.0000. 8 . 8. 20200200 000.0 00029. 0029. 900. 00. 0029000020 0000 08.000000290102900 000.0 08.00.8.H.H02980.200 000.0 08.000088000008200 0000 00209200 08.200088200000200 00<0 8.20.20.88.22008200 0000 H 09 00 08.00.200.00 00029 . 000208 . 00029. 0208 002.000.2000 0H . H 2.020. 0 H . H 0020 200020200 0H 02000. AH 0000. 0 $2200 200020200 0H 88080029. 0 H 88080029. 0H 88080029.. . 288080029. 0H 88.030029218803029 \8H200\ 202200 2.2.00.20.0030007003300? .8800 . 08003000. 08080000. 8082009020 00002 202200 00.20 \0200\ 202200 2880800.:8808200.:88003002188800? .28800900.:.880000.0H.880000.0H880800.. 588000.2880820.:8808302288080? .0H880890.288080.01880802288080.0003 202200 00.0 000 00.20 0000920 020000000.2200200002900 0209000000 020 200900 E22 CAA. .o .o.>A.>0.NAAE.A..AnAE,A..Av>nAZ AA<0 AWAS. o .NACAAMAAEB.¢AASB.NAAE.Hv>nAE AA<0 8. WA. A .¢AZP.NAZB.AAASEVQQ< AA<0 0 000. 00. H029. 0029000209 0000 000. 8.20.00. 0029. .0. 2029090002 000.0 02..0. .8..8. 20. 00. H029. 20. 000200002 000.0 ..20 00. 2000. 0.. 0.2909002 000.0 ..8 2.0 00. H029. 20. 00290002 000.0 020. 8 8.00. 00.002950200029002 000.0 000. 88. 00.000029802900002 000.0 200900 00088000000000.029890002 000.0 000.88.00.00.H029.0029.0029090002 000.0 000.8.8.00.00.0029.0029.H0298002 000.0 000.88.00.00.029002900002 000.0 08.00.00.00298029.0290000. 000.0 0 00. 00. 0029. 029800209 00<0 000.88.00.00.029800208029090002 000.0 00.088000080290029.029090002 0000 08.000040298000200 0000 08.00.2000 08.00.00.0290080298000. 
000.0 0008820000029.00029890002 000.0 02.0.8..8.20.00.H029.20.08002 000.0 020.8. 8. 20. 00.200060029090002 000.0 020. 8 .20.00.H029.20.00298002 000.0 020..88. 00.000029000290002 000.0 000.88 .00 00.029002980298002 000.0 00088. 00. 00.002980200029002 000.0 000.88.00.00.029002900002 000.0 0 09 00 08.00.2080 02000002009020 008H88H.08H.0.8H.08H.08H.08H.H8H809 00 NoA AoA E23 20.802020220080020.0208002 0000 200900 0008000000000.000200200002 000.0 200900 00088000000000.029890002 000.0 00088.00.00.H029.0029.0029090002 000.0 000.88.00.008029.029.0002 000.0 000.88.00.00.02980290029002 000.0 CNA . o. N . NA . VA . N093. . AAmAZO. ”50035:: AA<0+ 02200.0:20090230000.00.0020<0H82009<20300300 08.00.00.0029.029000208000. 000.0 000. 00. 02980020000209 000.0 200900 08.00.00.0000.0029.0298000. 0000 00.008029002900209 000.0 000.88.00.00.02908029890002 000.0 0008800008029.029800208002 000.0 000.88.00.00.H029.0029.0029090002 000.0 0008800008029.02900298002 00<0 08.000000298008200 0000 08.00.208.00 08.00.00.029.90.00290000. 000.0 0008820008029.00029090002 000.0 020.8.8.20.00.H029.20.08002 000.0 02088200020000.029090002 000.0 020.8. 20.000029208029002 000.0 0208. 00.008029000290002 000.0 000.88. 0.0000298029802802 000.0 2 09 00 08.00.200.00 200900 00088000000000.029090002 000.0 000.88.00.00.H029.0029.0029090002 000.0 mg moA AVOA CA ”GA E24 .0H.8800900.0H.880800.0H88000021880009, .0H8800022880820.2880830.0H880800.. .0H880890.:88000.:88080018830053 202200 00.0 00.00 00.20 0000920 02000.0000.220020000900 0209000000 020 200900 02088200020000.0020.029890002 0000 020.88.20.00.0292080290002 000.0 0208800008029.000298002 0000 0008.8.00.000029002908002 0000 08.20.20.2200.0029.0298000 0000 20.200029802900209 0000 208.820.20.002900020.029890002 0000 020.88.20.20.029.20.080.02 0000 200900 000..88 00..00.0000.0.H029890002 000.0 000. 8. 8. 00. 00. H029.0029.0029890002 0000 C00. 8.. 8.. 20. ..00.0029.0.H029890002 0000 020. 8.. 8. 20. 00 .2000..0.H029890002 000.0 020. 8. 8. 20.00. H029. 0029. 
00298002 0000 020 88 8.00. 00. 0029. 0. 0298002 0000 000 0.88 8..00. ..00 H029.. 0029. 08002 0000 0208 .20.. 20.. 2200. 0. ..H029890002 000.0 0200. 8. 8. .20. .20. 0.29. 0029. 00002 0000 020. 8.0 ..20 20. 20200002009020 0000. 8.0.00. 00200002009020.5330 08. 20. 20 80298029029000. 0000 020.20. 0290002000209 0000 200900 NASH. wAmZO. WZOVKTAZ AA<0+ moA poA E25 020. .8.20.00.H029.20.00290002 0000 02088000000290.0298002 0000 000.88.00.00.H029.0029.00298002 000.0 000.88.00.00.002980020.0298002 000.0 000.88.00.00.029002908002 0000 0 09 00 08.00.2080 20002 008H88H.08H.08H.08H.080.08H.H8309 00 0-020000200902020000 00.202080002080200 0000 002020 000.90.208.00 20u2020 8.Hu0000000020 02000083000.0002000020090230.5002H02000000000000 0 8. 00 20000209020000? 0 02000020090200.0000. 8 . 8 000200200 0000 0 0029 . 0029. 900. 00 . 0029800020 0000 08.000000290029000 000.0 08.0080002900200 0000 8.00.00.88.00000200 0000 00209200 08.200088200082000 0000 08.202088220002000 0000 H 09 00 08.00.2800 80298002023029.0200 0020002000 0132020213020 200020200 0H82000.0Hv0000.0:2200 200020200 0188080029.:88300292188302? .0 H . 88080029. 0 H . 88030029. 0H .82 2029 \8H200\ 202200 2.2.00.20.00300002003300? .080800. 08083000 . 08000000. 8002009020 0005 202200 00.20 2.000: 202200 2.880000.2.8808200218808300.28800000.. AoA E26 020. 820.0.00.H.029.20.00298002 0000 020 20.88 .00.. 00. 0.029. m. H0298002 000.0 000. 88 ..00 00. H029. 0029. 000200002 0000 8H 09 00 08 00. 
20000 8: 09 00 00088000000000.029890002 0000 000.88.000.00.H029.0029.0029090002 0000 000.88.00.008029002900002 0000 000.88.00.00.02980290029002.0000 08.00.00.0029.0029.H0298000 0000 00.000029002900209 0000 0008820000029.00029090002 000.0 020.88.20.00.02920800200002 000.0 0208820002000.00029890002 000.0 020.88.20.00.0292080298002 0000 02088000000290.0020.0298002 000.0 00088.00.00002900290002 0000 8: 09 00 000.88.00.00.00000002590002 0000 00088.00.00.029002980298900; 0000 000.8.8.00.00.0029.0029.H0290002 0000 000.88.00.00.029002908002 0000 08.00.00.0029.0029. 0298000. 0000 00.000029002900209 0000 000.88.00.00.02900002000298.9002 000.0 00088.00.000.0029.0029.H029090002 0000 08.00.00.H029.8.H.900200 0000 08.00.200.00 08.00.00.H029.90.0029v000 0000 0008820008029.00029090002 0000 020.8.8.20.00.H029.20.08002 0000 02088200020000.029890002 0000 MBA NoA E27 22088202022000.529090902 0000 220.88.20.20.529.229.009.02 0000 A2088.20.208029800200200me 0000+ 2 2200000200902000008 . 00. 2200302009020300300 08.20.20.229800200029500 0000 220.20.329.2020000209 0000 8: 09 00 22088202022000.2020.02009.0: 0000 80H 09 00 9088.90.902900800200200me 0000 8: 09 00 9088.90.90.9930.52909020: 0000 90.8.8.90.90.329.229.3259022 0000 90.88.90.908029402900902 0000 9088.90.90.0.029802980290902 0000 90.88.20.208029800208200me 0000+ 22200000200908.5003 . Gm. 2200032009025300300 A 8. 90. 90 . 8.029. H.029. 30200000 0000 90. 90. 32980020000209 0000 8: 09 00 H8.90.90.9200.229.H.0290000 0000 90.908029002930209 0000 90.8.8.90.90002908029090me 0000 908.820.90.229. 0029800200902 0000 9088.90.90.0.0298029802909090: 0000 908.8.90.908029002980290me 0000 820.90.329.9193200 0000 8.00.20va 8.90.90.H.029.9m.80290000 0000 908.820.908.029. 0.00290900me 0000 220.88.20.90.329.2%00202 0000 22088.20.90.290006029090902 0000 93 9: m8 2: CH E28 40 .m0 m0m z 20. 
< 82.2.0.0000009 0000 82.2.0.0000009 0209000000 020 200900 8.20.90.290020202905200 0000 8.20.20.220020202205200 0000 200900 8.00.2000 8.90.90.990020209905200 0000 22020 220.8.8.20.90.290..0._0020.329090902 0000 220.88.20.90.329.208.059.02 0000 220.88.90.908029.0.3290902 0000 290.88.90.90.329.229.009.02 0000 8.20.20.22008029.5290000 0000 220.20. 5298029000209 0000 22088202080290.0020.329090902 0000 20.8.8.5.20.H029.2m.00902 0000 8: 09 00 ANOS. .>A.>O.>>Qm.4.amzhvemwmz AAm2 4440 Awq.o.c.5.>0.mmzh.meEthmwmz 1.50.0 CnOédJmA.VO.X>Q&.U.~AEBVB~H>QE AA0.HAEBJAZBJQuABva—z 30.0 Axd.o.o.wq.>0.¢mzfi.m.qmzhvwmz 4.3.0 A>A.c.o.>A.NO.amzfi.wmzh.qv>mz 4440 00000000 0: m2 E29 902000 0000 0088:9205“: 800920.200 200020200 0000 902000 0 90 90090000220 00000000 2009022 200902 009 00000000 0000 902000 0000900008 0000 902000u9202 2009009 0 00 00000 0020000000 00020009 00300 000000002 000002090008 20902 00 009902 200902 000000200920 20 0220000 00 0002022 200902 000000200920 20 0300 00 0002022 0000 00 09 2009020 250% QmmOmmz0H00209 0900 0920.000020302092029 00>:00209.\2080H000\8H00209 0900 \ 8.08800209.\0H090H00\800209< 208000250209 0900 \20.00000\800209< 8002000209.\ 808800209 0900 \ 8002000209< 8000\800209.\ 820230209 0900 20000200000.802000.\ 00002000 0900 «0002 0000000 880209.3000050300H .80000.802000.80920 80009095000000.2va 200020200 E32 00000.00 80 09 00 302000020000 00 00209200 830209u80000 830209230000 8302092002000 80002092320000 20002020923920 830209200920 2000202 2 09 00 83020780000 830209200000 2302092002000 830209232000 800209200920 800209200920 $000202 0H 09 00 8.00.000202000000050 80020918920 80020203920 20000 0000880000002 2 09 00 87.0002 000 09 00 0802000000 00209200 23000 000.000 2020020 0.00 080 00 0 09 00 3.00.300 0009.2 Hu000202 com ”a «0 A: mam com «on Haw E33 30020209000. 00209200 00 09 00 8.0200002025002000 H+0200u0200 300202000900": 09 00 00209200 0u900900 18220003090092. H-0u0000. 3020000. 
80 09 00 2000 7.900900 0009022200 2-00200 300202170000 3-000202320002020 8202 "00900 00020200020200 0. 00 8.00002 80.0900 3002023020000. 7002022202 00209200 88020928920 83020923920 00209200 00000.20 0H 09 00 0009.00 080 09 00 30200000300000 0009.00 080 09 00 on ma ON mm 2 mg a; new 25 Non Sm E34 Amz22: 0 O. < m0 m052 <2 m mum—2:. < .mO BODQOflm XHmHm2 AA<0 Amdmzémzddddévwg mzfisommsm QZm zmpemm 7:": xucio Cdsfigifxux flown: Sax mom. on 5% sun: mJA. «on on mJL am on on: zmpemm 7:": mazfizoo x5230 00000000000000 mom «on new com mom flow mam mJL an on zmsbmm mazfizoo w+aauau xucio 2+oauo~ A H . Ema“: . ofoux 0.4”: 3“ on a+oauon 1......“ mam 09 00 3 .cm. 3 E c.5m*2.o_.v<+xux 763:. 17: mom on ENE. .oux ”13. an on gnaw mgum mom on zmpemm xucio Civfgifxux . agum «S on .oux fifl. 2: on mJL 2: on H.Aoomfi.oomfi.oowfifi 58182.8:.82.255862.6868686868.83 2. ow mmzéémzzi S.bo.§.3m.3.b< 205325 Eééi mummy»: E43 com mow 8N mam gm «on com 2: «3 OS E44 7:": $2328 xucio 26?“: :.o:m*§._.v<+xux 0:72 «on on Town: Eng m3 9H. 06 3 .3. C B Aiocffiifxux $07.8 3": «cm on :A: .oux "a Sm on A new on ou- zmpemm mpzizoo a+naua~ 1130 CiEfidoEfxux $25.. 1an «8 on 2.an .on .34 «8 on cuan m.E :3 on zmpemm 3._.v<*2.3mu:._.vo .13. an on A: a: mom How mam new mom com 23 mg mo¢ Gov Hon E45 77: 3.03mifiocfxux Trauma 76W“: 1?: So on 8c 09 Co Sdsfzdcfxux 263:. 2+Bu2 Ea": mam on go 0H. 00 8 .am. 3 B 2.03midocfxux :6on 10?“: €72 so on 7?.“ SdEmanQo—vfxux 10?“: 12"“: 3": own on 398me 2...: 5 ENE. sun: .cux n.3, «8 on cuna .33 an on “-mufim an: zmbemm 2mm who mum bum owm mwo coo E46 1:": ~13 82 on ou__ zmaemm 2.3m*2.:52 < 22 $502 .mO gamma—>52 P2Am.Q.Q.§—vqm2m~ AA.H<.U.201—m2mu 15<0 2*2n22 2*2u22 «\AC+EV*EquZ A3».A:m<.ASA<.Ash.25.25.30.255? zoazmza A<2mmB22 2*2 mo 2:2 .20 mmwm=Q mm FWD: .NmmB2Q mm Ewan—2.222852 A2m mm OB NmmhE >2< .2 V N< ~< mmmu Ema mam 0000000000000 E50 .A:2.ASN.AA§.ASxAAEASm.25.30.3321 zoazmza Amm . A222vm 0~29m22>m . A22200 22220» A2N2vm A2um2vm 0222.ng . A5202 Ommfigwm . 322202 Unmhgm . 
A220< QmmOBm m2 mmomme<2 Oumhmefizwm .mO mq<2 ”2:504 M22. 20 22< .mm0225<2 QmmOEm qu.N.m.m.n.0.m.< mmmg A N 5 A 0 EA ...A 5A U 5 . A v u A VA VA 0 A xv Azm fiA $273t¢uS M173": 22.3. S on 7228.392va $3-530": $2-“qu 2&3;qu 2.3 S on zuzz 30.35.23. 206525 12...sz ammoem 302.22 SE59: “0.0.22. 8H.E.S.o.m.<.z.m.zvqm§ mzfipommsm 02m 3.3%.2. 5.23m: mEEmOm 902 a x33: mnemmmicfiomsgzmg H zmbemm EBB 03.14 2.8 FEE: 8.. 0000000000000 E55 AOH . m: . m . HQCO hid—"3 S OH. 00 ~+ku2 3.35270“: 5 On. 00 w+2u2 2.3.2833 Advma§d<+3>u3> diam: NH OH. 00 H+N> A H ..Avquvnd 3.3.32.2“va NH OH 00 323.3%“qu NH OH 00 A+E.n‘= r.m.mam3~: TNXCLHYQWE m 09 00 4+3“: {imaanmvhfi m OH. 00 4.5334 c OH. 00 72*3uqvu5 fimafizvfi MJuA Na 03 S on ma a; Na Na 3 3 00¢ U30?- Q N E56 m>zm .0 2H .4. .mO Ezzm m0zm mm OB NHMHZH j<0 S:.§.o.z.3m>zm mzfibommam azm zmpemm Godzuaoém Towcéuood. wércéuood 5.3 H on 03".. H on 25.2: zoazmza 00000000000000 E57 EN-“ Qifiu 3.8.39;va 2:3 on on {SQWQSZ 8.2.5.2. C9: 9..qu x->? .82 2220. 822200. 822220. 822520. 8222:? 8825.0.8259048382 . 82 EC. 8225. 8228. A 32225. 82i<92920§zoo 822292.. 82222.? .8222>.8222282220.8222m.8822228232»? 28252822282; 2 .8222. 8228 . 8222. 8222. 82532820228 222.82.202.282.82.222.22.82? ..202.202.22.§.~2.~2.@2.22.0.2.28.2.2.2.o.m\mamo\2o§oo 82 2258. 8222226. 82 22>»? . 8822. .5200 . 82 208. 82258 .82 vwmemEEOeazoo no. No. 5. oo. <0. 20. No. 20. 822 . 25200208200 3.3.2.02 2205.22 .2.0.2.2.2205 2.20 28.82.22: 22225988822223 2-2. $323372. 223020022 u «2. Q 8222.22.20.222.202.222.00.2.2.2020302222 2.20 2.2 u 302 8 92 u 302 8 2+2"; 2.3. 8 00 202.3 mm 00 82 8 02. 0032222232 832.22.222.82.0.2.00.2.2.20202028 2.20 223020022 u E. 8+2. $02802 8222 mm 22.225320 22022022220M622 2.2+ 65.26225622262252068.22.22.09.22.22.0.2.02.2225: .220 8 m 02. 00 88.82922: 8 02. 0020222202022 2 822.22.. .0.2.2.22.SE..SE..S>.S2.20.82.22.222222 .220 F4 32222222222222.2222 2022222220 22222522620262.2282.22222622.2222+ .202.202.22.222222222622222.02.20.22.2.2.0.22020202200 32822222.. 
32822222... .82202222282322.82322203222822.32822222.$8822.. 82829828222822; 2 .2822282808232.82322.82323022022200 828.222.822.20.8220222220.32822222028222.2229 .8280.82220522222220.328220.82222203023 202200 822822022.32822222228222.2222? .8322. 2.2200. 822 22022 . 828222 .82 02222202022200 20.20.5.00.20.20.2020.22222225200202.2200 022.20.22.02 22022.22 «0.20.20.20.00.20.20.20 20200.2 80.222.222.222.22.22.202.222 2222220222222 222 $2...." 022.222.2252 2.2022302 2002222222 2222222....2222.222\~.22.*u 2222.....2222.o222,222202 228 $8 2222. 22.22220 22 2222222222 22222202 ..20 22222222*.o22222.<2202 82. 22.202.222.202 22.222222922322228 c8 9.2.2.2222: 22.2.2202 2.02 22 2222222 22 0.2. 222.222....22222222202 82 8.92...." 2220022 20 2 2222.2. 22.222292222252228; n 2202202222 2202.2.22*.xc2.o222\2.22..2 *1 2202202222 20200.2...2222.o222\m2.*u 222*.2o2.o222\m2.*u 222.222.2222 32.... "22*.222.22222\m2.* "2*.222.c222\m2.* "222.222.2222; 22...." 2*.222.c222\222.*u 9.2252232...“ 22.222262222222202 228 22222.22.222.226.22.2262822202 82.. 222822222522222022.2222 2222902228 22222.2 22.022202 o8 3222822202 82 m 0.2. 00 F5 22222.22 8.2.0.222: 22 32.0.2.2222.22.22.2020.220.220.228.228.>28.228.020.2828.: .2220 222222200 8 2-2222222022822228 20202.22 8 00 222222200 2. 2.2282222222822028 222202.72 2 02 222222200 8 22-228222282283228 22222.22 8 02 922222022 N>2+0¥0uz22202 «\A2+22*2u222222 8 82 0.2. 00 22:2-22822222828228 2 22222.22 2 02 22222822.8.2.2.22.82228225825820.2852 2.20 22222.22 8.2.0.2222222 2222.22-28.2.2222922022825882 .2220 2022.200 2 22-22222i22228u3228 22222.2...2 2 00 Q 2 2+2 2*2u222222 8 0.2. 00 320222 80.822.28222822253222.2220.20.82.82222222 .220 2.8200 222 2222222 22 0.2. 222.22: 2222 222222 in. 
0 8.2828882288822252 .220 0 82.222.22.228 88 2222 888.822.8223 0 92222822222 F6 8.2828822888222252 .220 0 82222222202 88 2222 888.822.2223 0 2.022282222222222 22082828828222.2228 202822222 82222.82.8202.82§.822.8222.822.822+ .202.202.2.82.22.22.02.222.02.20.22.22.022222020222200 8222822028.82882228282822»? . 88:22. 2.8200 . 82282028 . 828228 .822 02282822202222.2200 02.20.222.02 22022.22 80 202002 20.202020092208020 202002 80.828.82.802228 2222202228 222 222922 822.82.822282228202835 822.8222.2822282228222.822.820.822.82x<.82222>.2.2.2.8 .220 82.~2.22.Ez.02.22.2.22.02.22.29.>2.0.2.2.222222222 .220 2.2202 022,222.88 22 222 2%.. 0 22222022 .2220 8229222222 022228200 22222 .2222 ..20 202220022 .2200: 22220822 2%.. 0 82.222.222.20238.22220002522222222.28002122202822228200 22222.22 8.2.0.2223“: 82 252222212022823228 «2 22202.22 22 02 25222222222283228 22 22202.22 22 00 2222-222228228220228 8 22222.22 8 02 822.820.822.2222.22.22+ .822082220822208228.8222882228822862222.2822 2.20 F7 82882228228 38:22:22,288": 3828.2 32.2 82 0.2. 008.220.220.22 8282,73 2+22n22 28822.2 222. 8 022 222.222 8 02 8n22 88 0.2. 008.02.08.22 222222.200 828828u222882>>8 8288222838 828222822228": 38282 8.22.2 8 0.2. 008.220.222.22 8282,73 2+22u22 38222.2 2.2n2. 82 02 2.22 82 02 2122222 8u22 222.22.228858880222 0222: 02882882222 mmqm .mO ZOHBUBmm 2.020.020.0292 O 88 02. 0080202. 822 m2 3 F8 82u82 8u22 22222228022822 222222200 88 2288282282228 2288228228 82.22 88 02 88 222822 228.228.2828 82.222 222.2. 02.22 8 2022.22.22 220.03%... 0 82258202 822.2822: 0*2u02 22.22: 0.2702 822082.380 8222222882 88 222222200 8 828828u2222822028 8:822:08 38:32:28": 8 228822282 8.2...22 8 0,2. 0082.20.26.22 80822.12 2+22u22 3822.2 2.2222. 8 02 222.222 8 02 8u22 222222200 8 82082822282228 F9 2.5.27.2 82 82 0.2. 00 255 2+2u222 «\22Avfivuq 82 02. 008.02.220.22 2.22 82 00 222.22.302.22228828822.223 882222.52 2028222222 2.222. 222 2 22022.22 <0.20.20.20.00.80.80.20 202002 22222222 22202.8 2222222 2222.222 2222 2.3.0.022... 
0 A 222 . 2.2. 2.222.. 2. 2. 5.28222 2222220222228 222 220822 8282222202 88 8288222202 882 22222200 88 8208222808 88 82882822082228 2+22u22 2+2": 2.2u2. 88 02 8228222822228 82. 828828A22282228 2+2": 2+22u22 "2. 82. 022 ...2 88 02 8a.: 2.2 0.2 F10 8. 282888. 2288888222222 2220 822.232.2228 2222 888.882.2223 2022228222222 282.222.288.282 2028222222 282882222.88882222283822»? 48222222882222.28828>>2\822\202200 88882222888222... .82288;.8828822.288.820.8882.88882222888822? .2882222..28882.2.A83> 2 488.282.885.882.288822.88223082028200 882822.8828820.2828822020.888520.882882229 .8882.8828028282020.88:220.88222205282022.8200 22232222022200 80.80.200020202020.82228252002022.8200 82222.82.8202.8222.82.82.22822.822+ .202.202.22.2.8282.82.02.22.02.80.82.2.2.0.222202022200 02.820.82.02 2202222 080 202002 «0.20.20.20.00808020 20200.2 88200822222680.822.002.222.022 2222202228 222 82882228222202 82828282222222.2202 22222.22 32:22..820888822223 2.25 8. 282888. 2288888222202 .220 82.222222202 2222 8888882222: 282222822222 25222 O mmwm D F11 2222282282222 2222 + .2252 u 02888220822825.2822 .220 8882 02. 0080202822 888 82.2 8.8.2.2. 2222.. 2.282222 2222 u 2 2222.22 2202 2202222222..2.22.2.822.8228.2822 .220 888 02. 0032020282 88222822 822020 22232.22 2.8222202 28 288.882.2223 888 2.8.8. 2028 - 302220282282222.222.202.822 2.20 8.8.8. 228 - 92220282282822.222.202.822 .220 888 02. 003202.028“: 22.8.8. :8 - 92220282282822.222.222.822 2.20 882.822.2223 82 22.8.8. 30222028282222.220.262.822 2.20 8.8.8. 9222028282822.220.222.822 .220 82 02. 0032020282 22.8.2.2. 9222028282822.220.222.822 2.20 82 82.2 2.8.2.2. ..2822 222 2222282282222 2222 + ..22222 ... 02888226223822 .220 82 02. 0080202822 8 82.2 8.8.2.2. 2 22.222.28222222 2222 u 2 222222 2202 22022222282222.5822 2.20 8 02. 0022090282 9.8222822 822020 222223 82.22202 882. 88.82.2223 82.22.22.222.002.520.802 2.20 2ue2 2n222 8 02. 0080202222 F12 8820082222 22222 88 222822 888.882.2223 88 02. 
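The FORTRAN listing above carries out the maximization of the log-likelihood in equation (19) for the model Σ_np = 11'⊗Σ_b + I⊗Σ_w of equation (14). Because the scanned definitions of S_w and S_b (equations 17 and 18) are only partially legible, the sketch below is a minimal modern reconstruction in Python/NumPy that evaluates log L directly from the raw data, using the standard pooled within-groups and between-groups sums-of-squares-and-cross-products matrices; the function name and argument layout are illustrative, not Schmidt's.

```python
import numpy as np

def hierarchical_loglik(y, mu, sigma_w, sigma_b):
    """Log-likelihood of balanced hierarchical data under
    Sigma_np = 11' (x) Sigma_b + I (x) Sigma_w  (eq. 14 in the text).

    y       : data array of shape (m, n, p) -- m groups, n subjects, p measures
    mu      : population mean vector, length p
    sigma_w : within-groups covariance, (p, p)
    sigma_b : between-groups covariance, (p, p)
    """
    m, n, p = y.shape
    ybar_i = y.mean(axis=1)              # group means, shape (m, p)
    ybar = ybar_i.mean(axis=0)           # grand mean, shape (p,)
    sigma_star = sigma_w + n * sigma_b   # covariance of the group means, times n

    # Pooled within-groups SSCP: sum_i sum_j (y_ij - ybar_i)(y_ij - ybar_i)'
    dev_w = y - ybar_i[:, None, :]
    W = np.einsum('ijk,ijl->kl', dev_w, dev_w)
    # Between-groups SSCP: sum_i (ybar_i - ybar)(ybar_i - ybar)'
    dev_b = ybar_i - ybar
    B = dev_b.T @ dev_b
    d = ybar - mu

    _, logdet_w = np.linalg.slogdet(sigma_w)
    _, logdet_s = np.linalg.slogdet(sigma_star)
    quad = (np.trace(np.linalg.solve(sigma_w, W))
            + n * np.trace(np.linalg.solve(sigma_star, B))
            + m * n * (d @ np.linalg.solve(sigma_star, d)))
    return (-0.5 * m * n * p * np.log(2 * np.pi)
            - 0.5 * m * (n - 1) * logdet_w
            - 0.5 * m * logdet_s
            - 0.5 * quad)
```

As the text notes, the maximum likelihood estimates of Σ_b and Σ_w are the values at which this function attains its maximum, and maximizing the logarithm yields the same estimates as maximizing L itself since the logarithm is monotonic.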
0082222222 2028822280827222 22222200 888 28222220.82280.822.820.822.822+ .0.2.2.228228822228222.822820.822.822.82222208 2220 2820222208 822020 22232.22 228222222228 2.2222202 82. 288.882.8223 222220.289, .2.0.2.2.02.2.22.22..22..>.2.0.2.2.222208 2220 2820222208 8.22020 2222.23 228222222228 2.2222202 82. 288.822.2223 888 02. 00 2080202822 82. 282.822.8222.822282028222822+ .0.2822.2..8222..82>.822.82082282282222022 220 282.2.222222222226228.22..>.2.0.2.2252022 220 82. 02. 00 2002.02.82 88 22.8.8. 8028 - 22022202822822.228220222222822 220 28.8.8. 22.28 - 282222028228282282522222822 220 88 02. 002220902822 22.8.8. 828 .. 22282202828282282222222822 2220 882.882.2223 8882 22.8.8. 22022202828282282202022285 220 28.8.8. 22222202828282282320222822 220 8882 02. 0022208022222 22.8.2.2. 2822202828282282852282822 220 8882 82.2 22.8.2.2. ..28222222 F13 82 22222200 82 2228282208 n 2222020 2.22 2.22 82 02 82 22222200 8 2228082208 n 2282220 2+2u2 2.2.2 8 02 8.2 8.2828828888822222 220 0 2822222222202 8888 2222 28888822223 0 22082822222 2282.22822220228880.2223? 222.222.2322..2222222222.232.230.2282.22:22 202822222 80.80.20.00.20.20.80.220.8222.82\2200\202200 282.80.82.02 22022.22 20.20.80.220.00.80.80.20 202002 22222022622226.2222.22..22..>.2.0.2.222.282.2208 2222202228 222 $2.22 u EOQ< 2222 20 82220292828222.8222 20 88222000 20 8822.....2282.222282m<2202 888 8.82.2.2 ...n 2222 22222222220228.22828222288222282 2022222 20 8222022 8.82.2 2223 222208 22082828222222 ...20 88222000 20 2.8222228222322228 888 22.8 .. 22028 n 82222822828223.2228 882 222822 52.222222888222223 22222822220222 F14 3222,2200 2222.82222u22va 2229*AAquAAvU Amvwwnvkwvxxoumzb Thug 2.72. on CO 2.23 on OD and pm Oh. 020320.802.va mDZrHZOO Cvk>0>3>k02flqvmuaqvm Thug 2.7.2. mm on 2.2mm mm CG and pm Oh. OUAmO.BOZ.va szmBZOU 232.020.82.28222328. 78.74 2.2.1.2. cm on 0.2"“ cm CG and NN OH. 
OUCwOQbZQE MDZHHZOO vawwoidw§£$E€e2m <3: ASHE.:Eegsmeég.33:30.25.Sx<.:§<.2052sz no.8.8.oo..m.U.m.x<.>.m.U.m.x<.>>O4mm QMZOHHHPmAwm Emmi/7E in; A>.~<.2.xmaémndfiddmmd.m.<.z.2§mzm& mzfipommsm azm zmaemm 828350..“ A0. HafiACQCASmEmz zmaemmAxoéoz. 0..: AmgufiACmEASmtmz mbzfizoo AégudACBASmfimz S 2. ooANoéoz. 0.: AxNzJLACEASmEmz AZEJLACSASEEB m 0e 0208.502.va AEzJHHACmVASmEms m 2. ouAmoéoz. 0..: Aaz;u_.A:x5A$mEm3 00000000000 OCH 3 F17 22222289022232 A2222.m22.322.>.2.2.204mzm4 44<0 22222200 4 32-32202 22.72 4 02 Ao.o.2.2.2<.2.2.2.204m§ 4420 35.7.2422.2.2.2.2.2042222 44<0 2222228 . 2.2 . 222022 2222.222.522.252.02042222 44<0 2*ZHEZ 2*zuzz 222202722 3.222292822822222 4420 0 82.5?22202 $8 2222 823.8252; 0 42222222222 202.3022.2042.32.22.202.200.302.202 202222222 <0.20.20.20.00.20.20.40 4202004 gage-0.33% 4222 222 42222 2225202222 2445 225202 2222. 202.540 2 20 2 20 22.0224 22.2 22224 <20022 0234.20 22 2220822222 22 9222209022, 442222222 2 E2 20 232 20 2202<4 222. 22.224 22022 02244420 22 2220622222 22 2.222 .5222: 422222224 ~< 222 22222 «20022 022445 22 2220522222 22 9.222.222.4022 422222222 2 232025202 22 022.342 222. 2224222 on 2222 222222222 22 0,2 0222.52 222053.222 20 9222222922 u 222 22224 2223 2220022 222. 02. 22222222229 2 4022.200 22< 22420222 2222242200 22 0222.52 2222222 2222. 2224 22 2222242 422022 >2< 22 222 0000000000000 2222 828.8292: 42222222222 22:32.21 202222222 20.202020092202040 44202004 “origin." 250025202 24 .2 2224222 on 2222 22 2223 2220022 02244420 22.2 02. 222222222229 22 4022200 22< 2 2222222200 22 <.m22 2<2e 2224 22 2222242 450022 222 22 222 20924222222922 2 20022 02444<0 22 2220522242 22 2202.222222 22222 42222222 2 2020222 < 9.. 22200222222222 < 2 2322.20 22 2.222.242.2223 2220222222222 22 02. 242.22 < 222:: 20 22220 2 24222224 222022 52.3.: 022,225,222 2222222 3235* 22222222.».2.4225222 2229202222 222 222222 322252;: Aod.7.222.222.2042; 44<0 22222200 2 A20? 
+ A: u A2022 $24,232 2.72 2 02 III-M 20222200 2 5221022 22.72 2 02 2.262322222042202 44<0 20222200 2 302.502 22.42 2 02 A4.2-.o.2.2.2<.2.2.2v4m§ 4420 F18 0 0000000000 F19 22.3 82 02 2.qu202 2...A3>-uA<202 43.222 2.22. 82 02 2*A3>-A2202HA<202 22222 22222 2.22. 022 02 2*vaum 4+22u22 2.2.2 A22 02 4n22 ou<2 2525; 4.25.2222 2.42 A22 02 W22 23.2 n 2 “wanna A22 02. 00 822.22.20.22 A202n2 2.74 9.2 02 82 02. 00 2.02.2022 2.4 u 2 A20.AS2500.202.222.2022 202222222 20. 20. 20. 00. 20. 20. 20.20. 2222 . 2252002022200 0220.22.02 22022.22 20.202020092202020 4202004 2.20.22.22.29.22..>.2.0.2.2225225 2222202222 222 3,222.22. 22.222222 22222202 2.02 22 0222.22 222.222.222.2220822202 2 222922 W2222 2.2.4 3.8 22.223 222 2222.22 22 23.2 ... A32 82. 02. 00 822.22.20.22 A3232 W4 82 2222.22 321202 82 F21 A 4.. 2328 . 2822822222 44<0 O A 22.23.2228 mama 2224 882.8292: 422222222224 A334 502502.202.A32.AS2502.302.300.202.202 204222242 44920290206060.2040 44404004 §§§* Jazz MZHFDOMMDm qu<0 WZHEDOm 92H. ZOHBDAmm . AEXEvQ OHmBmZSVm . CAN—2:4 QmmOBm my wMOHmHaV—z DAmBmE—zwm m0 mq4m.N.m.m.a.0.m.< mmmEs A N 20 A 0 20A ,2 20A 0 20 A v u A 0A 0A 0 A 20 A..2 39.2 592 3 9203:.sz Unmhmzzwm QmZOHBABm0A922.82222 922254 .82222 22 09 0030902224 A22.4u4.A:20A922.82222 92282132222 92.422400292252222 92.284 . 82222 on 09 02 08.90222 22249200 8.73222222402 o.o u A402 4+4n4 2.42. 8 02 2 .42 8 02 ou4 22 09 00 92.422402292252222 9228452222 A: 09 02 20.90222 A02. Hum. Cvfivahifimvflmwmm m2 ~02 03 mm on mm on A: F25 . A 22420. A8300. A 22320. A 223220. A 2.224220240204200 20.20. 40.00. 20. 20. No. 20. 2224 . 242200202200 A4422.222.2422.2422.82.24.22.o42.c42+ .202.82.22.222.22222.22.220.22.22.022220202200 $282429.2232429484024? £842.42. 223240. 328242. 2282422. 2282422. 22829.. 22829. A8442. A8442. 228.12 2.282.. 22822.. A228.22\402\202200 2429. 2429. S2. 22.. A440. 242. 2.42.2. 2422 2222924 29. 29. >. 2. 2. 2. 22. 
[Appendix: FORTRAN source listing of the estimation program, pages F26 through F64. The scanned listing is too badly degraded to reproduce; only fragments remain legible — the page markers F26–F64, statement numbers, DO loops, GO TO branches, FORMAT statements, COMMON blocks, DOUBLE PRECISION declarations, and CALL statements to a set of matrix subroutines (a SUBROUTINE header naming MOV, AZ, and related routines is partially recoverable).]
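The listing above implements the likelihood evaluation described in the text: equation (15) applied to m independent np-dimensional observations whose covariance matrix has the structure of equation (11), Σ_np = 11′ ⊗ Σb + I ⊗ Σw. As a rough modern illustration of that computation — not a reconstruction of Schmidt's code; the function name and NumPy formulation are my own — a direct evaluation might look like:

```python
import numpy as np

def hierarchical_loglik(y, mu, sigma_w, sigma_b):
    """Log-likelihood of hierarchical data (eq. 15) under the
    covariance structure of eq. 11: Sigma_np = J (x) Sigma_b + I (x) Sigma_w,
    where J is the n x n matrix of ones.
    y: (m, n, p) array -- m groups, n subjects per group, p measures.
    mu: (p,) mean vector; sigma_w, sigma_b: (p, p) covariance matrices.
    """
    m, n, p = y.shape
    # np x np covariance of one group's stacked observation vector
    S = np.kron(np.ones((n, n)), sigma_b) + np.kron(np.eye(n), sigma_w)
    sign, logdet = np.linalg.slogdet(S)      # stable log-determinant
    dev = y.reshape(m, n * p) - np.tile(mu, n)   # deviations from 1 (x) mu
    # sum_i dev_i' S^{-1} dev_i without forming S^{-1} explicitly
    quad = np.sum(dev * np.linalg.solve(S, dev.T).T)
    return -0.5 * (m * n * p * np.log(2.0 * np.pi) + m * logdet + quad)
```

Using `slogdet` and `solve` avoids an explicit matrix inversion; a production routine would instead exploit the identities |Σ_np| = |Σw|^(n−1)|Σw + nΣb| and the corresponding block form of the inverse, which is what reduces equation (15) to the trace expression of equation (19).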