EFFECTS OF DISCREPANCY MODELS AND ELIGIBILITY DECISIONS ON STUDENT SELECTION IN THE DIAGNOSIS OF LEARNING DISABILITIES

BY

Karen Ann Payette

Dr. Harvey F. Clarizio, Advisor

AN ABSTRACT OF A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Counseling, Educational Psychology and Special Education

1993

ABSTRACT

EFFECTS OF DISCREPANCY MODELS AND ELIGIBILITY DECISIONS ON STUDENT SELECTION IN THE DIAGNOSIS OF LEARNING DISABILITIES

BY

Karen Ann Payette

Concern over the growing numbers of students identified as learning disabled has led school districts to examine the criteria for diagnosis and the means by which they are operationalized. Two highly recommended methods for determining a severe discrepancy between ability and achievement, a key criterion in LD diagnosis, were applied to a sample of 344 students to determine how a change in method might influence the rates and characteristics of students meeting this criterion. Agreement between method and eligibility decisions was also examined, as well as student characteristics that might influence decision-makers to find a student LD. The results indicate an increase in numbers when a regression method is used over a simple difference score method. When the change, however, also included moving to a more severe cutoff score, as proposed in the intermediate school district studied, the pattern reversed and a 20 percent decrease was observed. While IQ correlated with the discrepancies when the simple difference score method was used, no correlation was observed when regression was employed, adding to a growing body of literature that suggests regression may be a more equitable method for calculating severe discrepancies. Contrary to other published work, neither method resulted in disproportionate racial representation among those meeting the severe discrepancy criterion. A second major objective of the study involved comparing the IEPCs' eligibility decisions against the severe discrepancy criterion.
An agreement rate of 75 percent suggests a greater reliance on the severe discrepancy criterion than previously reported. Agreement was the same regardless of the method used. When examining those students "misclassified", results indicate that IEPCs may be swayed by a student's IQ or achievement levels, alone, in the decision-making process. Overrepresentation of white students and students in the later elementary or secondary grades was observed among those who demonstrated a severe discrepancy but were found ineligible. Among the students found eligible without a severe discrepancy, a disproportionate number were female.

ACKNOWLEDGMENTS

The author wishes to extend a very sincere thank you to Dr. Harvey Clarizio for serving as committee chairman. His guidance, support and encouragement throughout the completion of this dissertation have been far beyond expectations and will always be remembered. Appreciation is also extended to Dr. Deborah Bennett for being a source of inspiration and sound advice along the way, to Dr. Susan Phillips and Dr. Frank Floyd for their valuable comments, and to Christine Schram for always being there when I needed her. To my husband, Peter, I would like to express heartfelt gratitude for the love, patience and pride he has shown throughout my doctoral studies. Without his support, this research would not have been possible. Finally, a special word of thanks goes to my children, Jennifer and Pete, for making me feel truly blessed.

TABLE OF CONTENTS

                                                                    Page
LIST OF TABLES .................................................... vii
I. INTRODUCTION ..................................................... 1
II. REVIEW OF THE LITERATURE ........................................ 4
    A. Severe Discrepancy ........................................... 4
    B. Race and a Severe Discrepancy ............................... 16
    C. IEPC Decisions and a Severe Discrepancy ..................... 22
    D. Summary and Implications for Current Research ............... 32
III. METHODOLOGY ................................................... 36
    A. Subjects .................................................... 36
    B. Measures .................................................... 39
    C. Formulas .................................................... 41
    D. Procedures .................................................. 44
    E. Research Questions .......................................... 47
    F. Data Analysis ............................................... 49
    G. Limitations ................................................. 51
IV. RESULTS ........................................................ 54
    A. Identification Rates ........................................ 60
    B. Effect of Method on Ability Groups .......................... 63
    C. Effect of Method on Race .................................... 69
    D. Eligibility and a Severe Discrepancy ........................ 72
    E. Eligibility Without a Severe Discrepancy .................... 78
    F. Eligibility and Race ........................................ 80
    G. Eligibility and Gender ...................................... 83
    H. Eligibility and Grade Level ................................. 85
    I. Eligibility and Achievement ................................. 91
V. DISCUSSION ...................................................... 99
    A. Identification Rates ....................................... 100
    B. Intelligence Factors ....................................... 103
    C. Racial Factors ............................................. 105
    D. Eligibility Decisions ...................................... 107
VI. CONCLUSIONS AND IMPLICATIONS .................................. 119
APPENDIX A: Data Collection Form .................................. 125
APPENDIX B: Database .............................................. 127
LIST OF REFERENCES ................................................ 151

LIST OF TABLES

Table                                                               Page
1. Sample Description .....................
............. 38 2. Pearson Product Moment Correlation (r) 10. 11. between Achievement Tests and Achievement Clusters from the WJ-R OOOOOOOOOOOOOOOOOOOOOOOO 000000 55 Number of Students Meeting the Severe Discrepancy Criterion Using Only the Standard Battery of the WJ-R by Gender, Race and Total Sample ...............57 Number of Students Meeting the Severe Discrepancy Criterion Using the Standard and Supplementary Batteries of the WJ-R by Gender, Race and Total Sample ........................................57 Agreement Between Levels of Achievement Testing (Standard Battery vs. Standard and Supplementary Batteries) in the Selection of Students by Total Sample, Race and Gender using the Simple Difference Method - 15 Pt. Cutoff ...................58 Agreement Between Levels of Achievement Testing (Standard Battery vs. Standard and Supplementary Batteries) in the Selection of Students by Total Sample, Race and Gender using the Regression Method - 22 Pt. Cutoff ....................... ....... 58 Change in Identification Rates as Method and cutOff value Change O.......OOOOOOOOOOOOOOOOOOO ...... 61 Agreement Between Methods at Different Cutoff Values in the Selection of Students .................64 WISC-R FSIQ Intervals for the Referral Sample by Frequency and Percent OOOOOOOOOOOOOOOOOOOOOO0.00.065 Point-Biserial Correlation Between IQ and Severe Discrepancy Criterion by Method and Cutoff Value ....67 Mean FSIQs for Students With and Without a Severe Discrepancy by Method and Cutoff Value ......... ..... 68 12. 13. 14. 15. 16. 17. 18. 19. 20. 21. 22. 23. 24. 25. Number and Percent of Black and White Students Meeting the Severe Discrepancy Criterion by Method and Cutoff Value .............................70 Agreement between IEPC Eligibility Decision and Eligibility Based Only on the Severe Discrepancy Criterion ......................... ...... 73 Comparison of Eligibility Status to Severe Discrepancy Criterion under Current and Proposed Methods and Cutoff Values by Frequency and Percent ..75 Comparison of Eligibility Status to Severe Discrepancy Criterion under Current and Proposed Methods and Cutoff Values by Mean WISC-R Scale IQ ...76 Comparison of Eligibility Status and Severe Discrepancy Criterion under Current Guidelines by RaceOOOOOOOOOOOOO......OOOOOOOO0.......0000000000081 Frequencies of Students Showing a Severe Discrepancy Under Current Guidelines by Race and Eligibility Decision .......................82 Frequencies of Students Not Showing a Severe Discrepancy Under Current Guidelines by Race and Eligibility Decision ............ ........ ...82 Comparison of Eligibility Status and Severe Discrepancy Criterion Under Current Guidelines by Gender ...O...O.......OOOOOOOOOOOOOOOOO.....00000084 Frequencies of Students Showing a Severe Discrepancy Criterion Under Current Guidelines by Gender and Eligibility Decision ..................86 Frequencies of Students Not Showing a Severe Discrepancy Under Current Guidelines by Gender and Eligibility Decision ............... ...... 
86 Students Found Eligible and Ineligible by IEPCs by Grade in Frequencies and Percents ................87 Comparison of Eligibility Status and Severe Discrepancy Criterion Under Current Guidelines by Grade Level .0.........OOOOOOOOOOOOO0......00.0.0089 Frequencies of Students Showing a Severe Discrepancy Under Current Guidelines by Grade and Eligibility Decision ............................90 Frequencies of Students Not Showing a Severe Discrepancy Under Current Guidelines by Grade and Eligibility Decision .................... ........ 90 viii 26. 27. 28. 29. 30. 31. Comparison of Eligibility Status and Severe Discrepancy Criterion Under Current Guidelines by WJ-R Mean Achievement Scores .....................92 Analysis of Variance of WJ-R Reading Recognition Student Achievement Scores by Eligibility Status and Severe Discrepancy Criterion ....................94 Analysis of Variance of WJ-R Reading Comprehension Student Achievement Scores by Eligibility Status and Severe Discrepancy Criterion ....................94 Analysis of Variance of WJ-R Math Calculation Student Achievement Scores by Eligibility Status and Severe Discrepancy Criterion ....................95 Analysis of Variance of WJ-R Applied Problems (Math) Student Achievement Scores by Eligibility Status and Severe Discrepancy Criterion ....................95 Analysis of Variance of WJ-R Broad Written Expression Student Achievement Scores by Eligibility Status and Severe Discrepancy Criterion .............96 ix I. INTRODUCTION Within our schools, learning disabilities (LD) continues to be the most frequently diagnosed and rapidly growing handicapping condition of all the special education categories. Since the inclusion of learning disabilities as a new disability in 1976-77, the number of students served under this category has grown by 170 percent. The relative proportion of these students, as a function of the total number of children served in special education, increased from 24.9 percent in 1976-77 to 50.5 percent in 1990-91, exceeding any other disability (U.S. Department of Education, 1992). Given these facts, the criteria for diagnosis and the means by which they are operationalized are continually under scrutiny by local districts who seek to provide services to those students who are "truly" learning disabled while avoiding overidentification and inappropriate LD placements, which drain limited resources from other programs and students. When establishing criteria for a LD diagnosis, most states (86%) have included the existence of a severe discrepancy between achievement and intellectual ability in one or more specified academic areas as a necessary, but not exclusive condition for determining a student to be learning 1 disabled (Mercer, Sears, Mercer, 1990). Methods for quantifying a severe discrepancy between ability and achievement have been the subject of much debate. More recently, attention has focused on the influence of various methods on the number and characteristics of students who receive a LD label. The purpose of this research is to apply two of the more highly recommended models for determining a severe discrepancy to data already collected on children referred for possible learning disability services. Will one method identify more students as having a severe discrepancy than a second method? Will the two methods systematically favor different ability groups? Will racial groups be affected differentially? 
How will the results of each method compare with the Individual Educational Planning Committee's (IEPC's) decisions regarding eligibility? A review of the literature will identify a number of studies that have compared various models for determining a severe discrepancy. Several studies have considered the effects of one formula over another on racial representation. Of those studies considering race as a factor, only one study (Evans, 1992) has consistently used individually administered intelligence and achievement tests and age-based standard scores, which are two standards for input or test quality commonly advocated by such measurement experts as Reynolds (1990) in the assessment of a potential severe discrepancy. In addition, when race was considered as a variable, data were collected from only three geographic areas; Florida, Indiana, and Arkansas, which limits the generalizability of the research findings. In addition to using quality input data, this study will provide information on a sample of students referred in a state other than those previously studied and will broaden the data base from which generalizations might be formulated in the future. It will extend the research by comparing the IEPC’s decision for eligibility with the finding of a severe discrepancy using each method, thereby attempting to draw conclusions regarding the influence of a severe discrepancy, as well as other student characteristics, on the final decision for special education services. II. REVIEW OF THE LITERATURE The review of the literature will focus on providing background information in three areas of research pertaining to the questions being addressed in this reseach project. First, a discussion of the severe discrepancy component for determining a learning disability will include a brief historical perspective and specific formulas presented in the literature for determining if a child is achieving commensurate with his age and ability. Second, the effect on minorities of the various formulas used for determining discrepancies will be examined based on the findings from previous studies. Third, comparisons between the IEPC's decision to find a student eligible and the presence of a severe discrepancy, by either a simple difference score method or a regression method, will also be reviewed in an attempt to better understand the decision making process and characteristics and conditions that affect it. A. Severe Discrepancy Recently, the use of a severe discrepancy between ability and achievement to determine the need for special education has been criticized. It has been referred to as a popular tool to reduce incidence rates of learning disabilities, while creating a false sense of objectivity and precision among diagnosticians and neglecting other 4 criteria for identification (Hammill, 1990; Chalfant, 1989, Algozzine and Ysseldyke, 1987; Council for Learning Disabilities, 1986). Nonetheless, as Reynolds (1990) noted, when the rules and regulations for the Education of the Handicapped Act (EHA), now known as the Individuals with Disabilities Education Act (IDEA), were being developed, the only consensus regarding definition or characteristics of this thing called LD was that it resulted in a major discrepancy between what one would expect academically of LD children and the level which they were actually achieving (p. 573). 
Mercer and his colleagues (1990) continue to find this consensus in their survey of State Departments of Education, stating, "It is accurate to say the states are currently in agreement on the importance of the discrepancy component for identifying LD students" (p. 151). Since the passage of EHA, now IDEA, the U.S. Department of Education has attempted to provide guidance in determining a severe discrepancy by proposing various formulas. Some of the earlier formulas included age and grade equivalents that were ultimately rejected, primarily because of their mathematical inadequacies (Reynolds, 1985; Wilson & Cone, 1984). Currently, standard-score comparison methods are generally considered more accurate in defining discrepancies than age or grade scores, and more states are mandating their use. Mercer and his colleagues (1990) found in their survey of State Education Departments that a total of 18 states specifically specify in their guidelines that standard scores are to be used in lieu of deviation from grade level methods and expectancy formula methods, using grade and/or age score differences, to determine a severe discrepancy. This represents an increase of seven states when compared to results of an earlier survey conducted by Frankenberger and Harper (1987). Two of the more highly recommended methods will be presented in detail. 1. The Simple Difference Score Model The simple-difference score approach defines as the appropriate discrepancy score the simple difference between an obtained aptitude or intelligence score and the obtained achievement score when both measures are expressed on a common scale (Reynolds, 1990). Both the IQ and achievement scores frequently are expressed on a standard score scale with a mean of 100 and a standard deviation of 15, allowing for a simple and direct comparison. With this procedure, a severe discrepancy is based on a criterion level described in standard score units, such as 15 points. The ease with which it can be employed and its intuitive appeal make this method probably the most popular (Evans, 1992; Michigan Association of Learning Disabilities Educators, 1992). Although the simple-difference score method is considered more accurate and statistically sound than age or grade scores, it is criticized for not taking into account measurement error and the effects of regression toward the mean. In order to reduce the chance of measurement error, Hanna, Dyck, and Holden (1979) introduced a standard score comparison method using T-scores and a graph, into which the reliability of the two tests are entered to determine the standard errors of measurement of the difference in T-score units. Reynolds (1981) attempted to establish that the discrepancy was not due to chance or errors of measurement by expressing the scores as z-scores and dividing the z- score difference by the standard error of the difference score. Although these procedures addressed the issue of measurement error, they also introduced more esoteric, derived scores which add confusion for teachers and test administrators who are accustomed to a standard score with a mean of 100 and a standard deviation of 15 (Cone & Wilson, 1981). Bennett and Clarizio (1988) recommend that if, in practice, district administrators find themselves determining a fixed cutoff level, such as 15 standard score points, it should be at least large enough to ensure a statistically significant difference. 
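To make the logic of that recommendation concrete, the following sketch applies the simple-difference criterion and then checks a fixed cutoff against the standard error of the difference between two obtained scores. It is a minimal illustration only: the 15-point cutoff, the example scores, and the use of the classical formula for the standard error of a difference are assumptions of the example rather than any district's procedure, and the function names are hypothetical. The reliability values of .96 and .92 correspond to the WISC-R Full Scale and WJ-R ACH figures reported in the methodology chapter.

```python
import math

def simple_difference_discrepancy(iq, achievement, cutoff=15):
    """Severe discrepancy flag under the simple-difference model, with both
    scores on a common standard-score scale (mean = 100, SD = 15)."""
    return (iq - achievement) >= cutoff

def minimum_reliable_difference(sd=15.0, r_xx=0.96, r_yy=0.92, z=1.96):
    """Smallest difference exceeding chance at the chosen z level, using the
    classical standard error of a difference: SD * sqrt(2 - r_xx - r_yy)."""
    se_diff = sd * math.sqrt(2 - r_xx - r_yy)
    return z * se_diff

# A 17-point deficit meets a 15-point cutoff.
print(simple_difference_discrepancy(100, 83))   # True

# With highly reliable measures, the reliable-difference floor is about
# 10.2 points, so a 15-point cutoff is at least statistically significant.
print(15 >= minimum_reliable_difference())      # True
```

Run on these hypothetical values, the sketch simply shows that a 15-point fixed cutoff clears the roughly 10-point floor implied by the example reliabilities; with less reliable instruments, the floor rises and a small fixed cutoff may no longer represent a dependable difference.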
Reynolds (1990) also points out that educators often make the mistake of assuming that the standard deviation of the measures used, usually 15, is also the standard deviation of the difference scores. Difference scores have their own distribution and their own standard deviation. If two scores are positively correlated, as with intelligence and achievement, the standard deviation of the newly created distribution will be significantly smaller than that of the two original distributions. School districts attempting to predict the number of students who will be identified, based on the standard deviation of the univariate distributions, will miss the desired frequency significantly. A more central issue with the simple-difference score model concerns regression effects. By not considering regression of IQ on achievement, theory suggests that the simple-difference score model will systematically overestimate the frequency of LD among those with above- average ability and systematically underestimate the frequency of LD among those with below-average abilities (Reynolds, 1990; Wilson 8 Cone, 1984; Thorndike, 1963). The procedure, therefore, could be viewed as discriminatory in that all persons do not have an equal chance of having a severe discrepancy. Studies bearing on the empirical validation of these theoretical assumptions will be discussed on pages 12 to 22 and will be a focus of the current research. 2. Regression Discrepancy Model The regression discrepancy model has been identified as one of the most statistically adequate models for determining a severe discrepancy (Chalfant, 1989; Reynolds, 1984; Wilson & Cone, 1984; Thorndike, 1963). In comparison to a simple difference score model, the regression discrepancy model utilizes the mathematical principle of regression toward the mean to more accurately define the discrepancy. Regression toward the mean refers to the tendency of extreme scores on one measure to be less extreme on a second related measure and is the result of imperfect correlation between the two measures. Students are not expected to have achievement scores exactly matching their IQ score. Such an expectation would exist only if the correlation between the two measures was perfect, or 1.00. Rather, expected achievement is defined as the mean achievement score of students with the same IQ. The mean achievement score can be determined mathematically by knowing the correlation between the IQ test and the particular achievement test used. In general, the correlation between intelligence and achievement tests commonly used in LD diagnosis range from .5 to .7. The effect of the regression phenomenon can be illustrated further by comparing it to the simple-difference score approach at several IQ levels. Using the simple- difference score model, students earning a mean IQ of 120 would be expected to earn mean achievement scores of 120. Using the regression approach and an IQ-achievement test correlation of .6, children with an IQ of 120 would be expected to earn a mean achievement score of 112. The simple-difference score would identify eight additional 10 points toward a severe discrepancy over the regression approach with these high IQ students. Students with an IQ of 80 would be expected to earn mean achievement scores of 80 using the simple difference score model, but 88 using regression. In contrast to the high IQ students, the low IQ students would be awarded eight less points toward a severe discrepancy when regression is not employed. 
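The arithmetic behind these expected scores can be written out directly. The sketch below is a minimal illustration assuming all scores are expressed on a common scale with a mean of 100 and a standard deviation of 15 (so that the ratio of standard deviations is 1) and an IQ-achievement correlation of .6; the achievement score of 97 and the function name are hypothetical, chosen only to reproduce the figures above.

```python
def expected_achievement(iq, r_xy=0.6, mean=100.0):
    """Regression-based expected achievement: the mean achievement score of
    students with the same IQ, when both measures share a common
    standard-score scale (mean = 100, SD = 15)."""
    return mean + r_xy * (iq - mean)

print(expected_achievement(120))        # 112.0, not 120
print(expected_achievement(80))         # 88.0, not 80

# Hypothetical student with an IQ of 120 and an achievement score of 97:
print(120 - 97)                         # 23 points under the simple-difference model
print(expected_achievement(120) - 97)   # 15 points under the regression model
```

The eight-point gap between the two discrepancy values in the last two lines is exactly the "eight additional points" the simple-difference model awards the high-IQ student, and the mirror-image shortfall for the student with an IQ of 80.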
Regression is a very general and real phenomenon (Thorndike, 1963). For those who find simple-difference scores intuitively appealing, however, regression might seem more like an irrelevant statistical abstraction. In the minds of some, it becomes a manipulation for qualifying larger numbers of low-IQ students as LD. Considering earlier definitions of learning disabilities that required "adequate" intelligence (Johnson & Myklebust, 1967), placement of students with below-average and borderline intelligence in LD programs might seem inappropriate. A survey by Kavale and Reese (1991) of the perceptions of 547 LD teachers in Iowa revealed, in fact, that 80% of the respondents believe that LD is "somewhat" or "almost always" associated with average or above average intelligence. This line of thought may explain the hesitancy of some practitioners to move to a regression approach.

In conclusion, Reynolds (1990) offers a regression model for determining a severe discrepancy. The initial step involves calculating the expected achievement score ($\hat{Y}$), based on the student's IQ, using a standard regression equation:

$$\hat{Y} = r_{xy}\,\frac{SD_y}{SD_x}\,(X_i - \bar{X}) + \bar{Y}$$

where
rxy = the correlation between X and Y
Xi = the intelligence score
X-bar = the mean of X (and Y-bar the mean of Y)
SDx = the standard deviation of X

The final step specifies that a severe discrepancy between aptitude (X) and achievement (Y) exists when

$$\hat{Y} - Y_i \;\geq\; SD_y\, z_a \sqrt{1 - r_{xy}^2} \;-\; 1.65\, SE_{\hat{Y} - Y_i}$$

where

$$SE_{\hat{Y} - Y_i} = SD_y \sqrt{1 - r_{xy}^2}\,\sqrt{1 - r_{\hat{Y} - Y_i}}
\qquad \text{and} \qquad
r_{\hat{Y} - Y_i} = \frac{r_{xy}^2\, r_{xx} + r_{yy} - 2 r_{xy}^2}{1 - r_{xy}^2}$$

and
Yi = the child's achievement score
Xi = the child's intelligence score
Y-hat = the mean achievement score for all children with IQ = Xi
SDy = the standard deviation of Y
za = the point on the normal curve corresponding to the relative frequency needed to denote "severity" (Reynolds recommends a value of 2.0)
rxy2 = the square of the correlation between intelligence and achievement
ryy = the internal consistency reliability of the achievement measure
rxx = the internal consistency reliability of the aptitude measure

This formula compares a child's current level of achievement with the mean level of achievement of all other children with the same IQ and takes into account the unreliability of the difference score.

3. Simple Difference Scores versus Regressed Scores

Empirically, Valus (1986) was not able to substantiate the over- and underestimating phenomena, as a function of IQ, in her study where standard score differences and regressed score differences were compared. A large overlap (86.8%), supported by a chi-square significant at the .001 level, was found between the two procedures when applied to a small sample (n = 68) of students with a mean WISC-R FSIQ of 92.7 who had been placed in LD programs from two midwestern states. She suggests that the differences between the two procedures may be more theoretical than practical. While not addressed by Valus (1986), it is possible that some school districts in her sample may have their own policies and screening techniques that exclude both the high and low IQ students from placement in learning disabilities programs, based on the assumption that these students were not intended to be served under the LD label. Students with IQs in the mid-ranges would be less affected by regression to the mean and might qualify as LD, regardless of the procedure used. It should also be noted that Valus used the Hanna, Dyck and Holen (1979) method for standard score comparisons, which recommends use of the Verbal IQ rather than Full Scale IQ for comparison.
When computing regressed scores, she used the Iowa Regression Tables, based on the Full Scale IQ as a measure of aptitude. Although the two methods yielded concurrent classifications a high percentage of the time, they used different measures of aptitude. The effect of this inconsistency in the use of IQ scores on her results is unclear. Additional data do not support Valus's (1986) conclusion that the difference between the regression analysis and standard-score procedures is primarily theoretical rather than practical. Bennett and Clarizio (1988), using scores of 86 LD referrals with a mean WISC-R FSIQ of 94.9 from primarily white suburban and urban communities, compared four methods for calculating a severe discrepancy; two standard score difference methods (z-score difference and an estimated true score difference) and two 14 regression methods ( unadjusted regressed difference and adjusted regressed difference). When compared to the standard score difference methods (z-score difference and estimated true score difference), the unadjusted regressed difference was in agreement only 28.8% and 10% of the cases, respectively. Greater agreement was observed between the adjusted regressed difference method and the z-score difference and estimated true score difference, but only if the tests involved were of high reliabilities. The results also showed that the unadjusted regression procedure selects the smallest percentage of students. These researchers concluded that regression procedures cannot be used interchangeably with standard score comparison methods in the determination of a severe discrepancy. Clarizio and Phillips (1988) compared two methods, a z- score discrepancy and a regression procedure, using two different cutoff procedures. Scores were collected from 236 predominantly white LD referrals with a mean WISC-R IQ of 96.4 from suburban and rural school districts. When the cutoff score was held constant, the standard score difference method identified 50% of the referred group as LD, but the regression method identified only 28%. Therefore, the regression formula markedly decreased the number of referred students identified as LD when the significance level remained the same. The two methods did not identify children who were significantly different from one another with regard to measured intelligence. In a 15 second comparison, the percentage identified as LD was held constant at varying percentages (10%, 25%, and 54%). When the bottom 10% and 25% of those referred were identified as LD, the standard score and regression methods did not differ much with respect to agreement with the interdisciplinary evaluation team decisions and with each other. At a 54% percent cutoff, the standard score difference and the regression methods continued to agree highly with each other (87%) but not with the team decisions (65% and 68% respectively). Based on their sample, these researchers concluded that school districts interested in decreasing the number of students identified as LD could do so either by changing to a regression method or, simply, by adjusting (increasing) the cutoff score for the standard score difference. MacMann, Barnett, Lombard, Belton-Kocher and Sharpe (1989), in their sample of 373 rural students referred for LD evaluation (mean WISC-R IQ = 96.8), found that the degree of inconsistency in classification across methods (standard score comparison and regression prediction) was not as pronounced as they had anticipated. 
Indeed, the proportions of severe underachievers identified by the two methods were equivalent for five of seven across-method comparisons, making Valus' (1986) suggestion that the "differences between the two procedures are more theoretical than practical" (p. 204) seem reasonable. Only when judged against a stringent kappa statistic (a coefficient of 16 agreement for nominal scales) of >.90, were a sufficient proportion of students inconsistently classified by the two different methods of discrepancy score calculation. More importantly, these researchers found that "the degree of variation attributed to the two different methods of discrepancy score calculation was trivial in comparison to the extreme levels of classification inconsistency introduced by test selection" (p. 139). In sum, there appears to be no clear answer regarding the "best" method of discrepancy score calculation, based on these studies. A clearer picture begins to emerge as researchers have studied more diverse populations with regard to IQ and race, when comparing standard score differences and regressed score differences in the determination of a severe discrepancy. A review of these studies follows. B. Race and a Severe Discrepancy Braden (1987) used a hypothetical sample (based on the standardization sample of the WISC-R) to illustrate that a simple difference score model will have a differential impact on black and white students, owing to the correlation between simple standard score difference discrepancies and IQ and the lower mean IQs of blacks on measures of intelligence. Jensen and Reynolds (1982) identified the white students' IQ distribution on the WISC-R as having a mean of 102.25 and a standard deviation of 14.08, and 17 distinct from the black students' IQ distribution with a mean of 86.42 and a standard deviation of 12.75. Results of the application of a simple difference score method and a regression method on Braden's hypothetical sample of nearly 20,000 students showed that the odds of being identified LD change drastically across intelligence intervals for the simple difference method, but are constant for the regression method. For example, the probability of students with an IQ of 125 demonstrating a severe discrepancy using a simple standard score difference of 15 points was .3372 in contrast to a probability of .0188 for students with an IQ of 75. Using regression, the probability remained at .1056 across intelligence intervals. When the probability of meeting the severe discrepancy criterion varies across IQ levels, as with the simple difference method, the effect will be disproportionate racial representation in groups meeting the severe discrepancy criterion. While the results of this study provide insight into the problems associated with racially diverse populations and discrepancy formulas, empirical studies are needed to support conclusions drawn from this hypothetical sample. Braden and Weiss (1988) supported these earlier conclusions empirically using 2,263 students from a countywide school district in north-central Florida. Group IQ and achievement scores were collected from second and fifth graders, of which 1343 were white, 817 were black, and 53 were of other races. The mean IQ of the black students, 18 as measured by the Otis-Lennon School Ability Test, was more than one standard deviation below that of whites, with a black mean IQ calculated at 90.89 and a white mean IQ at 106.97. 
Similar achievement differences (approximately one SD) were observed for blacks and whites in reading and math at both grade levels. Severe discrepancies between aptitude and achievement were calculated using the simple difference method and a regression formula in both subject areas and at both grade levels, allowing for four comparisons. When the simple difference model was applied, minorities were proportionate to overall sample parameters in only one of four cases. When the regression model was applied, minority representation was proportionate in three of the four cases. The empirical outcomes suggest use of simple discrepancy criteria may raise ethical and legal questions, while the use of regression provides more equitable treatment in the determination of learning disabilities. Braden and Weiss (1988) defend their use of group tests over individually administered tests, which are commonly use to qualify students for special education, by stating that results will be similar. They do not, however, give much support for their statement. Reynolds (1990) states that for diagnostic purposes, individually administered tests should be used, particularly with young children. He argues that "for all children, but especially for handicapped children, too many uncontrolled and unnoticed factors can affect test performance in an adverse manner" (p. 586). A 19 test administrator is more likely to detect these factors during individual assessment. McLeskey, Waldron and Wornhoff (1990) improved on the previous research by using individual test scores to examine the application of a simple difference and a regression method for determining a severe discrepancy and the impact of the use of an IQ cutoff with black and white students referred for possible learning disability services. Using a sample of 218 white students (WISC-R FSIQ = 96.3) and 132 black students (WISC-R FSIQ = 88.5) in the state of Indiana, McLeskey and his colleagues compared the two methods, based on scores from the Wechsler Intelligence Scale for Children - Revised (WISC-R), the Wide Range Achievement Test (WRAT) and the Peabody Individual Achievement Test (PIAT). In their sample, 42% of the black students who met the severe discrepancy criterion in reading using a regression method failed to meet the criterion using a standard score procedure. A similar 42% of the black students who met the discrepancy criterion in mathematics using a regression method failed to meet the criterion when the standard score procedure was applied. Finally, the use of a regression procedure resulted in a proportionally balanced representation of black and white students, in contrast to a standard score procedure which resulted in identification of a significantly greater proportion of white students than black students with learning disabilities. Thus, these research findings were consistent with those of Braden and 20 Weiss (1988) in demonstrating that use of a regression method to determine a severe discrepancy provides all students more equitable access to special education services. In addition, McLesky et al. (1990) noted that use of an IQ cutoff score at 85, when 41% of his black students had FSIQs below this level in contrast to only 16% of the white students, adds a another source of racial bias when used in combination with a simple difference score method for determining LD. McLesky et al. 
(1990) used a combination of age and grade-based norms in the measures of achievement, which is not recommended when making ability-achievement comparisons (Reynolds, 1990). Only age-based achievement standard scores should be used, as they are being compared to IQ scores which are age-based. In addition, P.L. 94-142 specifically notes that a child's achievement should not be commensurate with his or her age and ability to meet the severe discrepancy criterion. These researchers also justified mixing scores from the WRAT and PIAT in their analyses because the correlations between these tests are moderate to high, despite previous research that found extreme levels of classification inconsistency introduced by test selection (MacMann et al., 1989; Clarizio and Bennett, 1987; Macmann and Barnett, 1985). A recent study by Evans (1992) provides one more link in a chain of evidence that finds simple-difference scores to be discriminatory to black children because of its 21 inequitable treatment across IQ levels. Using achievement tests from the Woodcock-Johnson Psycho-Educational Battery - Revised (WJ-R) and the Wechsler Intelligence Scale for Children - Revised (WISC-R), Evans compared the simple difference and regression methods on scores from 194 referred students, 60% white and 40% black, from one school 'district in central Arkansas. The two models identified similar proportions of white students. However, the simple difference model identified significantly fewer black students. The difference between mean FSIQs of 91.5 and 84.5 for whites and blacks, respectively, in conjunction with the relationship between small simple difference discrepancies and low IQ, would account for the smaller proportion of blacks identified by the simple difference model. In addition, Evans found that grade and time of evaluation (initial/re-evaluation) produce subgroups that differ with respect to mean FSIQ (older students and re- evaluations have lower FSIQs) and, consequently, were subject to the same bias experienced by black students when the simple-difference method of determining a severe discrepancy is used. With regard to identification rate, the regression model identified a slightly higher percentage of referred students than the simple difference model in Evan's sample. All the studies examining race, in addition to several other factors such as IQ cutoffs, grade level, and time of evaluation, suggest that severe discrepancy models are not 22 interchangeable, as evidenced by the proportions of subgroups identified. Districts moving from a simple- difference method to a regression method for determining a severe discrepancy could see different identification patterns, and possibly rates, depending on student characteristics within the districts. The current research will attempt to add empirical data from another geographical location, using quality input data. C. IEPC Decisions and a Severe Discrepancy The reader is reminded that a severe discrepancy does not constitute the diagnosis of LD. It only establishes that the primary symptom exists. In the final analysis, professional judgement plays an important role as the Individualized Educational Planning Committee (IEPC) integrates all the diagnostic information. For this reason, predictions regarding identification rates and characteristics, based on studies of different formulas for calculating a severe discrepancy, might fail to accurately identify those students who ultimately are labeled as LD. 
The final area for review, therefore, will deal with studies that have looked at the match between team decisions regarding classification and the existence of a severe discrepancy using the different methods for determination. Ysseldyke, Thurlow, Graden, Wesson, Algozzine and Deno (1983), in their generalizations from five years of research on the assessment and decision making process with students 23 considered learning disabled, state that "placement decisions made by teams of individuals have very little to do with the data collected on students" (p. 78). Rather, they found that sex, socioeconomic status, physical appearance and reason for referral were factors that influenced the decisions made by school personnel, as well as the availability of services and the power that a student's parents hold in the school system. The team decision making process is set in motion by a teacher's initial decision to refer a student, and teams serve primarily to confirm the existence of problems first observed by the teachers. In contrast to the conclusions drawn by Ysseldyke, et al. (1983), Huebner (1991) argues that there is an accumulated body of evidence that referral and assessment data typically influence special education decisions. His own series of analogue studies have documented the importance of test data in the decision making process using a variety of samples (teachers and school psychologists) and test data bases. He notes, however, that the influence of test data appears less in studies where the test results are unusually ambiguous or borderline. The presentation of ambiguous test data may be one explanation for the discrepancy in research findings from Huebner's studies and those conducted earlier by Ysseldyke and his colleagues. The form in which test scores are reported (percentiles, grade-equivalents, deviation IQs) also appears to have an 24 impact on the decision-making process with both teachers and school psychologists. The research of Ysseldyke, et al. (1983) was completed prior to publication of Critical Measurement Issues in Learning Disabilities (Reynolds, 1985), prepared by a special working group funded by the Special Education Programs branch of the federal government. This publication clearly delineated the most appropriate norm-referenced scores (i.e., standard scores) and the best methods for quantifying a severe discrepancy. Given additional guidance, have decision making teams relied more on test data, and more specifically severe discrepancies, when determining a student to be LD than has previously been suggested? Valus (1986) questioned how many students placed in LD programs actually showed a severe discrepancy. She found that, even though staffing teams acknowledged the importance of identifying a severe discrepancy in her survey, no such discrepancy was evident in one third of her sample of LD placements, regardless of the method used. She concluded that slow learners may have been overrepresented among the students who did not demonstrate a severe discrepancy, and that staffing teams need guidance in determining whether or not slow-learning students are also learning disabled. Furlong (1988), examining the implementation of the simple difference score model in California, also found that although students with lower ability test scores were less 25 likely to meet the legal discrepancy criterion than were students with higher ability test scores, they were actually more likely to receive a positive placement decision. 
This situation was also true for minority students (primarily Mexican-American) as well as students who were being re- evaluated (in contrast to initial referrals), and is probably accounted for by IQ differences. In his sample of 393 students referred for evaluation, 43 percent of those students placed in special education resource rooms or special day classes as learning disabled did not meet the state’s severe discrepancy requirement, based on simple difference scores. Subsequently, Furlong and Feldman (1992) studied a subgroup of the 393 students reported on by Furlong (1988) to evaluate whether regression to the mean could "explain" inconsistent placement decisions. Their sample consisted of the 153 students who received inconsistent placements, including (1) those meeting the California severe discrepancy criterion, but found ineligible and (2) those failing to meet the California severe discrepancy criterion, but placed in resource rooms or special day classes as learning disabled. One third of these students changed discrepancy status in the correct direction when a regression formula was applied. Thus, regression can explain some of the inconsistent placement decisions. The greatest number of corrections by regression were noted on the group of students placed in the 26 more restrictive special day classes. Nearly one-half of the those students changed discrepancy status. This outcome is not surprising, given the group's low WISC-R Verbal IQ of 76.1 and Performance IQ of 82.5. Regression corrected one- third of placement decisions in resource rooms and only one- fourth of the ineligibility decisions. A bias observed in this study toward not placing higher IQ students, even those who obtain scores between 100 to 110, is not as easily explained. In addition, an effect for age indicates that younger children are not placed as frequently as older students with similar profiles, despite meeting the severe discrepancy criterion. More research is needed to determine why younger children might be treated differently. These researchers point out that no significant differences in the proportion of white and minority students changing status after regression was applied indicates that the study's results are not an artifact of race. How the larger sample would have changed, not just the subgroup of inconsistent placements, after regression was applied is unknown, but would be of interest. Using the team decision for eligibility as the LD criterion, Clarizio and Phillips (1989) found that a misclassification rate of approximately 35% across both simple difference and regression methods. The number of false positives (those found eligible without a severe discrepancy) was only 4% with the regression method, compared to 16% for the standard score difference method. 27 Interestingly, the same classification results were achieved when low achievement in reading was used in place of the severe discrepancy requirement for LD eligibility, suggesting that achievement level may have as much influence as ability-achievement discrepancies in team decision- making. McLeskey (1992) provided descriptive information about 790 students found to be LD, grades K-12, in Indiana. Like Valus (1986), he found a severe discrepancy existed for approximately two-thirds (67%) of his total LD sample when a regression formula, as directed by state guidelines, was used. The percentage of students with severe discrepancies decreased significantly from the primary grades through the secondary level. 
McLeskey noted that 58% of the students with learning disabilities were retained prior to being identified, a rate more than twice as high as retention rates for nondisabled students in Indiana. This result, further supported by interview data, suggests that retention is being used in Indiana as a remedial measure before labeling a student with a learning disability. In another publication by McLesky and Grizzle (1992), using the same Indiana sample of learning disabled students, they compared LD students who had been retained (LDR) with those who had not been retained (LDNR) and found no significant difference with regard to the presence of a severe discrepancy; 67% of the LDNR group and 71% of the LDR group had severe discrepancies. In this study, a 28 history of retention does not appear to be a significant factor in justifying an LD label without the presence of a severe discrepancy, as an equal number of students who had not been retained also failed to show a severe discrepancy. Finally, Clarizio and Phillips (1986) investigated sex bias in the diagnosis of learning disabled students by examining the discrepancies between ability and achievement in males and females from two groups, one consisting of referred but not eligible (NE) students and the other consisting of students who had been diagnosed learning disabled (LD) in Michigan. Full Scale IQs from the WISC-R and the standard scores in reading from the Wide Range Achievement Test (WRAT) were used to calculated discrepancies, based on the assumption that reading is the most common type of LD problem referred in the public schools. Although boys outnumbered girls by more than a 3.5 to 1 ratio in receiving a diagnosis of LD, analyses of the discrepancies for male and female subjects failed to indicate any evidence of sexual bias in diagnostic and placement procedures. In addition, these researchers found that approximately one-half of the students labeled as LD did not show a reliable discrepancy between expected and actual achievement, as defined by .66 standard deviations between the two scores. Approximately 40 percent of the students found not eligible did have reliable discrepancy scores . 29 Although the rate of agreement between the presence of a discrepancy and diagnosis of LD is poorer in the study by Clarizio and Phillips (1986) than in more recent studies, several procedural differences may help to explain the differences. LD consideration was restricted to the area of reading, as these researchers did not look at discrepancies in other areas where a student might have qualified, such as math or written language. In addition, selection of .66 standard deviations between the two scores to indicate a reliable, or statistically significant, discrepancy is quite different from the selection of 1.5 to 2.0 SDs to indicate a severe, or educationally significant, discrepancy in the more recent studies. Also, a simple difference score method, rather than a regression method, was used to identify a discrepancy. Possibly, more recent studies are showing better agreement between the presence of a severe discrepancy and a diagnosis of LD because some progress has been made in the operationalization of this criterion. Another look at the match between a severe discrepancy and an LD diagnosis by the IEPC with sex as a factor might be warranted, as operational definitions have become more standardized. 
No study has considered race, specifically black students, when comparing eligibility decisions and the presence of a severe discrepancy using a regression formula. There are, however, circumstances that raise questions regarding the race factor in LD eligibility decisions. 3O Tucker (1980) asserts that, as schools have been pressured to stop classifying minority children as mentally retarded, black students have been increasing placed in LD classrooms so by 1974, they were overrepresented among the learning disabled. In his words: when it was no longer socially desirable to place black students in EMR classes, it became convenient to place them in the newly provided LD category. It took a year to make the changeover, but the resultant proportional differences are maintained (p. 104). If LD classrooms have become an answer for low performing black students, as Tucker suggests, is this happening without requiring they meet LD criteria, specifically, a severe discrepancy between ability and achievement? Contrary to Tucker, Chin and Hughes (1987) concluded that the increase of black students in LD classrooms has not resulted in disproportionate representation after analyzing placement data from the Office of Civil Rights from 1978 to 1984. However, a more recent demographic profile of secondary school-age students (ages 13-21) with disabilities, based on a nationally representative sample and the work of the National Longitudinal Transition Study of Special Education Students (NLTS) in 1987, is presented in the Fourteenth Annual Report to Congress on the Implementation of the Individuals with Disabilities 31 Education Act (U.S. Department of Education, 1992) and disagrees with Chin and Hughes (1987). Findings from the NLTS study indicated that youth with disabilities are twice as likely to be black and only slightly less likely to be white than the total population of youth. Black youth are more highly represented in every disability category. With regard to specific disabilities, the racial characteristics of secondary school youth with learning disabilities included 67.2% white students and 21.6% black students; for mental retardation, the proportions were more pronounced with 61.0% white students and 31.0% black students. In contrast, secondary age youth in general were 70% white and 12% black, according to 1987 figures. Thus, it appears from these reported findings that race may continue to be a biasing factor in special education placement. Several reasons for the disproportionate numbers are offered in this Fourteenth Annual Report to Congress (1992). The use of standardized assessment instruments which are racially biased may, at least in part, be responsible. The likelihood of minority children also being poor and more likely to have experienced poor health care and nutrition seems logical, too, and suggests the disabilities truly exist. For our discussion, however, the contention that school professionals are more likely to refer and place minority and poor children in special education because of lower expectations regarding the educability of these children is most germane. Are IEPCs influenced by decreased 32 expectations for black children so that meeting the severe discrepancy criteria may not play as prominent a role when eligibility decisions for learning disabilities are made? 
In sum, several recent studies comparing a severe discrepancy to the IEPC’s decision for eligibility indicate that approximately one-third of the students found eligible using a regression formula do not demonstrate a severe discrepancy between ability and achievement. With standard difference score methods, the level reached even higher, with reports of 40 to 50 percent. This condition appears to exist despite the concept of severe discrepancy being fundamental to the guidelines set forth by the federal government for identifying LD students. In addition, low achievement, by itself, appears to be an influential factor in finding a student learning disabled, which would make students classified as learning disabled not clearly different from other students who are failing in school. The sex of the student has not been shown to influence placement without the presence of a severe discrepancy, although the data is limited to a single study. Of those students found eligible as learning disabled, elementary students are more likely than secondary students to demonstrate a severe discrepancy, according to one study. D. Summary and Implications for Current Research While the concept of a severe discrepancy between ability and achievement as a fundamental characteristic of 33 learning disabilities has not received universal support, states currently appear to be in agreement on the importance of the discrepancy component for identifying LD students. The U.S. Department of Education attempted to provide guidance to state and local districts by proposing various formulas, using standard scores, to calculate a discrepancy. Two of the more frequently used methods include the simple difference score model and the regression discrepancy model. The regression model takes into account the regression between ability and achievement that results from less than perfectly correlated measures and is recommended over the simple difference score method for this reason. Previous studies comparing the two models have found significant differences, particularly when the populations studied have included minority students. In general, these studies have shown that the simple-difference score method, by favoring the higher IQ students, identifies a disproportionate number of white students over black students. When regression analysis is used, proportionate numbers of black and white students are found to meet the severe discrepancy criterion. Identification rates vary from sample to sample, with some researchers noting an increase in students showing a severe discrepancy when regression is used, while others show a decrease. Although these findings would appear to suggest significant changes in the characteristics and identification rates of students found to be LD if districts 34 were to change from a simple difference to a regression method for determining a severe discrepancy, the impact may not be as predictable as assumed. Because a severe discrepancy is a required, but not exclusive, criterion for eligibility as learning disabled, as well as the fact that studies have shown some disregard for the discrepancy requirement, outcomes as to who is found LD may not be easily predicted. Current studies, in fact, show that almost one-third of those students found to be LD do not show a severe discrepancy, using a regression formula, and that this is more likely to happen at the secondary level than in the elementary grades. 
The proposed study will attempt to replicate findings from the Evans (1992) study, using students from another area of the country, to show that use of a regression method over a simple difference score method to determine a severe discrepancy provides all students, black or white, with a more equitable opportunity to be considered for special education services. By consistently using individually administered intelligence and achievement scores and age-based normative data, this research will improve methodologically on previous research. In addition, it will look at the impact such a change in methods, as well as a change in cut-off values, will have on identification rates in one intermediate school district in Michigan. Finally, it will extend previous research by looking more closely at those students exhibiting a severe discrepancy compared with those found to be learning disabled by the IEPCs. Are there students who show a severe discrepancy but are not found eligible for special education as learning disabled, as well as students who are found eligible even though a severe discrepancy cannot be documented? What characteristics do these students display with regard to race, gender, grade, ability and achievement? An attempt to understand the characteristics and conditions that influence decision makers in ultimately finding a student LD will be the goal of these final analyses.

III. METHODOLOGY

A. Subjects

The subjects came from six urban, suburban, and rural school districts served by an intermediate school district in Michigan. Data were collected on all students referred for a psychoeducational evaluation during the 1990-91 school year due to learning problems. Those students who were referred primarily for emotional difficulties were not included in the study unless referral information also addressed a concern for learning problems. Students who were found to be educable mentally impaired (EMI), hearing impaired (HI), visually impaired (VI), or physically and otherwise health impaired (POHI) were also excluded from the study. If students fell below a Full Scale IQ of 70, but failed to qualify as mentally impaired based on other criteria, they were included in the sample. All students studied had been administered the Wechsler Intelligence Scale for Children-Revised (WISC-R) as a measure of intellectual functioning and achievement tests from the Woodcock-Johnson Psycho-Educational Battery-Revised (WJ-R) as measures of achievement in the academic areas.

A total of 344 students, kindergarten through twelfth grade, were included in the study. Heaviest referral rates were at the first and second grade level, with 21.5% and 24.1% of the sample coming from these two grades, respectively. Of the 344 students, 227 (66.0%) were white, 101 (29.4%) were black, and 16 (4.6%) made up an "other" category, which combined Native American and Hispanic students. Approximately one-half of the students were enrolled in urban schools (49.4%), with the remainder (50.6%) attending suburban or rural schools. Males (70.3%) outnumbered females (29.7%) more than two to one. Overall cognitive functioning was in the average range, as indicated by a mean IQ of 94.1 on the individually administered intelligence test. Two-thirds (66.8%) of the sample had been retained at least one year prior to referral. Based on decisions of the Individualized Educational Planning Committees (IEPCs), 201 (58.4%) of the students studied were found eligible for special education services as learning disabled.
Table 1 gives more detailed descriptive statistics for the sample. A total of 89 students were dropped from the pool of referred students due to incomplete data sets. The most common reason for incomplete data appeared to result from a procedure used in one district whereby students were screened out of the process if preliminary testing by the school psychologist suggested no evidence of a learning disability. This factor accounted for 58 of the students dropped from the pool.

TABLE 1
Sample Description (N = 344)

Educational Setting: Urban 170 (49.4%); Nonurban 174 (50.6%)
Race: White 227 (66.0%); Black 101 (29.4%); Other 16 (4.6%)
Sex: Male 242 (70.3%); Female 102 (29.7%)
Grade Placement: K 8; 1st 74; 2nd 83; 3rd 57; 4th 34; 5th 30; 6th 21; 7th 13; 8th 4; 9th 8; 10th 6; 11th 5; 12th 1
IQ (mean = 94.14, S.D. = 11.55): 60-69 3; 70-79 24; 80-89 112; 90-99 93; 100-109 79; 110-119 27; 120-129 5; 130-139 1
Retentions: none 114 (33.2%); one 166 (48.4%); two 60 (17.5%); three 3 (0.9%)
Eligibility: eligible 201 (58.4%); ineligible 143 (41.6%)

B. Measures

1. Wechsler Intelligence Scale for Children-Revised
The Wechsler Intelligence Scale for Children-Revised (WISC-R) was published in 1972 and is an individually administered intelligence test for children between the ages of 6 and 16. The WISC-R provides IQs for the Verbal, Performance, and Full Scales with a mean of 100 and a standard deviation of 15. The internal consistency reliabilities of the Verbal, Performance, and Full Scales are excellent (averages of .94, .90, and .96, respectively).

2. Woodcock-Johnson Psycho-Educational Battery-Revised
The Woodcock-Johnson Psycho-Educational Battery-Revised (WJ-R) was published in 1989 and is a set of individually administered tests for measuring cognitive abilities, scholastic aptitudes and achievement. Only the WJ-R Achievement Tests (WJ-R ACH) were used in this study. Norms include individuals from ages 2 to 90+. Nine tests are provided in a Standard Battery and nine additional tests make up the Supplemental Battery of the WJ-R ACH. The internal consistency reliabilities are generally in the high .80s and low .90s for the individual tests and in the mid .90s for test clusters. A .92 internal consistency reliability coefficient was calculated, based on the individual achievement tests from the standard battery, and represents both the median and the mean for the WJ-R ACH.

3. Correlation Between Measures
Intercorrelations between the WISC-R and the WJ-R ACH for the regression model were restricted to the average Full Scale IQ - achievement correlation of .6. This decision was based on information from a number of sources. First, the WJ-R Technical Manual (McGrew, Werder, & Woodcock, 1991) provides correlational information between the WISC-R and the WJ-R reading and mathematics tests from a study of third graders in which a .6 correlation is reported. Second, correlational information from the original WJ (Woodcock, 1978), while also restricted by selected grade levels as well as standard battery tests, is consistent with a .6 correlation. Third, median correlations of .6 in reading and math across achievement tests are reported by Sattler (1988), based on his review of a large number of studies. Finally, Reynolds (1985) identifies .6 as the commonly accepted correlation between ability and achievement in his published work, Critical Measurement Issues in Learning Disabilities.

C. Formulas
1. Simple Difference Model
The simple difference model represents the model currently recommended by the intermediate district under study in their LD guidelines and utilized within the local districts. It can be expressed by the following equation:

Xi - Yi >= 15

where Xi = the child's WISC-R Full Scale IQ score and Yi = the child's WJ-R achievement score.

A second cutoff score, denoting a more severe discrepancy, was substituted in the above equation for additional comparisons:

Xi - Yi >= 22

2. Regression Model
The regression model represents the model being considered by the intermediate school district for future LD eligibility as new guidelines are being developed. Reynolds (1990) offers the following regression equation to determine a student's expected achievement score (Y-hat), based on his or her IQ:

Y-hat = rxy [(X - Xbar) / SDx] SDy + Ybar

where rxy = the correlation between X and Y, X = the child's FSIQ, Xbar = the mean of X, SDx = the standard deviation of X, Ybar = the mean of Y, and SDy = the standard deviation of Y.

The second step in the regression model determines whether the difference between a student's predicted achievement score (Y-hat) and his or her actual achievement score (Y) is severe, as defined by the intermediate school district under study. Currently, the district is interested in changing from a simple difference model to a regression model and increasing the level of severity from a minimum of 15 to a minimum of 22 points. Given this information, the following formulas were used in this study as a second step in the regression method:

Y-hat(i) - Yi >= 15
Y-hat(i) - Yi >= 22

where Y-hat(i) = the child's expected achievement score and Yi = the child's actual achievement score.

This procedure deviates from Reynolds' (1990) recommended formula for determining a severe discrepancy (as described in the Review of the Literature), which uses a number of standard deviations from the mean of the difference score distribution rather than a fixed number of standard score points to define "severe." Reynolds' use of standard deviations allows for consistency across all tests, as the size of the standard deviation will differ depending on the correlation between ability and achievement. In practice, however, school districts are inclined to use a formula based on a fixed number of standard score points because it is more manageable than the complex formula proposed by Reynolds.
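To make the two calculations concrete, the brief sketch below expresses each model in computational form. It is offered only as an illustration by the writer; the Python language, the function names, and the example scores are the writer's choices and are not part of the districts' procedures. The sketch assumes, as in this study, that ability and achievement are both standard scores with a mean of 100 and a standard deviation of 15 and that the ability-achievement correlation is .6.

```python
# Illustrative sketch only: flags a severe ability-achievement discrepancy
# under the two models described above. Assumes both measures are standard
# scores (mean 100, SD 15) and uses the .6 correlation adopted in this study.

MEAN, R_XY = 100.0, 0.6   # because SDx = SDy = 15, the ratio SDy/SDx equals 1

def simple_difference(iq, achievement, cutoff):
    """Severe discrepancy if IQ exceeds achievement by `cutoff` or more points."""
    return (iq - achievement) >= cutoff

def regression_discrepancy(iq, achievement, cutoff):
    """Severe discrepancy if achievement falls `cutoff` or more points below
    the achievement predicted from IQ: Y-hat = r * (X - 100) + 100."""
    predicted = R_XY * (iq - MEAN) + MEAN
    return (predicted - achievement) >= cutoff

# Hypothetical example: FSIQ 80 and an achievement standard score of 66.
print(simple_difference(80, 66, 15))       # False: 80 - 66 = 14, which is < 15
print(regression_discrepancy(80, 66, 15))  # True: predicted 88, and 88 - 66 = 22 >= 15
```

For this hypothetical lower-ability student, the regression model identifies a severe discrepancy that the simple difference model misses, which is precisely the equity issue at the heart of the comparison between the two methods.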
D. Procedures

All evaluations were completed by a multidisciplinary evaluation team that included a state-approved school psychologist and a certified learning disabilities teacher employed by the local districts during the 1990-91 school year. Standard score and regressed standard score differences were calculated in five areas of eligibility (basic reading, reading comprehension, mathematics calculation, mathematics reasoning, and written expression), using a Full Scale IQ from the WISC-R and age-based achievement scores in reading, mathematics and written language from the WJ-R ACH. Although Michigan Special Education Rules also identify oral expression and listening comprehension as two additional areas of eligibility, school districts do not appear to use these categories without a discrepancy in a basic academic skill area. Within this study's sample, only one student was considered LD without a deficit identified in reading, math, or written language. The WISC-R Full Scale IQ was used because it is the ability measure recommended by this intermediate school district in their LD guidelines.

Although the standard battery of tests from the WJ-R ACH is used uniformly throughout the intermediate school district as the measure of achievement in the determination of learning disabilities, additional tests from the supplementary battery are sometimes given, depending on the concerns of the learning disabilities specialist administering the test. The supplementary tests can then be combined with the standard battery tests to give cluster scores in the areas of eligibility. Although uniform test comparisons for all students would be preferable when applying different formulas to the data, this procedure could distort the data upon which the IEPCs made their decisions and, thereby, limit conclusions that might be drawn regarding their intent. To resolve this problem, severe discrepancies were calculated in two different ways. First, discrepancies were calculated using only tests from the standard battery of the WJ-R ACH for all subjects. Achievement tests and corresponding areas of eligibility were as follows:

Achievement Test -> Area of Eligibility
Letter-Word Identification -> Basic Reading Skills
Passage Comprehension -> Reading Comprehension
Calculation -> Math Calculation
Applied Problems -> Math Reasoning
Broad Written Language -> Written Expression

Second, cluster scores that combined achievement tests from the standard and supplementary batteries were used in place of standard battery scores for students who were given the additional tests. From the total sample, 57% of the students received additional testing in basic reading skills, 49% received additional testing in reading comprehension, and 42% received additional testing in basic math skills. In the second analysis, therefore, students' discrepancies were not based consistently on the same achievement tests. Achievement clusters and corresponding areas of eligibility were as follows:

Achievement Cluster -> Area of Eligibility
Basic Reading Skills Cluster (combines Letter-Word Identification and Word Attack) -> Basic Reading Skills
Reading Comprehension Cluster (combines Passage Comprehension and Reading Vocabulary) -> Reading Comprehension
Basic Math Skills Cluster (combines Calculation and Quantitative Concepts) -> Math Calculation

For these students, written expression and math reasoning continued to be judged from the standard battery tests.

E. Research Questions

This study was designed to provide guidance to an intermediate school district considering changes in their operational definition of LD. Specifically, two changes were contemplated in how one defines a severe discrepancy between ability and achievement. One change entailed raising the magnitude of the discrepancy from 15 to 22 standard score points. The second change involved the switch from the use of standard scores to regressed standard scores. The proposed changes were applied to data already collected on students who had been referred for evaluation within school districts served by the ISD. In an effort to predict rates and patterns of LD identification that might result from the policy change under consideration, the following research questions were developed:

1. What effect will changing from the simple difference method to the regression method have on the percentage of students determined to have a severe discrepancy when the cutoff value is held constant at 15 points and at 22 points?
2. What effect will establishing a standard score cutoff at two different levels (15 points, 22 points) have on the percentage of students determined to have a severe discrepancy using each of the methods?

3. What effect will changing the method of identification from simple difference to regression and increasing the cutoff score from 15 to 22 points have on the percentage of students determined to have a severe discrepancy?

4. Will the regression method treat ability groups more equitably than the simple difference score method, as predicted by previous research?

5. Will the regression method treat black students more equitably than the simple difference score method, as predicted by previous research?

One needs to keep in mind that a severe discrepancy is only one component of the LD determination process. Therefore, predictions for rates and patterns cannot be based solely on this criterion. In the final analysis, professional judgment plays an important role as the IEPC integrates all the diagnostic information. Given this fact, additional research questions were developed, intended to examine the relationship between team decisions regarding classification and the existence of a severe discrepancy using the different methods for determination:

6. Are there students who are identified as demonstrating a severe discrepancy between ability and achievement, but are not found eligible by the IEPCs as learning disabled under current guidelines? Under proposed guidelines?

7. Are there students who are found eligible as learning disabled by the IEPCs, even though a severe discrepancy was not demonstrated under current guidelines? Under proposed guidelines?

8. What characteristics do these students who are "misclassified" by the IEPCs under current guidelines display with regard to ability, race, gender, grade and achievement?

F. Data Analysis

Each student's Full Scale IQ and achievement standard scores, using the achievement tests identified in the Procedures section, were compared in five areas of eligibility. A student was identified as demonstrating a severe discrepancy if one or more of the five comparisons were equal to or greater than the cutoff level under each method. The following statistical techniques were used to analyze the data.

1. Pearson Product Moment Correlation Coefficient: The Pearson coefficient was used to determine the relationship between student scores on the standard battery and the supplemental battery in order to make decisions regarding test selection for comparisons.

2. McNemar's Test for Correlated Proportions: McNemar's test for correlated proportions was used to test the significance of increases or decreases in the proportion of students found eligible under the severe discrepancy criterion using the different methods and cut-off values.

3. Kappa: The Kappa statistic, as described by Cohen (1960) for measuring nominal agreement among raters, was used to measure the agreement between pairs of methods and cutoff values for classifying students by severe discrepancy between ability and achievement. It was also used to measure the agreement between methods and the IEPC in determining which students were labeled as learning disabled. The overlap statistic provided descriptive evidence regarding the same comparisons and was calculated by adding the number of students upon which the pair agreed, both positive and negative, and dividing that number by the total number of students evaluated.

4.
Point-Biserial Correlation Coefficient: To correlate the dichotomous variable of severe discrepancy with the continuous measure of a student's WISC-R Full Scale IQ, the point-biserial correlation coefficient (rbp) was used. The rpb indicates if a relationship exists between those found to have a severe discrepancy and their IQ for each of the two methods. 5. Chi-Squared: The chi-squared statistic was used to answer research questions regarding race, grade and gender bias when comparing methods and the IEPC decision. 6. Analysis of Variance: Analysis of variance was used to test for differences in achievement means when students were grouped by eligibility status and the severe discrepancy criterion. 51 7. Two Sample T-Test: The two sample t-test was used to test for significance of differences in mean IQs between black and white students, eligible and ineligible students, and students with or without a severe discrepancy. 8. Descriptive statistics were also used to determine if there appeared to be any relationships between ability, race, grade, gender and achievement and the determination of eligibility as learning disabled. G. Limitations A primary limitation of this study resulted from the use of accessible rather than randomly selected subjects. Of the 21 local districts served by the intermediate school district, six districts that represented a cross-section of geographic, socioeconomic, and ethnic areas within the intermediate school district volunteered to provide data. All complete files from within the six districts made up the final sample. Although the generalization of the results of this study beyond the intermediate school district are questionable, they do add, by replication, another piece of information in an accumulating knowledge base about the rates and patterns of LD identification. A second limitation of this study involves the number of comparisons that are required by Michigan Special Education Rules in finding a student eligible for special education. Statistically, each additional comparison 52 between a student’s ability and achievement scores increases the probability that a positive effect will be found. Nonetheless, given the legal requirements placed on schools to consider multiple areas of eligibility, a sacrifice in statistical precision appears to be unavoidable, and is necessary in this study to predict the impact of policy change for the local districts. In practice, all qualitative and quantitative data regarding a student's performance are to be considered in the decision making process and may lessen the chance of an unnecessary label. Still another limitation involves sources of bias. Students who were referred for evaluation are the subjects under study. Any conclusions regarding bias in the LD identification process or placement decisions made by the IEPCs will not have addressed the fact that bias may have existed in the initial referral process. Whether race, gender, ability, or grade level were factors influencing teachers or administrators in decisions to refer students is unknown. The socioeconomic status of the referred students was a desired but unavailable factor for study. Without controlling for SES, it is difficult to sort out other related factors, such as IQ and race. In the end, SES might be an influential factor in determining not only which students are referred, but also which ones are selected to receive special education services. 
Conclusions reached in this study need to be viewed in light of the emphasis placed by local, intermediate, and state level organizations toward operationalizing the concept of a severe discrepancy, while at the same time cautioning against the rigid application of mathematical formulas involving only standardized test data. In the final analysis, professional judgment plays an important role in integrating all the diagnostic information in a complex decision making process that may also include psychological, political, educational or practical considerations outside the child. The complexity of this decision making process, consequently, limits the confidence with which predictions can be made based solely on the statistical criteria and factors within the child, as used in this study.

IV. RESULTS

Before addressing the research questions put forth in the preceding section, several statistical procedures were completed in order to determine the relationship between students' scores on tests from the standard battery of the WJ-R ACH and their scores on a combination or cluster of tests from the standard battery and supplementary battery. Correlations between the single test scores and corresponding cluster scores for all the students who received additional testing are presented in Table 2. The very high correlations (r = .94, .93 and .87, p < .001) between the two achievement scores (single score and cluster score) in each area would suggest that the additional testing made very little difference in the measurement of student achievement and, consequently, would not change the results in subsequent analyses involving the calculation of a severe discrepancy or decisions based upon it. In addition, the high internal consistency reliability coefficients reported for the standard battery, ranging from .90 to .94, may help to explain the consistency in achievement across batteries. While additional testing might typically increase reliability, in this case, excellent levels of reliability were already reached through the standard battery. The supplementary battery increased reliability to only a small degree, with internal consistency reliability figures reported to range from .94 to .96 for the additional test clusters.

TABLE 2
Pearson Product Moment Correlation (r) between Achievement Tests and Achievement Clusters from the Woodcock-Johnson Psycho-Educational Battery-Revised

n     Achievement Test (one test           Achievement Cluster (adds a second        r
      from Standard Battery)               test from Supplementary Battery)
196   Letter-Word Identification           Basic Reading Skills                      .94*
169   Passage Comprehension                Reading Comprehension                     .93*
144   Calculation                          Basic Math Skills                         .87*
* p < .001

We can predict little difference in the selection of students showing a severe discrepancy, regardless of the level of testing completed, based on these very high correlations between the two achievement scores in each area and the high test reliability coefficients of the standard battery. It is also interesting, however, to observe the results after actually applying the two methods at different cutoff values to the data. A comparison of Table 3 and Table 4 suggests there is little difference between the actual numbers and characteristics of the students demonstrating a severe discrepancy, regardless of the level of achievement testing utilized.
Overall, the total number of students showing a severe discrepancy by either method or cutoff level does not change by more than five students when the two levels of achievement testing are compared (see Tables 3 and 4). While the numbers change only slightly, is this small change also true with regard to individual students? Further analyses, using the overlap statistic (number of decisions in agreement divided by the total number of cases considered), also show very little difference in the determination of a severe discrepancy, regardless of the achievement scores used. Table 5, which reports the extent of agreement when the current guidelines (simple difference method - 15 point cutoff) are used, indicates large overlap statistics, ranging from 91.18 to 96.69 percent, across factors.

TABLE 3
Number of Students Meeting the Severe Discrepancy Criterion Using Only the Standard Battery of the WJ-R by Gender, Race and Total Sample

                  Simple Difference        Regression
Factor    n       15 pts    22 pts         15 pts    22 pts
Male      242     166       118            190       129
Female    102     59        44             72        49
White     227     154       112            167       115
Black     101     60        42             81        53
Other     16      11        8              14        10
Total     344     225       162            262       178

TABLE 4
Number of Students Meeting the Severe Discrepancy Criterion Using the Standard and Supplementary Batteries of the WJ-R by Gender, Race and Total Sample

                  Simple Difference        Regression
Factor    n       15 pts    22 pts         15 pts    22 pts
Male      242     168       123            193       132
Female    102     59        42             74        48
White     227     154       113            170       116
Black     101     62        43             83        54
Other     16      11        9              14        10
Total     344     228       165            267       180

TABLE 5
Agreement Between Levels of Achievement Testing (Standard Battery vs. Standard and Supplementary Batteries) in the Selection of Students by Total Sample, Race, and Gender Using the Simple Difference Method - 15 Pt. Cutoff

          Agree             Disagree    %Overlap    Kappa    Sig
Factor    SD      No SD
Male      163     71        8           96.69       .91      p<.001
Female    55      38        9           91.18       .81      p<.001
White     149     69        11          95.19       .90      p<.001
Black     59      38        4           96.03       .92      p<.001
Total     218     109       17          95.06       .88      p<.001

TABLE 6
Agreement Between Levels of Achievement Testing (Standard Battery vs. Standard and Supplementary Batteries) in the Selection of Students by Total Sample, Race, and Gender Using the Regression Method - 22 Pt. Cutoff

          Agree             Disagree    %Overlap    Kappa    Sig
Factor    SD      No SD
Male      127     108       7           97.11       .94      p<.001
Female    46      51        5           95.10       .90      p<.001
White     111     107       9           96.04       .92      p<.001
Black     52      46        3           97.03       .87      p<.001
Total     173     159       12          96.51       .92      p<.001

Under current guidelines, a total of 17 students, or 4.94 percent of the sample, would be influenced by a decision to use one level of testing over another. Very high kappa statistics, representing the proportions of agreement after chance agreement has been removed from consideration, ranged from .81 to .91 and also suggest that the differences are trivial. Table 6 presents similar data using the proposed guidelines (regression - 22 point cutoff) for comparison. Under the proposed guidelines, only 12 students, or 3.48 percent of the sample, would be influenced by a change in testing levels. Kappa statistics range from .87 to .94. Given these empirical findings, along with the high correlation coefficients identified and the excellent test reliability coefficients of the standard battery, it would seem reasonable to conclude that the additional tests administered to some students did not result in significant changes in the number of students demonstrating a severe discrepancy. Subsequent analyses will, therefore, employ only scores from the standard battery of the WJ-R ACH, which was administered to all students.
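As an aside for readers interested in the mechanics of these agreement indices, the short sketch below shows how the overlap statistic and Cohen's (1960) kappa reported in Tables 5, 6, 8, and 13 can be computed from a two-by-two classification table. The function name and the cell counts in the example are hypothetical, supplied by the writer for illustration; they are not taken from the study data.

```python
# Illustrative sketch of the agreement indices used throughout this study: the
# overlap statistic (percent of cases on which two classifications agree) and
# Cohen's (1960) kappa, which removes chance agreement from consideration.

def overlap_and_kappa(both_sd, a_only, b_only, neither):
    """Cells of a 2x2 table: both classifications say severe discrepancy (SD),
    only classification A says SD, only B says SD, and neither says SD."""
    n = both_sd + a_only + b_only + neither
    observed = (both_sd + neither) / n                  # proportion of agreements
    # Chance agreement computed from the marginal proportions of each classification.
    a_sd, b_sd = both_sd + a_only, both_sd + b_only
    expected = (a_sd * b_sd + (n - a_sd) * (n - b_sd)) / n ** 2
    kappa = (observed - expected) / (1 - expected)
    return 100 * observed, kappa

# Hypothetical counts for a sample of 100 students.
overlap, kappa = overlap_and_kappa(both_sd=60, a_only=5, b_only=8, neither=27)
print(f"overlap = {overlap:.2f}%, kappa = {kappa:.2f}")   # overlap = 87.00%, kappa = 0.71
```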
The significance of these differences to diagnostic personnel, aside from the research concerns being addressed here, is another matter and will be revisited in the discussion section. Having dealt with these preliminary concerns, we can now look at the results of this study that provide answers to the first three research questions regarding identification rates.

A. Identification Rates

The first three research questions addressed the change in identification rates that this intermediate school district might experience if they were to change their operational definition of LD. One change involves the switch from the use of standard scores to regressed standard scores. Table 7 indicates the change in number and percent of students that would meet the severe discrepancy criterion as method and cutoff score change, and the significance of the change using a chi-square test for correlated proportions (McNemar's Test for Large Samples).

TABLE 7
Change in Identification Rates as Method and Cutoff Value Change

Change                          Increase (+) or Decrease (-)
                                n        %        X2       sig
Sim Dif-15 to Regres-15         +37      +16.4    25.81    p=.0000
Sim Dif-22 to Regres-22         +16      + 9.9    6.40     p=.0114
Sim Dif-15 to Sim Dif-22        -63      -28.0    62.88    p=.0000
Regres-15 to Regres-22          -84      -32.0    83.91    p=.0000
Sim Dif-15 to Regres-22         -47      -20.7    46.92    p=.0000
df = 1

Looking first at a change in method, regression significantly increased identification rates at each cutoff value. At the 15-point discrepancy level, 37 more students, representing a significant increase of 16.4 percent (p < .001), demonstrated a severe discrepancy using a regression method over a simple difference score method. At the 22-point discrepancy level, 16 more students, representing a significant increase of 9.9 percent (p < .025), demonstrated a discrepancy using a regression method over a simple difference method.

The second change entails raising the magnitude of the discrepancy from 15 to 22 standard score points. By examining Table 7, we see that a significant decrease in the number of students meeting the severe discrepancy criterion occurs when the cutoff level is raised from 15 to 22 points using either method. While a decrease would logically be expected, the amount of change may be of greater interest. Using the simple difference method, a significant decrease of 63 students, or 28 percent (p < .001), is observed. Using regression, a significant decrease of 84 students, or 32 percent (p < .001), is observed. In this intermediate school district, simply adjusting the cutoff level would decrease the number of students demonstrating a severe discrepancy by over a fourth of the current rate.

What effect, then, will changing the method of identification from simple difference to regression and increasing the magnitude of the discrepancy from 15 to 22 points have on the percentage of students determined to have a severe discrepancy? As indicated in Table 7, the change resulted in a decrease of 47 students, or 20.7 percent (p < .001), who met the severe discrepancy criterion. Therefore, this intermediate school district could deal with the increased identification rates that would result from using the regression method by adjusting the cutoff value to identify only the most severely learning disabled.
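The McNemar test values in Table 7 can be illustrated with the brief sketch below, which uses the large-sample chi-square form of the test without a continuity correction. The discordant-pair counts in the example are hypothetical values chosen by the writer merely to be of the same order as the first comparison in Table 7; they are not the study's actual cross-tabulated data.

```python
# Illustrative sketch of McNemar's test for correlated proportions, in its
# large-sample chi-square form (df = 1, no continuity correction), as used for
# the method and cutoff comparisons summarized in Table 7.

from scipy.stats import chi2

def mcnemar(gained, lost):
    """`gained` = students meeting the criterion only under the second method;
    `lost` = students meeting it only under the first method."""
    statistic = (gained - lost) ** 2 / (gained + lost)
    p_value = chi2.sf(statistic, df=1)
    return statistic, p_value

# Hypothetical discordant counts for a method change (e.g., 45 students gained,
# 8 lost), yielding a net increase of 37 students.
stat, p = mcnemar(gained=45, lost=8)
print(f"X2 = {stat:.2f}, p = {p:.4f}")   # prints: X2 = 25.83, p = 0.0000
```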
In addition to predicting the change in identification rates resulting from a change in policy, it is of interest to know how well the methods for determining a severe discrepancy agree on which students to identify and exclude. One approach is to measure the extent of overlap between the methods. Overlap statistics were calculated for six comparisons by method and cutoff value. Secondly, kappa statistics were calculated. The percent of overlap and the kappa coefficient for each comparison are reported in Table 8. Although one would expect high percentages of agreement and significant kappa values when only the cutoff value was changed, this result was also true when the method for calculating a severe discrepancy was changed. Of particular interest in this study is the comparison between the simple difference method at a 15-point cutoff (current guidelines) and the regression method at a 22-point cutoff value (proposed guidelines). Regardless of the significant decrease in identification rates previously noted with this same comparison, the current guidelines and proposed guidelines tended to include and exclude a high percentage (84.59%) of the same students.

TABLE 8
Agreement Between Methods at Different Cutoff Values in the Selection of Students

Comparison                              %Overlap    Kappa    Sig
Simple Dif-15 and Simple Dif-22         81.69       .64      p<.001
Simple Dif-15 and Regres-15             84.59       .64      p<.001
Simple Dif-15 and Regres-22             84.59       .69      p<.001
Simple Dif-22 and Regres-15             70.93       .44      p<.001
Simple Dif-22 and Regres-22             88.37       .76      p<.001
Regres-15 and Regres-22                 75.58       .50      p<.001

B. Effect of Method on Ability Groups

The fourth research question asked whether ability groups, as determined by IQ scores, would be treated more equitably using a regression method over a simple difference score method. Table 9 presents information regarding the distribution of WISC-R Full Scale IQ scores in the sample. A mean FSIQ score of 94.14 and a standard deviation of 11.55 describe the overall ability of the referral group. Approximately two thirds (67.44%) of the students studied had FSIQs under 100. Scores ranged from 64 to 136.

TABLE 9
WISC-R FSIQ Intervals for the Referral Sample by Frequency and Percent

Interval      n      %
60 - 69       3      .87
70 - 79       24     6.98
80 - 89       112    32.56
90 - 99       93     27.03
100 - 109     79     22.97
110 - 119     27     7.85
120 - 129     5      1.45
130 - 139     1      .29

The calculation of a point-biserial correlation coefficient (rpb) was used to determine if a relationship exists between the dichotomous variable of severe discrepancy (meeting or not meeting the criterion) and the continuous measure of FSIQ. If the odds of meeting the severe discrepancy criterion change across IQ levels for the simple difference method, as theory suggests, then a correlation should be observed between student FSIQs and their severe discrepancy status. Likewise, if the odds of meeting the severe discrepancy criterion remain constant across IQ levels for the regression method, no correlation should be observed when this method is employed. As presented in Table 10, significant (p<.0001) point-biserial correlation coefficients of .33 and .36 were found at the 15-point and 22-point cutoff values, respectively, using the simple difference score method. In contrast, nonsignificant point-biserial correlations of .08 and .10 were found at the 15-point and 22-point cutoff values, respectively, using the regression method.
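For illustration, the point-biserial coefficient is simply a Pearson correlation in which one of the variables is dichotomous. The sketch below shows the calculation for a handful of hypothetical score pairs; the data, like the variable names, are the writer's own and are not drawn from the study sample.

```python
# Illustrative sketch of the point-biserial correlation between severe
# discrepancy status (1 = meets the criterion, 0 = does not) and WISC-R Full
# Scale IQ, the analysis reported in Table 10. All values are hypothetical.

from scipy.stats import pointbiserialr

meets_criterion = [1, 1, 1, 0, 0, 1, 0, 1, 0, 0]
full_scale_iq   = [98, 104, 95, 84, 88, 101, 90, 110, 86, 92]

r_pb, p_value = pointbiserialr(meets_criterion, full_scale_iq)
print(f"r_pb = {r_pb:.2f}, p = {p_value:.4f}")   # roughly r_pb = .85, p = .002 here
```

A positive coefficient, as in this made-up example, would indicate that students meeting the criterion tend to have higher IQs, which is the pattern the study observed for the simple difference method but not for the regression method.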
Further evidence that IQ plays a role in determining a severe discrepancy under the simple difference method comes from comparing mean FSIQs of those students who met the criterion with those students who did not, as reported in Table 11. At both levels, the simple difference method identified a group of students with a significantly higher mean IQ as meeting the severe discrepancy criterion, t(342) = 6.37, p<.001 and t(342) = 7.00, p<.001, using a one-tailed test because we expected one group to be lower. The group of students not qualifying under the simple difference method at either cutoff level had FSIQs approximately eight points lower. On the other hand, when regression was used, there was an insignificant difference of approximately two points between the mean FSIQ of those meeting the severe discrepancy criterion and those not meeting it, t(342) = 1.49, n.s. and t(342) = 1.91, n.s., using a two-tailed test. In sum, the current evidence appears to support previous research that identifies an influence of IQ on simple difference scores in favor of higher ability students.

TABLE 10
Point-Biserial Correlation Between IQ and Severe Discrepancy Criterion by Method and Cutoff Value

Method        Cutoff Value    rpb    t        sig
Simple Dif    15              .33    6.378    p=.0000
Simple Dif    22              .36    7.006    p=.0000
Regression    15              .08    1.481    p=.1395
Regression    22              .10    1.910    p=.0570

TABLE 11
Mean FSIQs for Students With and Without a Severe Discrepancy by Method and Cutoff Value

Method - Cutoff     Mean FSIQ             Mean FSIQ             t       sig
                    with severe dis       without severe dis
Simple Dif - 15     96.87                 88.97                 6.37    p=.0000
Simple Dif - 22     98.47                 90.28                 6.92    p=.0000
Regres - 15         94.65                 92.47                 1.49    p=.1371
Regres - 22         95.28                 92.91                 1.91    p=.0570
df = 342

C. Effect of Method on Race

Knowing that an influence of IQ on simple difference scores in favor of higher ability students is present, and that the mean WISC-R Full Scale IQ for white students in the sample is 96.41 (s.d. = 11.7), which is significantly higher than the mean Full Scale IQ of 89.45 (s.d. = 9.8) for black students, t(226) = 5.217, p<.001, we would expect to see an overrepresentation of white students who meet the severe discrepancy criterion when the simple difference method is used. In contrast, when regression is employed, we would expect representation to be proportional for blacks and whites because there was no evidence of an influence by IQ using the regression method. Surprisingly, no comparison between black and white students by method and cutoff value identified a significant proportion of one race over the other as meeting the severe discrepancy criterion (see Table 12). Chi-squares of 2.193 and 1.688 for the simple difference method and 1.666 and .092 for the regression method did not reach significance, indicating that any differences in representation between black and white groups were the result of chance.

TABLE 12
Number and Percent of Black (n = 101) and White (n = 227) Students Meeting the Severe Discrepancy Criterion by Method and Cutoff Value

Method and          Black          White          X2       sig
Cutoff Value        n (%)          n (%)
Simple Dif-15       60 (59.4)      154 (67.8)     2.193    p=.1386
Simple Dif-22       42 (41.6)      112 (49.3)     1.688    p=.1939
Regres-15           81 (80.2)      167 (73.6)     1.666    p=.1968
Regres-22           53 (52.5)      115 (50.6)     .092     p=.7616
df = 1

In summary, the data analyses show that moving from a simple difference score method to a regression method for determining a severe discrepancy and increasing the cutoff value from 15 to 22 points would result in approximately a 20 percent decrease in the number of students who meet the severe discrepancy criterion within this intermediate school district during a one-year period.
The change to a regression model could also result in a more equitable approach to the provision of LD services by providing students at all IQ levels the same chance of meeting the severe discrepancy criterion and eliminating the influence that was evidenced by a significant correlation between IQ and simple difference scores. Likewise, black and white students would be represented proportionally within groups demonstrating a severe discrepancy and thereby have equal access to special education services under this criterion. It should be noted, however, that race did not show up as a significant factor in this referred sample of students, even when a simple difference method was used and the mean IQs of the racial groups were known to be significantly different.

Is a severe discrepancy between ability and achievement the key defining feature leading to a student being found eligible as LD, or do other factors appear to contribute to the eligibility decision? The second group of research questions considers the relationship between IEPC decisions regarding eligibility and the existence of a severe discrepancy using the different methods for determination.

D. Eligibility and a Severe Discrepancy

It is important to keep in mind that the IEPCs in this study were making decisions based on current LD guidelines, which include the use of simple difference scores and a 15-point discrepancy between ability and achievement in determining a severe discrepancy. Consequently, if a severe discrepancy plays a key role in determining who is LD, we would not expect to see similar agreement when comparing eligibility decisions and a severe discrepancy when we apply the proposed guidelines, which use a regression formula and a more severe 22-point difference. We know from earlier results that application of the current and proposed guidelines would result in disagreement on the severe discrepancy status in 53 cases, or 15% of the students. Interestingly, however, the rate of agreement between the eligibility decision and the presence or absence of a severe discrepancy, as measured by the overlap statistic, was similar regardless of the method or cutoff value, as indicated in Table 13. Using the current guidelines, the IEPC decision was consistent with a decision based solely on the severe discrepancy criterion in 73.83% of the cases. Applying the proposed guidelines, consistency was observed in 74.71 percent of the cases.

TABLE 13
Agreement between IEPC Eligibility Decision and Eligibility Based Only on the Severe Discrepancy Criterion

                                                %Overlap    Kappa    Sig
IEPC Decision compared to Simple Difference-15  73.83       .45      p<.001
IEPC Decision compared to Regression-22         74.71       .49      p<.001

In other words, in approximately three-fourths of the cases, regardless of method or cutoff value, the eligibility decision was consistent with the severe discrepancy status. In approximately one-fourth of the cases, a decision was made to find a student eligible without a severe discrepancy or ineligible with a severe discrepancy. Presentation of further analyses will attempt to explain why this similar level of agreement could occur despite a change in guidelines.

The sixth research question asks if there are students who are identified as demonstrating a severe discrepancy between ability and achievement, but are not found eligible by the IEPCs as learning disabled under current guidelines. A total of 57 students, or 16.57 percent of the referred sample, fell within this category (see Table 14).
When compared to those students who demonstrated a severe discrepancy and were found eligible by the IEPC, the ineligible group had a significantly higher mean Full Scale IQ of 102.30; the eligible group had a mean Full Scale IQ of 95.03, t(223) = 4.239, p<.001 (see Table 15). Consequently, it appears that students with high IQs are less likely to receive an eligibility decision and placement in special education than students with low IQs, despite evidence of a severe discrepancy. Even if a regression model could correct for a method that gives high IQ students additional points toward a severe discrepancy, the IEPCs seem to have already made informal adjustments in the same direction.

TABLE 14
Comparison of Eligibility Status to Severe Discrepancy Criterion under Current and Proposed Methods and Cutoff Values by Frequency and Percent

                                        Eligible            Ineligible
Discrepancy Criterion                   n        %          n        %
(Current Guidelines) Simple Diff - 15
  Severe Discrepancy                    168      48.84      57       16.57
  No Severe Discrepancy                 33       9.59       86       25.00
(Proposed Guidelines) Regression - 22
  Severe Discrepancy                    146      42.44      32       9.30
  No Severe Discrepancy                 55       15.99      111      32.27

TABLE 15
Comparison of Eligibility Status to Severe Discrepancy Criterion under Current and Proposed Methods and Cutoff Value by Mean WISC-R Full Scale IQ

                                        Full Scale IQ
Discrepancy Criterion                   Elig       Inelig      t
(Current Guidelines) Simple Diff - 15
  Severe Discrepancy                    95.03      102.30      4.239***
  No Severe Discrepancy                 85.55      90.28       2.475*
(Proposed Guidelines) Regression - 22
  Severe Discrepancy                    94.08      100.75      2.900**
  No Severe Discrepancy                 91.85      93.43       .873
* p < .01   ** p < .005   *** p < .001

What change is seen when we compare the IEPC decision with those students meeting the severe discrepancy criterion when the proposed guidelines are applied, including the regression method and a higher cutoff value? The group demonstrating a severe discrepancy but found ineligible by the IEPCs shrinks from 57 (16.57%) to 32 (9.30%), as indicated in Table 14. In other words, the proposed guidelines "correct" almost half of the current "misclassifications," if one considers the severe discrepancy criterion to be key to a diagnosis of LD. The students removed from the "misclassified" group would be those with smaller severe discrepancies (15 vs. 22 points). Any change in status caused by a change in method for calculating a severe discrepancy would be small, overshadowed by the requirement for a larger discrepancy. However, the almost 10 percent that remain "misclassified" have large discrepancies (at least 22 points when only 15 are currently required) and still were not found eligible by the IEPC, giving additional support to the conclusion that something more than the discrepancy is heavily weighed by the decision makers. Although the mean FSIQ for those demonstrating a severe discrepancy under the proposed guidelines, but found ineligible, has come down to 100.75, it is still significantly higher, t(176) = 2.90, p<.005, than the mean FSIQ of 94.08 of those with a severe discrepancy who were found eligible by the IEPCs, as noted in Table 15. Federal and state laws dictate that a severe discrepancy between ability and achievement is a required, but not exclusive, factor in the diagnosis of learning disabilities.
Consequently, IEPCs might decide that despite the presence of a severe discrepancy, some students would not require placement in special education programs in order to address their educational needs, or they might decide that an exclusionary factor, such as environmental or emotional issues, is responsible for depressing achievement. However, diagnosing a student LD without evidence of a severe discrepancy would appear to be a departure from legislative intent. The seventh research question examines the occurrence of this situation in the sample of referred students.

E. Eligibility without a Severe Discrepancy

Are there students who are found eligible as learning disabled by the IEPCs, even though a severe discrepancy was not demonstrated under current guidelines? Referring back to Table 14, in 33 cases, representing 9.59 percent of the students, a decision was made to classify the student LD without evidence of a severe discrepancy in any academic area. Again, a comparison of FSIQs between those students found eligible versus those found not eligible indicates a significant difference in mean FSIQs, t(117) = 2.475, p<.01. As a group, the students found ineligible without a severe discrepancy of at least 15 points, using the simple difference method, had a mean FSIQ of 90.28. Those students found eligible without a severe discrepancy under the same guidelines had a mean FSIQ of 85.55, suggesting that a greater need to "bend the rules" and provide educational services through special education placement might be perceived by IEPC members for lower IQ students. Although the simple difference method of calculating a severe discrepancy may make it more difficult for lower IQ students to demonstrate a severe discrepancy, IEPCs appear to make decisions in some cases that counteract these outcomes and are more in line with a regression approach.

How do the numbers change with regard to those students found eligible without a severe discrepancy when we apply the proposed guidelines, as shown in Table 14? We would expect this group to grow simply because we have applied a more severe cutoff value than the IEPCs were using for decision making. This expectation was confirmed. The group of students found eligible without a severe discrepancy increases from 33 (9.59%) to 55 (15.99%). Do these results suggest that IEPCs are inclined toward identifying lower IQ students as LD when making eligibility decisions, finding them eligible more easily than higher IQ students? To answer this question, a point-biserial correlation coefficient was calculated between the IEPC's eligibility decision (eligible and ineligible) and the FSIQ. A negative, nonsignificant rpb (-.07, n.s.) indicates that those students found eligible by the IEPCs had a lower IQ than those found not eligible, but not to a significant degree. It appears that, although the IEPCs made decisions with regard to IQ that counteracted the influence indicated by the simple-difference score method, they did not do so to the extent that a relationship between eligibility and low IQ could be detected.

F. Eligibility and Race

The eighth research question asks what characteristics the students who are "misclassified" by the IEPCs, using the severe discrepancy as the key feature of the eligibility decision and current guidelines, display with regard to ability, race, gender, grade and achievement. Ability has already been addressed in the preceding discussion. What differences are observed with regard to race?
Table 16 identifies the number of black and white students in each category of eligibility. Table 17 looks only at those students who were found to have a severe discrepancy under the current guidelines. If race is not a factor, then we would expect students found ineligible to be represented in the same proportions by race as those found eligible when a severe discrepancy has been observed. The chi-square test shows, however, that a disproportionate number of white students over black students fall in the ineligible category, X2 (1, N=214) = 5.382, p<.025. Of the 57 students with a severe discrepancy but found ineligible under current guidelines, 47 (82.46%) were white and 9 (15.79%) were black.

TABLE 16
Comparison of Eligibility Status and Severe Discrepancy Criterion under Current Guidelines by Race

                  With Severe Discrepancy     Without Severe Discrepancy
Factor    n       Elig      Not Elig          Elig      Not Elig
Black     101     51        9                 14        27
White     227     107       47                18        55

TABLE 17
Frequencies of Students Showing a Severe Discrepancy Under Current Guidelines by Race and Eligibility Decision

ELIGIBILITY     Black    White
Eligible        51       107
Ineligible      9        47
N = 214, df = 1, X2 = 5.322, p = .0211

TABLE 18
Frequencies of Students Not Showing a Severe Discrepancy Under Current Guidelines by Race and Eligibility Decision

ELIGIBILITY     Black    White
Eligible        14       18
Ineligible      27       55
N = 114, df = 1, X2 = 1.171, p = .2792

Table 18 provides data to answer the same question regarding the race of those students who did not demonstrate a severe discrepancy. Does the eligible group differ proportionately by race from the ineligible group? Unlike the previous comparison, proportionate numbers of black and white students made up the category of students who did not show a severe discrepancy but were found eligible under current guidelines, X2 (1, N=114) = 1.171, n.s. In sum, when looking at the "misclassified" students, we see that more white students than black students made up the group that demonstrated a severe discrepancy but was not found to be LD. Equal representation was observed in the group that was found to be LD without a severe discrepancy.

G. Eligibility and Gender

Table 19 identifies the number of boys and girls within each classification by severe discrepancy status and eligibility decision. As previously noted, overall, boys outnumbered girls more than two to one in referrals and eligibility decisions.

TABLE 19
Comparison of Eligibility Status and Severe Discrepancy Criterion under Current Guidelines by Gender

                  With Severe Discrepancy     Without Severe Discrepancy
Factor    n       Elig      Not Elig          Elig      Not Elig
Males     242     119       47                16        60
Females   102     49        10                17        26

Again, using the notion of "misclassification," based on the severe discrepancy criterion and current guidelines, we can compare those students showing a severe discrepancy who were found eligible to those who were found ineligible, expecting proportional representation of boys and girls across groups. The assumption is that decision makers are not influenced by students' gender when deciding whether or not to find them eligible in the presence of a severe discrepancy. This assumption, in fact, was supported in the data analysis by a nonsignificant chi-square: as reported in Table 20, X2 (1, N=225) = 2.972, n.s.
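For illustration, the two-by-two chi-square comparisons reported in Tables 17, 18, 20, and 21 can be computed as shown in the sketch below, which uses the frequencies from Table 17. The library call and its options are simply one way such a test could be carried out, not the procedure actually used in the study; Yates's continuity correction is turned off so the result corresponds to the uncorrected chi-square value cited in the text.

```python
# Illustrative sketch of the chi-square test of independence applied to the
# 2x2 eligibility-by-race table of students showing a severe discrepancy
# (frequencies from Table 17), with the continuity correction disabled.

from scipy.stats import chi2_contingency

table_17 = [[51, 107],   # eligible:   black, white
            [9,  47]]    # ineligible: black, white

statistic, p_value, dof, expected = chi2_contingency(table_17, correction=False)
print(f"X2({dof}) = {statistic:.3f}, p = {p_value:.4f}")   # X2(1) ~ 5.38, p ~ .02
```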
In contrast, an unexpected effect for gender was found when comparing the proportions of boys and girls who did not show a severe discrepancy. Within the group of students who were found eligible for special education services without a severe discrepancy, a disproportionate number were girls, X2 (1, N=119) = 4.681, p<.05 (see Table 21). Although girls made up approximately one-third of the students without a severe discrepancy, an almost equal number of each sex from this group were labeled LD. Consequently, there is some indication that being female may have made a difference when students were labeled LD without evidence of a severe discrepancy.

TABLE 20
Frequencies of Students Showing a Severe Discrepancy Under Current Guidelines by Gender and Eligibility Decision

ELIGIBILITY     Male    Female
Eligible        119     49
Ineligible      47      10
N = 225, df = 1, X2 = 2.972, p = .0847

TABLE 21
Frequencies of Students Not Showing a Severe Discrepancy Under Current Guidelines by Gender and Eligibility Decision

ELIGIBILITY     Male    Female
Eligible        16      17
Ineligible      60      26
N = 119, df = 1, X2 = 4.681, p = .0305

H. Eligibility and Grade Level

Before comparing the eligibility decision with the severe discrepancy criterion by grade, it may be interesting to observe at which grade levels students are most frequently referred and identified as LD. A look at the breakdown by grade in Table 22 indicates that both the highest referral rate and the greatest number of positive eligibility decisions occurred in the second grade, followed by almost equally high rates in the first grade. Referral of 83 second graders resulted in 54 LD decisions, or 27 percent of all students found eligible in the sample. For first graders, referrals totaled 74 students, with 52 LD decisions, representing 26 percent of all those found eligible. Thus, it appears that over 50 percent, or a majority, of the students found eligible as LD by the IEPCs were in the first or second grades, with referral rates declining steadily after that time.

TABLE 22
Students Found Eligible and Ineligible by IEPCs by Grade in Frequencies and Percents

Grade    n     Eligible n (%)    Not Eligible n (%)
K        8     2 (1.00)          6 (4.20)
1        74    52 (25.87)        22 (15.38)
2        83    54 (26.87)        29 (20.28)
3        57    36 (17.91)        21 (14.69)
4        34    17 (8.46)         17 (11.89)
5        30    16 (7.96)         14 (9.79)
6        21    14 (6.97)         7 (4.90)
7        13    4 (1.99)          9 (6.29)
8        4     1 (.50)           3 (2.10)
9        8     2 (1.00)          6 (4.20)
10       6     1 (.50)           5 (3.50)
11       5     2 (1.00)          3 (2.10)
12       1     0 (0.00)          1 (.70)

Table 23 consolidates the grade levels into three categories: early elementary (K-2), later elementary (3-6), and secondary (7-12), and identifies the number of students in each category as to eligibility and discrepancy status. Consolidation was necessary to accommodate small cell sizes when eligibility was compared to the severe discrepancy criterion by grade using the chi-squared statistic. Is there evidence that a student's grade level may play a part in the IEPC decision for eligibility? As indicated in Table 24, a significantly greater proportion of students who demonstrated a severe discrepancy, but were found ineligible, were older students from the late elementary and secondary levels, X2 (2, N=225) = 10.243, p<.01.
This result is particularly interesting, as one might hypothesize just the opposite: that younger students would have access to more remedial programs in the primary grades, which could provide the needed academic support otherwise received in special education. On the other hand, for those students who were found eligible without demonstrating a severe discrepancy, there does not appear to be a significant difference in the proportions represented by early elementary, later elementary, or secondary students, X2 (2, N=119) = 5.264, n.s. (see Table 25).

TABLE 23
Comparison of Eligibility Status and Severe Discrepancy Criterion under Current Guidelines by Grade Level

                      With Severe Discrepancy     Without Severe Discrepancy
Factor       n        Elig      Not Elig          Elig      Not Elig
Early El     165      96        22                12        35
Later El     142      64        26                19        33
Secondary    37       8         9                 2         18

TABLE 24
Frequencies of Students Showing a Severe Discrepancy Under Current Guidelines by Grade and Eligibility Decision

ELIGIBILITY     Early El    Later El    Secondary
Eligible        96          64          8
Ineligible      22          26          9
N = 225, df = 2, X2 = 10.243, p = .0060

TABLE 25
Frequencies of Students Not Showing a Severe Discrepancy Under Current Guidelines by Grade and Eligibility Decision

ELIGIBILITY     Early El    Later El    Secondary
Eligible        12          19          2
Ineligible      35          33          18
N = 119, df = 2, X2 = 5.264, p = .0719

I. Eligibility and Achievement

The last factor to be examined for its influence on IEPC decision-making is achievement. Do achievement levels alone, aside from their role in the calculation of a severe discrepancy, influence IEPC participants to find a student eligible or ineligible as learning disabled? Table 26 displays the WJ-R mean achievement scores in each of the five achievement areas identified in this study for consideration in LD diagnoses. Students are grouped, as before, by eligibility status and the severe discrepancy criterion. Observable differences are noted between the eligible and ineligible students when the severe discrepancy criterion is held constant. While achievement means for eligible students can generally be described as below average, achievement means for ineligible students appear to be primarily (70%) in the average range.

TABLE 26
Comparison of Eligibility Status and Severe Discrepancy Criterion under Current Guidelines by WJ-R Mean Achievement Scores

                      With Severe Discrepancy     Without Severe Discrepancy
Achievement Area      Elig      Not Elig          Elig      Not Elig
Basic Reading         74.89     86.72             84.42     90.76
Reading Comp          78.80     94.04             86.81     93.22
Math Calculation      82.10     92.53             90.36     93.89
Math Reasoning        88.60     98.38             88.55     96.31
Written Language      70.85     82.02             80.27     86.30

Further hypothesis testing was completed to determine if the observable group differences represent real group differences in achievement. Using one-way analysis of variance and post hoc comparisons (Tukey's), the "misclassified" students were again compared against the "correctly" classified students on achievement levels, using the severe discrepancy as key to the diagnosis of LD. One might hypothesize that no true differences between group achievement means would be observed, based on the assumption that achievement levels are not weighed separately from the severe discrepancy criterion by decision makers in the determination of eligibility. Results of the data analyses suggest otherwise. In all five achievement areas, significant F-ratios (p < .0001), ranging from 12.51 to 46.50 (see Tables 27-31), were noted, indicating true differences in achievement between groups.
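For illustration, the one-way analysis of variance underlying Tables 27-31 compares mean achievement across the four eligibility-by-discrepancy groups. The sketch below shows the form of such a test using small, hypothetical score lists supplied by the writer; a post hoc procedure such as Tukey's would follow a significant F-ratio.

```python
# Illustrative sketch of the one-way analysis of variance used to compare mean
# achievement across the four eligibility-by-discrepancy groups (Tables 27-31).
# The four score lists are hypothetical and much smaller than the study groups.

from scipy.stats import f_oneway

eligible_with_sd      = [72, 78, 70, 75, 80, 74]
ineligible_with_sd    = [85, 88, 90, 84, 87, 86]
eligible_without_sd   = [82, 85, 83, 86, 84, 81]
ineligible_without_sd = [89, 92, 90, 93, 91, 88]

f_ratio, p_value = f_oneway(eligible_with_sd, ineligible_with_sd,
                            eligible_without_sd, ineligible_without_sd)
print(f"F = {f_ratio:.2f}, p = {p_value:.4f}")   # well-separated groups give a large F
```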
Post hoc comparisons, using the Tukey method with a significance level of .05, showed that, of the students with a severe discrepancy, those found ineligible had significantly higher achievement scores in all five achievement areas than those found eligible. Consequently, there is some evidence to indicate that higher achievement scores may influence IEPCs to forgo special education services, even though a severe discrepancy between ability and achievement exists.

TABLE 27
Analysis of Variance of WJ-R Reading Recognition Student Achievement Scores by Eligibility Status and Severe Discrepancy Criterion

                With Severe Discrepancy    Without Severe Discrepancy
                   Elig       Not Elig        Elig        Not Elig
  Mean            74.89          86.72       84.42           90.76
  SD              11.73          10.21        6.08           11.01
  n                 168             57          33              86

  df = 3, 343    F = 46.50    P < .0001

TABLE 28
Analysis of Variance of WJ-R Reading Comprehension Student Achievement Scores by Eligibility Status and Severe Discrepancy Criterion

                With Severe Discrepancy    Without Severe Discrepancy
                   Elig       Not Elig        Elig        Not Elig
  Mean            78.80          94.04       86.81           93.22
  SD              13.66          10.73        8.76           10.68
  n                 168             57          33              86

  df = 3, 343

TABLE 29
Analysis of Variance of WJ-R Math Calculation Student Achievement Scores by Eligibility Status and Severe Discrepancy Criterion

                With Severe Discrepancy    Without Severe Discrepancy
                   Elig       Not Elig        Elig        Not Elig
  Mean            82.10          92.53       90.36           93.89
  SD              16.06          13.75       14.89           12.68
  n                 167             55          33              85

  df = 3, 339    F = 15.22    P < .0001

TABLE 30
Analysis of Variance of WJ-R Applied Problems (Math) Student Achievement Scores by Eligibility Status and Severe Discrepancy Criterion

                With Severe Discrepancy    Without Severe Discrepancy
                   Elig       Not Elig        Elig        Not Elig
  Mean            88.60          98.38       88.55           96.31
  SD              12.72          13.42       10.74           13.23
  n                 167             55          33              85

  df = 3, 339    F = 12.51    P < .0001

TABLE 31
Analysis of Variance of WJ-R Broad Written Language Student Achievement Scores by Eligibility Status and Severe Discrepancy Criterion

                With Severe Discrepancy    Without Severe Discrepancy
                   Elig       Not Elig        Elig        Not Elig
  Mean            70.85          82.02       80.27           86.30
  SD              14.20          10.64        7.81            9.66
  n                 167             57          33              86

  df = 3, 342    F = 35.18    P < .0001

Post hoc analyses also showed some differences in achievement levels between students found eligible and those found ineligible when a severe discrepancy was not documented. Significantly lower achievement levels in basic reading, reading comprehension, and math reasoning were found among those who received an LD label than among those who did not, suggesting that low achievement in some areas may have played a role in bending the rules for special education eligibility. At this point, however, it becomes very difficult to determine whether it was low IQ or low achievement that influenced placement, as the two are highly related among those students without a severe discrepancy.

In summary, when comparing the IEPC decisions for eligibility and the severe discrepancy criterion, there is evidence of a high level of agreement not only with the current guidelines, but also with the proposed guidelines, which employ the regression method and a more severe cutoff value. In both cases, the rate of agreement between the severe discrepancy criterion and the IEPC eligibility decision is approximately 75 percent.
This finding suggests that IEPCs are making informal decisions under the current guidelines, probably when considering students of higher and lower intellectual ability, that appear in some cases to produce an outcome similar to that observed when regression is employed. With regard to specific student characteristics, overrepresentation of white students and of students in the later elementary or secondary grades was observed among those who demonstrated a severe discrepancy but were found ineligible. These students also appeared to be more academically able, earning higher achievement scores in all areas of qualification, than those students who were found eligible. Among the students found eligible without a severe discrepancy, a disproportionate number were female. Low achievement scores in basic reading skills, reading comprehension, and math reasoning also may have played a role in the eligibility decision. The following discussion will attempt to explore reasons, draw conclusions, and suggest implications for these findings.

V. DISCUSSION

Although the intent of this study was not to examine the extent to which achievement testing is desirable in LD evaluations, methodological concerns led to some comparisons being made. These comparisons are of interest to diagnosticians, including psychologists, teacher consultants, and LD classroom teachers who regularly administer the Woodcock-Johnson-Revised (WJ-R) Achievement Tests. For school professionals, the amount of time needed to test each student and the adequacy of the achievement information gained are variables always under scrutiny. For this reason, it may be worthwhile to digress briefly from the main focus of the research to comment.

In this sample, less than 5 percent of the students changed status with respect to the severe discrepancy criterion when the current cutoff score and method for determining a severe discrepancy were applied to data containing the supplementary testing. If the proposed guidelines had been in place, the proportion would have shrunk to 3.5 percent. Consequently, for diagnosticians who routinely administer the Supplementary Battery of the WJ-R ACH to all students for fear that, if they do not, their results may be inadequate and/or lead to false labels, there is evidence from this study that such an outcome is unlikely and the additional testing may be unnecessary. Given the very high correlations between the standard and supplementary test scores, and the overlap analysis indicating a very low incidence of change in severe discrepancy status as a result of the additional testing, one might conclude that only the most questionable or borderline cases warrant the administration of supplementary tests. This conclusion appears consistent with information presented in the WJ-R manual, which recommends selective testing based on the information needs of the examiner. Based on the large percentage of LD students given both the Standard and Supplementary Batteries, it appears that the practitioners in the study were not as selective in their use of supplementary tests as the manual recommends.

A. Identification Rates

What information does the current study add to a research base that could guide school districts that are feeling the constraints of limited resources and need to restrict their services to only the most severely learning disabled students?
How will a change in the method for determining a severe discrepancy, and in the level of severity described by a cutoff score, affect their identification rates?

The results of this research indicated an increase in the number of students identified as having a severe discrepancy when the method for determination was changed from a simple difference to a regression formula and the cutoff was held constant. This outcome was true at both levels of severity. When the change also included moving to a more severe cutoff score, as proposed in the intermediate school district studied, the pattern reversed: the number of students identified in the sample then decreased by over 20 percent. Thus, while regression increased numbers, a more severe cutoff offset the increase and actually decreased the total number of students who met the severe discrepancy criterion.

These findings are not consistent with other published work on identification rates. Evans (1992), whose research used the same tests, achievement areas for qualification, formulas, and cutoff levels, reported a 10.7 percent increase in students identified with a severe discrepancy when regression and the more severe cutoff were used. (The change in cutoff was from 15 points to 2 standard deviations, which is equivalent to 22 points, given the tests used.) Why might this discrepancy between studies occur? Possibly it is due to differences in the characteristics of the students referred. Evans's sample included re-evaluations (55%) and a much greater proportion of high school students (40% vs. 4% in this study). He reported a mean FSIQ more than 5 points below that of the current research (88.73 vs. 94.14) and a more restricted range, with no student earning an IQ over 111. His students' average achievement scores in the five academic areas were from 4 to 8 standard score points below those of the students currently studied. Overall, Evans's sample was a less intellectually and academically capable group.

The work of Clarizio and Phillips (1989) also contradicts the current findings, as well as those reported by Evans (1992). They found a substantial decrease of 45 percent in the number of students identified when a regression formula was used instead of the simple difference method and the cutoff was held constant. Different formulas for calculating discrepancies, including adjustments for measurement error, may be one explanation. Another might be their restriction to reading as the only achievement area considered for LD qualification. Again, the differences in the referred populations would seem to be significant. Clarizio and Phillips used a referred sample of predominantly white, suburban and rural students, with a mean FSIQ of 96.4. The extent to which above- and below-average students were included in their sample is unknown. In contrast, the current study included more diverse populations with regard to setting (urban, suburban, and rural school districts) and race. A lower mean IQ for the sample would also be a relevant factor.

Thus, inconsistencies in the literature suggest that school districts would be wise to look at the characteristics of the students referred before attempting to predict what might happen to their identification rates if a change in the method for calculating a severe discrepancy were established. Intelligence factors play a role, as we see in the discussion which follows.
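For readers unfamiliar with the two approaches being compared, the sketch below illustrates, in simplified form, how a simple-difference and a regression-based severe discrepancy are commonly computed. The correlation of .60 is an illustrative value only, and the formulas shown are a common textbook operationalization rather than the exact ones used in the districts studied; some regression implementations additionally express the cutoff in units of the standard error of estimate.

```python
# Illustrative sketch of the two discrepancy models, assuming standard scores
# (mean 100, SD 15) and a hypothetical IQ-achievement correlation of .60.
MEAN, R = 100.0, 0.60

def simple_difference(iq: float, ach: float) -> float:
    """Discrepancy as the raw difference between IQ and achievement scores."""
    return iq - ach

def regression_discrepancy(iq: float, ach: float, r: float = R) -> float:
    """Discrepancy as predicted-minus-obtained achievement, where the
    prediction regresses toward the population mean."""
    predicted = MEAN + r * (iq - MEAN)
    return predicted - ach

# Two students, each scoring 15 standard-score points below their measured IQ:
for iq in (120, 85):
    ach = iq - 15
    print(iq, ach,
          simple_difference(iq, ach),                 # 15 for both students
          round(regression_discrepancy(iq, ach), 1))  # 7.0 vs. 21.0
# Under the simple difference, both students clear a 15-point cutoff, but the
# regression method judges only the lower-IQ student to be far below
# expectation, because a high IQ predicts achievement that has already
# regressed toward the mean.
```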
B. Intelligence Factors

The findings in this investigation add to a growing body of literature that demonstrates the effect of IQ on the determination of a severe discrepancy. As in previous studies by Bradding and Weiss (1988) and Evans (1992), a correlation between IQ and the discrepancy was found, pointing to a greater likelihood that students in the higher IQ ranges, rather than students in the lower IQ ranges, would demonstrate a severe discrepancy when the simple difference method is used. The absence of a correlation between IQ and the discrepancy when discrepancies were calculated by the regression method suggests that regression is a more equitable method for calculating severe discrepancies: no ability group is given an advantage.

Further evidence to support this relationship between IQ and the simple difference score was provided through a comparison of the mean FSIQs of students who met the severe discrepancy criterion and those who did not. Unlike the outcome reported by Clarizio and Phillips (1989), the simple difference method did identify groups that were statistically different from each other with regard to measured intelligence; those with a severe discrepancy had FSIQs almost 8 points higher. Regression, on the other hand, identified groups that displayed no significant IQ difference. The reason for the difference in findings between the two studies is unclear, but it may relate to factors already identified above.

For school districts that are leery of regression formulas, fearing they would open the floodgates for low-ability students into their learning disabilities programs and classrooms, this study suggests otherwise. A lower ability student would have no greater chance of meeting the severe discrepancy requirement than a student of higher ability. Resistance to the use of a regression formula, instead, appears to produce more limited access for low-ability students, at least with regard to meeting the severe discrepancy criterion, and an unfair system of selection maintained by misconception.

There are those, however, who would argue that the label itself is handicapping. What follows for labeled students, they might say, are decreased expectations, placement in special programs that are isolating but not "special," and lowered self-esteem. Therefore, students who escape this fate are really fortunate rather than unfairly treated. While there may be some truth to these concerns, they are separate issues that need to be debated on their own. Additional dollars come with labels. Students who are overlooked in the certification process because of unfair selection practices are denied the financial support due them. Using these dollars in ways most advantageous to students with special needs is another challenge, but one that should not be confused with best practices for identification of the learning disabled.

C. Racial Factors

Another line of inquiry in this research focused on how students of different racial backgrounds would be affected by a change in procedures for identifying a severe discrepancy. Formulas that would clearly result in disproportionate numbers of either black or white students meeting an eligibility criterion would be viewed as unacceptable by school districts with ethnically diverse populations that are concerned with equal access to special education services for all students. Unlike other investigations (Evans, 1992; McLeskey, Waldron & Wornhoff, 1990; Bradding and Weiss, 1988; Furlong, 1988), the present study failed to find any influence of method on race.
No comparison between black and white students, by method or cutoff value, identified a significantly greater proportion of one race than the other as meeting the severe discrepancy requirement. These results are particularly surprising in light of the significantly lower mean FSIQ of the black students in the sample. Although this outcome was not hypothesized for the simple difference method, the results might be explained by an aggregation bias. The present study does not look at the number of severe discrepancies a student might show across subject areas, or at the severity of the discrepancies beyond the cutoff value, but simply at whether he or she qualifies in at least one academic area. A substantial amount of information is aggregated to produce a dichotomous variable of either meeting or not meeting the severe discrepancy criterion in any academic area. If the data were analyzed by the size of the discrepancies or the number of areas in which a student could qualify, evidence of the IQ influence upon race might emerge. This level of analysis, however, would not be as important to school districts that are concerned with examining racial representation in special education programs by comparing those who are labeled with those who are not.

Another possible explanation for why the current study failed to show an expected effect for race may be the size of the correlation between IQ and the dichotomous variable of meeting or not meeting the discrepancy requirement. Using the simple difference method, this investigator found point-biserial correlation coefficients of .33 (15-point cutoff) and .36 (22-point cutoff), which portray a weak relationship between the variables; IQ explained only approximately 11 to 12 percent of the variation in the discrepancy decision. Thus, the correlation may not have been strong enough to pick up group differences in further analyses when a factor secondary to IQ, namely race, was examined.

D. Eligibility Decisions

A second major objective of the study involved comparing the IEPCs' eligibility decisions against the severe discrepancy criterion using current and proposed guidelines. In what percentage of the cases was eligibility consistent with the presence of a severe discrepancy and ineligibility consistent with the absence of one? Under the current guidelines, in 75 percent of the cases the eligibility decision was consistent with the severe discrepancy criterion. This figure is higher than those reported by McLeskey (1992), Clarizio and Phillips (1989), Furlong (1988), and Valus (1986), and it may suggest that decision-making teams are relying more on test data, and specifically on severe discrepancies, than previously thought by Ysseldyke and his colleagues (1983).

A greater reliance on severe discrepancies may exist for a number of reasons. It may be the result of clarification by measurement experts as to the most appropriate scores to be used in the calculation of a severe discrepancy (i.e., standard scores). It may also come as a gatekeeping measure against the growing number of students who are referred for LD consideration. Allowing large numbers of students without a severe discrepancy into LD programs, sometimes as many as 43 percent (Furlong, 1988), could result in uncontrollable growth and undermine a school district's ability to make even the broadest predictions about the amount of services needed. Still another reason for greater reliance on severe discrepancies might be found in the need for consistency.
The concept of learning disabilities has come under fire from those who point out that LD students look no different from other groups, such as slow learners or unmotivated students. Care in meeting the present rules and criteria may not eliminate, but could certainly reduce, the broad mix of students who have filled the ranks of the learning disabled, thereby adding validity and integrity to the diagnosis.

How consistent was the eligibility decision with the severe discrepancy criterion when the proposed guidelines were applied to the data? Reasonably, we might assume that the comparisons made using data calculated under the current guidelines would produce the highest agreement. After all, these were the data available to IEPCs at the time eligibility decisions were made. Changing the guidelines for the purpose of this study meant changing the data, but not the eligibility decision. Thus, we would expect less agreement between the eligibility decision and severe discrepancy status when the proposed guidelines were applied. Interestingly, such assumptions did not prove to be true. Agreement was observed in three-fourths of the cases regardless of the guidelines used. What these results seem to suggest is that other factors are being weighed that tend to produce results in the direction of those produced by the regression method, even though it is not being used in the school districts studied. It should be noted that Clarizio and Phillips (1989) also found similar, although lower, rates of agreement (65%) between a simple difference and a regression method when comparing the severe discrepancy to eligibility. It is not known, however, which method their evaluation teams used to determine a severe discrepancy, or whether it was consistent across all districts in their sample.

When comparing the results using the current guidelines and the proposed guidelines with the eligibility decision, it is interesting to note that, although the agreement rate remained constant at approximately 74 percent, the type of misclassification changed. Under the current guidelines, the greatest number of misclassifications, totaling 16 percent, occurred among those students with a severe discrepancy who were found ineligible (false negatives). When the proposed guidelines were applied, the opposite situation occurred: the largest group of misclassifications, again totaling 16 percent, comprised students without a severe discrepancy who were found eligible (false positives). This situation raises a question regarding the preferred type of error. Is it a more serious mistake to serve children as LD who are not actually LD, or to forgo services when a student may, in fact, be disabled? Compliance with the intent of the federal law, IDEA, would suggest that all handicapped children must be served; thus, guarding against false negatives would be a primary concern. However, given the expanding population of LD students in recent years and the failure of special education to meet our expectations for positive treatment effects, administrators may question the wisdom of a zero-reject approach. Setting a more severe cutoff seems to be their way of saying that they will risk an increasing number of false negatives by serving only the most handicapped students.

In an attempt to understand some of the factors being weighed by the IEPCs when determining eligibility, a closer look was then taken at those students who were found to be ineligible despite evidence of a severe discrepancy.
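As a concrete illustration of the agreement and misclassification rates discussed above, the sketch below recomputes them from the current-guidelines group sizes reported in Tables 27-31. The corresponding counts under the proposed guidelines are not restated in this section, so only the current-guidelines breakdown is reproduced.

```python
# Agreement and misclassification rates from the four classification counts
# under the current guidelines (group ns taken from Tables 27-31).
# "False negative": severe discrepancy present but student found ineligible.
# "False positive": student found eligible without a severe discrepancy.
elig_with_sd,    inelig_with_sd    = 168, 57
elig_without_sd, inelig_without_sd = 33,  86
N = elig_with_sd + inelig_with_sd + elig_without_sd + inelig_without_sd   # 344

agreement       = (elig_with_sd + inelig_without_sd) / N   # about 0.74
false_negatives = inelig_with_sd / N                        # about 0.166
false_positives = elig_without_sd / N                       # about 0.096

print(f"agreement = {agreement:.1%}, "
      f"false negatives = {false_negatives:.1%}, "
      f"false positives = {false_positives:.1%}")
```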
1. Ineligible Students

Using current guidelines, the IEPCs found 16.6 percent of the sample ineligible, although a severe discrepancy was observed in at least one subject area. This finding is only slightly below the percentage of false negatives (20%) reported by Clarizio and Phillips (1989). There are a number of reasons why the IEPC might have reached such a decision. One reason might be found in federal and state law, which specifically directs the multidisciplinary evaluation team to ascertain whether services in special education are required to address the needs associated with a student's identified severe discrepancy. Some multidisciplinary team members from school districts participating in the study stated that they have interpreted this directive to mean that students with higher FSIQs, specifically over 100, may not need special education services. Likewise, students whose achievement scores remain within one standard deviation below the mean (i.e., a standard score of 85 or higher), which they consider "average," may also not require special education placement, regardless of the size of their discrepancy scores. It is such interpretations that, in part, would seem to account for the 57 students (16.6%) in this sample who showed a severe discrepancy but were not labeled when the current guidelines were applied. This conclusion is supported by the findings of significantly higher IQs and achievement scores for those students found ineligible than for those found eligible when the severe discrepancy criterion was met. Furlong and Feldman (1992) reported a similar finding with regard to IQ: they noted that higher IQ students, even those who obtained scores between 100 and 110, were less likely to be placed than lower IQ students when a severe discrepancy exists. The research by Clarizio and Phillips (1989) supported the notion that achievement levels alone can influence eligibility decisions.

It may be argued, however, that what the law really seems to be asking school districts to do in determining the need for special education is to guarantee that appropriate alternative learning experiences have been tried within the student's educational program before any further determination is made about the existence of a specific disability (Michigan Association of Learning Disabilities Educators, 1992). Pre-referral teams can address this issue through documentation of alternative intervention strategies and their duration before the referral is made. The success or failure of good intervention strategies in regular education would seem to be the most appropriate measure of a student's need for special education services. Thus, the need for special education should have been fairly well established before a referral for service is made and the student is tested.

A second reason that the IEPC might fail to find a student eligible in the presence of a severe discrepancy might be found in other factors that could explain the difference between ability and achievement but are excluded by state and federal law. The law specifically states that the IEPC shall not identify a child as having a specific learning disability if the severe discrepancy between ability and achievement is primarily the result of (a) a visual, hearing, or motor handicap, (b) mental retardation, (c) emotional disturbance, (d) autism, or (e) environmental, cultural, or economic disadvantage.
However, students who were found mentally retarded (EMI), hearing impaired (HI), visually impaired (VI), physically or otherwise health impaired (POHI), or autistically impaired (AI) were not included in this study. While environmental, cultural, or economic disadvantage could be grounds for excluding students in this study, these factors are often known in advance of referral, indicated through circumstances such as significant family trauma, frequent school changes, continued unexplained absenteeism, or a bilingual background. They would more likely be used when screening students before the referral process than after a costly evaluation has taken place. Emotional disturbance, on the other hand, may explain some of the cases in which students were not found eligible as learning disabled despite the presence of a severe discrepancy. A review of individual records, in fact, revealed that the severe discrepancy was attributed to emotional impairment in 13 cases, resulting in an emotionally impaired (EI) label rather than an LD label in each case.

In addition, the exclusionary clause for emotional disturbance may help to explain the study's finding that a significantly greater proportion of the students who demonstrated a severe discrepancy but were found ineligible were older students from the later elementary and secondary levels. It seems reasonable to suggest that IEPCs might be more comfortable finding young children in the primary grades learning disabled rather than emotionally impaired when a severe discrepancy is present, because they perceive LD to be a less harsh label. Older children, possibly with more difficult-to-manage behavior problems, might be more likely to receive the EI label even though a severe discrepancy exists.

Other than the notion just put forth, it is difficult to explain the effect for grade observed in this study. In contrast, Furlong and Feldman (1992) found that younger children were not placed as frequently as older students with similar profiles, despite meeting the severe discrepancy criterion. Further research is needed to explain why, despite the reliability concerns associated with a severe discrepancy in very young children (reduced exposure to formal education, unevenness of developmental stages, and considerable variability in standard scores among tests at young ages), students found ineligible with a severe discrepancy were more likely to be those at the later elementary or secondary levels.

Although black and white students were represented proportionately in the groups demonstrating a severe discrepancy, regardless of the method or cutoff score used, the same was not true with regard to the eligibility decision. IEPCs found significantly more white than black youngsters with a severe discrepancy ineligible. It may well be that within this small subgroup of the total sample, the effect of an IQ difference between races had a greater impact. Thus, the students with a severe discrepancy in the ineligible group were more likely to be white, older, and of higher intelligence and achievement than those found eligible.

What profile emerges when students without a severe discrepancy are examined, particularly those found eligible? To what extent is failure to meet the severe discrepancy requirement overlooked in the labeling process?
2. Eligible Students

Using current guidelines, the IEPCs found 9.6 percent of the total sample eligible without evidence of a severe discrepancy, which is less than the 16 percent (Clarizio and Phillips, 1989), 33 percent (Valus, 1986), and 43 percent (Furlong, 1988) reported in other studies where simple difference scores were used. What circumstances might lead to such a decision when the law is clear about the need for a severe discrepancy between ability and achievement in one or more of the specified achievement areas? Again, the results of this study suggest that a student's ability and achievement levels may affect the way decision makers view a case and cause them to bend the rules. Just as higher IQ students were less likely to be found eligible than lower IQ students when a severe discrepancy was observed, lower IQ students were more likely to be labeled LD than higher IQ students without evidence of a severe discrepancy. The same pattern held when comparing achievement levels. It may be that IEPCs are feeling pressure from teachers, principals, and parents to provide special education services for lower ability and/or lower achieving students who traditionally have experienced limited success in regular education, where their curriculum needs (more individualized instruction, adapted materials, a slower pace) are difficult to meet. Although the data collected do not indicate whether a student had previously been evaluated, it would have been interesting to note how many of the students found eligible without a severe discrepancy had one or more previous evaluations in their school history, thereby placing additional pressure on IEPCs to provide a solution to their academic problems.

What these informal adjustments to the eligibility criteria for lower and higher IQ students have produced is an approach to qualifying students that is more in line with a regression method than with a simple difference score method for determining a severe discrepancy. In part, this provides an explanation for the similar levels of agreement (75%) in this study between the eligibility decision and the severe discrepancy criterion regardless of the guidelines used. It also points out that, despite resistance on the part of some school districts to moving to a regression approach because they believe it will qualify too many low IQ students as LD, IEPCs are already seeing the need to provide special education support services to these students and are qualifying them without evidence of a severe discrepancy.

It should be noted that there is a possibility, in some cases, that IEPCs were not using the FSIQ as recommended in their current guidelines. When a very large discrepancy between the Verbal and Performance IQs occurs, the lower score may be more indicative of the child's handicap than an accurate representation of overall ability. In such cases, the examiner might have selected the measure that most favorably reflected the child's abilities for use in the calculation of a severe discrepancy, resulting in the child meeting the criterion although a similar result would not have occurred using the FSIQ. Since large VIQ-PIQ discrepancies occur infrequently, with a 20-point difference observed in only 12 percent of the population (Kaufman, 1979), this situation would be likely to explain only a few "misclassifications," if any.

In addition to low ability and achievement, gender appeared to play a role in decisions made by the IEPCs to label a student LD without a severe discrepancy.
Unlike the negative findings reported by Clarizio and Phillips (1986) in their study of sex bias in the diagnosis of LD students, this study did find an effect for gender. Girls who did not meet the severe discrepancy criterion were more likely than boys with the same profile to be found eligible. Why this occurred is not clear. Possibly the academic problems experienced by boys can be more easily explained by other conditions when a severe discrepancy is not found, particularly at the elementary school ages where the majority of the referrals occurred. For example, boys are more likely than girls to be diagnosed with attention deficits and hyperactivity (Barkley, 1990) or to display acting-out behaviors (Clarizio, 1983), which might lead to specific interventions outside of special education, such as counseling or medication. Academic problems experienced by girls may not be as easily explained and addressed through outside interventions, leading to a reliance on special education services through an unjustified label. Unlike gender, ability, and achievement, race and grade level did not make a significant difference in eligibility decisions for those students who failed to meet the severe discrepancy criterion.

VI. CONCLUSIONS AND IMPLICATIONS

The purpose of this research was to apply two of the more highly recommended models for determining a severe discrepancy to data already collected on children referred for possible learning disability services. The influence of method upon identification rates, ability levels, and race was a major focus of the study. In addition, a comparison between the severe discrepancy criterion and the eligibility decision was made to determine whether student characteristics, particularly those which could be influenced by method, were significant factors in IEPC decisions for eligibility.

Depending on student characteristics and referral practices, school districts moving from a simple difference method to a regression method may see an increase in the number of students meeting the severe discrepancy criterion when the cutoff value is held constant. Given financial limitations, this increase could be dealt with by raising the cutoff value to identify only the most severely learning disabled, as proposed in the intermediate school district studied. Evidence has been provided that suggests moving to a regression approach would be fairer, giving all students, regardless of their ability levels, a more equitable chance of meeting the severe discrepancy criterion. No evidence of differential treatment based on race was observed using either method. Given a significant difference in mean IQ between black and white students in the study, this outcome was not anticipated and should be viewed cautiously. Rather, it was expected that racial differences would be influenced by the simple difference method in the same manner as IQ differences, as previous research has demonstrated.

A pattern was observed in IEPC eligibility decisions under the simple difference method that favored providing services to students in the lower IQ ranges without evidence of a severe discrepancy. As IQs increased beyond the mean, students were less likely to be found eligible, even though a severe discrepancy was demonstrated. The result is a selection process that may have a mitigating effect on the tendency of the simple difference method to select higher IQ students. In doing so, it resembles a regression approach.
More research is needed to understand why the IEPCs might have treated girls differently than boys, overlooking the severe discrepancy requirement in favor of an LD diagnosis in a disproportionately large number of cases. Given the decreased expectations for girls that have been observed in other areas of schooling, such as science and math, decision makers need to take care to apply the same standards for LD diagnosis across all students.

Evidence from this study suggests that school districts may support a philosophy of early identification and treatment for learning disabilities. The highest referral rates and numbers of LD diagnoses were reported at the early elementary level, in the first and second grades. Situations in which students demonstrated a severe discrepancy but were found ineligible as LD were more likely to occur with older students than with youngsters from the early grades. Given the recent movement toward a developmental, or non-graded, approach to curriculum in the primary grades, it will be interesting to observe whether future research reports a change in referral and identification patterns. Individual differences in skill development may not present the concerns they currently elicit in the early years. Rather than looking for intervention in special programs, these differences may be accommodated with developmentally appropriate curriculum within the regular education program.

In analyzing the eligibility decision, it becomes apparent that school districts could reduce high evaluation costs by using the prereferral screening process effectively. A number of the exclusionary conditions can be determined through early intervention strategies and a careful review of student records, reducing the chances that a costly evaluation is done only to decide that the severe discrepancy is the result of factors known in advance.

Finally, the higher-than-previously-reported levels of agreement between the severe discrepancy criterion and eligibility decisions are encouraging, suggesting that greater consistency in the diagnosis of LD is occurring in some school districts. There are those, however, who would not be impressed. They might argue that a severe discrepancy needs to be founded on more than simple calculation by formulas using standardized test data, and that clinical judgment needs to play an important role in the decision-making process. A focus on formulas overlooks the complexity of the decision making, which must consider not only factors within the child but also factors outside the child, such as learning environments, teaching practices, and parent support, which influence achievement. While it is recognized that many diagnostic dilemmas may be faced by those using this complex process, they must at least begin with the most statistically sound and fairest method for calculating a severe discrepancy and proceed from there.

Socioeconomic status was a desired, but unavailable, factor in this study. Future research might address the influence of SES on the calculation of a severe discrepancy, as it may well represent subgroups that differ by mean IQ. While race has been studied by a number of investigators, the work has been limited primarily to black and white student populations. Similar research with other minority groups, such as Hispanic students, may provide guidance to school districts in other parts of the country that wish to examine their identification practices.
The introduction of the Third Edition of the Wechsler Intelligence Scale for Children (WISC-III) raises questions regarding what impact the new test might have on identification rates. The WISC-III manual (Wechsler, 1991) reports research results indicating an average of five points less on the WISC-III FSIQ than on the WISC-R FSIQ when both tests are administered to the same students. The ranges of expected WISC-III FSIQ scores associated with a particular WISC-R score are relatively narrow near the middle of the IQ distribution and wider at the upper and lower score levels, with as much as eight to nine points less on the WISC-III at the extreme levels of the distribution. Given this information, one might assume a decrease in the number of students demonstrating a severe discrepancy when the WISC-III has been administered, regardless of the method used. Validation of this assumption, and knowledge regarding the extent of change in identification rates, might be of interest to future researchers.

Inclusion of the new Wechsler Individual Achievement Test (WIAT) as a measure of achievement in future studies would also be interesting, as it has been linked to the WISC-III by a common sample of over one thousand children (Psychological Corporation, 1992). Having the same standardization sample allows a more direct and precise calculation of ability-achievement differences for use in severe discrepancy determinations, by comparing students tested on intelligence and achievement at the same point in time. The standardization sample also provides a school district with some idea of how many students might be expected to show discrepancies of a given size, particularly if the characteristics of the local population do not differ dramatically from those of the standardization sample. Since school districts often have a set percentage of students in mind when planning for special education services, the WIAT may prove to be a very useful instrument in helping administrators anticipate and control their need for special education services.
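The expected direction of the WISC-III effect noted above can be illustrated with the same simplified discrepancy formulas used in the earlier sketch. The five-point shift is the average figure from the WISC-III manual; the correlation of .60 remains an illustrative value, and the achievement test and its norms are assumed to be unchanged.

```python
# Rough illustration: a five-point drop in FSIQ shrinks the ability-achievement
# discrepancy under either method, so fewer students would clear a fixed cutoff.
R = 0.60   # hypothetical IQ-achievement correlation

def simple_difference(iq, ach):
    return iq - ach

def regression_discrepancy(iq, ach, r=R):
    return (100 + r * (iq - 100)) - ach

ach = 80
for label, iq in (("WISC-R FSIQ = 100", 100), ("WISC-III FSIQ = 95", 95)):
    print(label,
          simple_difference(iq, ach),                  # 20 -> 15
          round(regression_discrepancy(iq, ach), 1))   # 20.0 -> 17.0
# The simple difference shrinks by the full five points, the regression
# discrepancy by r x 5 = 3 points.
```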
APPENDIX A

Data Collection Form

SEVERE DISCREPANCY STUDY

District ____    Building ____    Student Code Number ____
Age at evaluation ____    Gender ____    Grade ____    Years Retained ____
Ethnicity: Caucasian / African American / Hispanic / Native American / Asian / Other
Resides with ____    SES ____
Reason for Referral ____

Intellectual Assessment: WISC-R    FSIQ ____    VIQ ____    PIQ ____
Other Intelligence Tests / Scores ____

Achievement Assessment: WJ-R (norms based on: age / grade)
  Broad Reading: Letter-Word Identification ____    Passage Comprehension ____
  Broad Mathematics: Calculation ____    Applied Problems ____
  Broad Written Language: Dictation ____    Writing Sample ____
  Supplementary Tests ____
Other Achievement Tests Administered ____

Severe Discrepancy Summary (ability score, achievement score, and standard score discrepancy recorded for each area): oral expression, listening comprehension, basic reading, reading comprehension, written expression, math calculation, math reasoning

IEPC Determination (eligible / ineligible, by area): oral expression, listening comprehension, basic reading, reading comprehension, written expression, math calculation, math reasoning

APPENDIX B

Database

[Student-level data listing for the 344 referred students: case number, district, and demographic fields; WISC-R FSIQ, VIQ, and PIQ; and WJ-R achievement standard scores.]
3231 1:111 1931 921 921 941 991 791 91 3241 991 115 1951 1991 911 1191 961 195 3251 731 991 191: 991 951 971 74' 1991 3261 92: 861 571 691 671 491 76 3271 991 199: 641 641 641 761 621 631 3291 1941 1921 791 931 99 991 64: a1 3291 1991 1111 991 971 1991 991 911 3391 1261 1191 1911 1141 1921 122: 941 331 961 911 91 1 71 1 52: 991 971 77 332:; 1991 1991 911 941 931 1 911 71 ; 99 3331 951 751 761 931 791 791 71 1 79 334 1 991 991 921 76' 961 91 1 991 335 11 41 1171 91 1 931 991 971 991 336 991 911 971 951 1191 1991 961 961 337' 1941 1921 991 991 941 961 99' 92 3391 199: 193. 951 951 951 931 79: 199 3391 951 961 911 961 991 1171 991 931 3491 1941 123? 93: 193: 97- 191 - 92: 941 341 941 97: 991 991 751 991 79; 31 342 961 941 921 961 991 991 75; 91 1 3431 1141 1171 991 1961 1 1951 991 J 142 1 1 J T K 1 L 1 M l l 1 £344 %1 923 321 901 1 08' 94' 781 77 .345 991 193: 961 961 1191 194: 62; 97 143 1 T o 1 9 1 3 1 1 1 u 1 v 1 w 1 x 1 1 1wn—9991wn-9m911-119—99 'ELIG-RC :Euc—ws TELlG—MC 39119—1119 'ELIG—any 1 2 561 ; 11 11 1: 1: 1; 1 1 3 1 1 11. 11 1 01 01 1 1 4 1 g 11 9 1 91 91 1 1 5 961 r 11 1 1 91 91 1 1 6 911 1 11 91 11 91 91 1 1 7 1 ‘1 1! 1 1 91 91 1 9 991 731 91 9 1 11 91 1 9 711 661 11 9 1 9 91 1 119 1991991 9 9 9 9 9 9 111 1 1 1 1 1 1 1 1 11—2 921 1 91 91 91 9 91 9 13 731 1 91 9 91 91 91 9 14 741 ‘ 11 1 91 11 11 1 115 1 91 11 11 11 91 1 16 771 - 9: 91 91 11 11 1 17 751 1 11 11 91 11 1: 1 1191 . 1 11 11 11 91 91 1 W9 1 11 11 11 1! 11 1 1:29 77: 1 91 91 91 91 9: 9 121 951 1 91 91 91 91 91 9 L2; 791 ' 91 91 91 91 91 9 :23 911 . 0: 91 91 91 91 9 241 . 1 91 91 11 11 91 1 25 991 921 91 91 9 91 91 9 126 741 761 11 11 1 11 91 1 1 27 1 941 741 91 91 91 11 91 1 129 691 9 11. 11 1 91 91 1 129 461 1 11 11 1 91 91 1 .391 941 791 9; 91 91 91 91 9 ; 31 1 741 791 91 91 91 91 91 9 1321 91 9: 91 91 91 9 1331 . 1 11 11 11 91 91 1 ;341 69V 691 1‘ 1: 1? 11 1 1 1351 631 751 91 11 11 11 11 1 1361 611 991 11 11 11 91 11 1 1 37 1 2 l 13 11 1! 0: 09 1 1391 751 761 11 11 11 11 12 1 1391 921 921 11 91 11 91 91 1 '491 72: 1 9: 11 11 91 91 1 41 791 931 11 11 1 91 91 1 42 1 1 11 11 1 91 91 1 43 991 731 91 9 9 91 91 9 44 ; : 91 9 9 91 91 9 145 1 951 91 91 91 91 91 91 5461 511 43: 1' 11 11 11 1' 1 1 47 1 1991 911 91 91 91 91 91 9 2491 771 791 1: 91 11 91 91 1 {491 671 72: 1' 11 11 1 91 j 144 o 1 1 9 1 T 1 1 1 x 59 911 791 1 1 1 1 1 1 9: 1 51 941 791 91 91 11 1 a 91 1 52 i 1 1 1 1 1 I 1 1 0| 1 53 691 71 1 1 1 11 1 1 1 91 1 54 : 1 91 91 91 1 11 1 55 991 91 1 91 91 91 91 91 g 56 61 731 11 11 1 91 91 j 57 77 Q1 91 91 9 91 91 91 59 931 1991 9 91 91 91 91 q 59 ! 1 9 91 91 9 91 g 69 711 541 11 11 11 1 11 11 61 931 1 91 91 91 91 91 91 62 = 1 91 91 91 9! 91 9 1 761 11 11 11 91 91 1 64 641 471 11 1 1 1 1 : 11 1 65 941 761 91 91 91 91 9! 9 66 671 761 1 : 91 1 1 91 91 1 67 791 791 91 . 1 1 1 1 91 91 1 69 941 771 91 91 91 91 91 9 1 69 991 731 91 91 91 1 1 1; 1 79 761 661 91 91 1 1 1 1 91 1 71 531 741 11 11 1 1 1 1 1 2 1 72 991 541 1 t 91 1 z 1 : 1 : 1 73 691 991 1 1 1 1 1 91 91 1 74 621661 1 1 1 1 1 1 91 1 75 971 991 91 9 1 91 91 1 76 1 1 1 1 1 91 1 1 11 1 77 751 651 91 9 1 1 1 9 1 79 921 761 91 9 9 91 9 j 79 991 791 11 91 1 1 1 1 91 9| 1 99 1991 921 91 91 11 1 1 91 1 {91 92: 1 z 11 1 ; 91 9: 1 172 ~ 1 1 1 1 11 91 91 1 193 1 1 1 1 11 9; 91 1 94 191 1 991 91 91 91 91 91 9 85 1 I 1 1 1 l 1 1 01 01 1 96 741 761 1: 1 1 1 1 91 91 1 97 1 ' 9; 91 91 91 91 9 99 1 1 1 91 91 91 91 91 99 1 1 1 1 1 1 91 91 1 99 721 1 1 1 91 1 91 91 1 91 1 1 91 91 9 91 91 92 691 961 9 9 91 91 91 91 93 . ‘t 1 1 91 91 91 1 94 1 751 11 11 1 1 1 1 1 1 1 95 92! 
LIST OF REFERENCES

Algozzine, B., & Ysseldyke, J. (1987). Questioning discrepancies: Retaking the first step 20 years later. Learning Disability Quarterly, 10, 301-312.
Barkley, R. (1990). Attention deficit hyperactivity disorder. New York: Guilford Press.
Bennett, D., & Clarizio, H. (1988). A comparison of methods for calculating a severe discrepancy. Journal of School Psychology, 26, 359-369.
Braden, J. (1987). A comparison of regression and standard score discrepancy methods for learning disabilities identification: Effects on racial representation. Journal of School Psychology, 25, 23-29.
Braden, J., & Weiss, L. (1988). Effects of simple difference versus regression discrepancy methods: An empirical study. Journal of School Psychology, 26, 133-142.
Chalfant, J. (1989). Diagnostic criteria for entry and exit from service: A national problem. In L. B. Silver (Ed.), The assessment of learning disabilities: Preschool through adulthood. Boston: College Hill Press.
Chinn, P., & Hughes, S. (1987). Representation of minority students in special education classes. Remedial and Special Education, 8(4), 41-46.
Clarizio, H. (1983). Behavior disorders in children. New York: Harper and Row.
Clarizio, H., & Bennett, D. (1987). Diagnostic utility of the K-ABC and WISC-R/PIAT in determining severe discrepancy. Psychology in the Schools, 24, 309-315.
Clarizio, H., & Phillips, S. (1986). Sex bias in the diagnosis of learning disabled students. Psychology in the Schools, 23, 44-52.
Clarizio, H., & Phillips, S. (1989). Defining severe discrepancy in the diagnosis of learning disabilities: A comparison of methods. Journal of School Psychology, 27, 383-391.
Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37-46.
Cone, T., & Wilson, L. (1981). Quantifying a severe discrepancy: A critical analysis. Learning Disability Quarterly, 4, 359-371.
Council for Learning Disabilities, Board of Trustees. (1986). The CLD position statement. Learning Disability Quarterly, 9, 245.
Evans, L. (1992). A comparison of the impact of regression and simple difference discrepancy models on identification rates. Journal of School Psychology, 30, 17-29.
Frankenberger, W., & Harper, J. (1987). States' criteria and procedures for identifying learning disabled children: A comparison of 1981/82 and 1985/86 guidelines. Journal of Learning Disabilities, 20, 118-121.
Furlong, M. (1988). An examination of an implementation of the simple difference score distribution model in learning disability identification. Psychology in the Schools, 25, 132-143.
Furlong, M., & Feldman, M. (1992). Can ability-achievement regression to the mean account for MDT discretionary decisions? Psychology in the Schools, 29, 205-212.
Hammill, D. (1990). On defining learning disabilities: An emerging consensus. Journal of Learning Disabilities, 23, 74-84.
Hanna, G., Dyck, N., & Holen, M. (1979). Objective analysis of aptitude-achievement discrepancies in LD classification. Learning Disability Quarterly, 2, 32-38.
Huebner, E. (1991). Bias in special education decisions: The contribution of analogue research. School Psychology Quarterly, 6, 50-65.
Jensen, A., & Reynolds, C. (1982). Race, social class and ability patterns on the WISC-R. Personality and Individual Differences, 3, 423-438.
Johnson, D., & Myklebust, H. (1967). Learning disabilities: Educational principles and practices. New York: Grune and Stratton.
Kaufman, A. (1979). Intelligent testing with the WISC-R. New York: John Wiley and Sons.
Kavale, K., & Reese, J. (1991). Teacher beliefs and perceptions about learning disabilities: A survey of Iowa practitioners. Learning Disability Quarterly, 14(2), 141-160.
Macmann, G., & Barnett, D. (1985). Discrepancy score analysis: A computer simulation of classification stability. Journal of Psychoeducational Assessment, 3, 363-375.
Macmann, G., Barnett, D., Lombard, T., Belton-Kocher, E., & Sharpe, M. (1989). On the actuarial classification of children: Fundamental studies of classification agreement. Journal of Special Education, 23, 127-149.
McGrew, K., Werder, J., & Woodcock, R. (1991). WJ-R technical manual. Allen, TX: DLM.
McLeskey, J. (1992). Students with learning disabilities at primary, intermediate, and secondary grade levels: Identification and characteristics. Learning Disability Quarterly, 15, 13-19.
McLeskey, J., & Grizzle, K. (1992). Grade retention of students with learning disabilities. Exceptional Children, 58, 548-554.
McLeskey, J., Waldron, N., & Wornhoff, S. (1990). Factors influencing the identification of black and white students with learning disabilities. Journal of Learning Disabilities, 23, 362-366.
Mercer, C., King-Sears, P., & Mercer, A. (1990). Learning disability definitions and criteria used by state education departments. Learning Disability Quarterly, 13, 141-152.
Michigan Association of Learning Disabilities Educators. (1992). Guidelines for the identification and education of students with learning disabilities. Lansing: Livingston Education Service Agency CSPD State Initiated Project.
The Psychological Corporation. (1992). Wechsler individual achievement test manual. San Antonio, TX: Author.
Reynolds, C. (1981). The fallacy of "two years below grade level for age" as a diagnostic criterion for reading disorders. Journal of School Psychology, 19(4), 350-358.
Reynolds, C. (1985). Critical measurement issues in learning disabilities. Journal of Special Education, 18, 451-475.
Reynolds, C. (1990). Conceptual and technical problems in learning disability diagnosis. In C. R. Reynolds & R. W. Kamphaus (Eds.), Handbook of psychological and educational assessment of children: Intelligence and achievement (pp. 571-592). New York: Guilford Press.
Reynolds, C., & Stanton, H. (1988). Discrepancy determinator I. Bensalem, PA: TRAIN.
Sattler, J. (1988). Assessment of children. San Diego, CA: Jerome M. Sattler, Publisher.
Thorndike, R. (1963). The concepts of over- and under-achievement. New York: Teachers College, Columbia University.
Tucker, J. (1980). Ethnic proportions in classes for the learning disabled: Issues in nonbiased assessment. Journal of Special Education, 14, 79-91.
U.S. Department of Education, Office of Special Education and Rehabilitative Services. (1992). Fourteenth annual report to Congress on the implementation of the Individuals with Disabilities Education Act. Washington, DC: U.S. Government Printing Office.
Valus, A. (1986). Achievement-potential discrepancy status of students in LD programs. Learning Disability Quarterly, 9, 200-205.
Wechsler, D. (1991). The Wechsler intelligence scale for children - Third edition manual. San Antonio, TX: The Psychological Corporation.
Wilson, L., & Cone, T. (1984). The regression equation method of determining academic discrepancy. Journal of School Psychology, 22, 95-110.
Woodcock, R. (1978). Development and standardization of the Woodcock-Johnson psycho-educational battery. Boston: Teaching Resources Corporation.
Ysseldyke, J., Thurlow, M., Graden, J., Wesson, C., Algozzine, B., & Deno, S. (1983). Generalizations from five years of research on assessment and decision-making: The University of Minnesota Institute. Exceptional Education Quarterly, 3(1), 75-91.