This is to certify that the dissertation entitled

AN EXAMINATION OF TOURISM EDUCATIONAL PUBLICATIONS AND TOURISM BUSINESSES: UNDERSTANDING THE IMPORTANCE OF READABILITY

presented by Robert I. Ward Jr. has been accepted towards fulfillment of the requirements for the Ph.D. degree in Park, Recreation and Tourism Resources.

Major Professor's Signature
Date

MSU is an Affirmative Action/Equal Opportunity Institution

AN EXAMINATION OF TOURISM EDUCATIONAL PUBLICATIONS AND TOURISM BUSINESSES: UNDERSTANDING THE IMPORTANCE OF READABILITY

By Robert I. Ward Jr.

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Park, Recreation and Tourism Resources

2004

ABSTRACT

AN EXAMINATION OF TOURISM EDUCATIONAL PUBLICATIONS AND TOURISM BUSINESSES: UNDERSTANDING THE IMPORTANCE OF READABILITY

By Robert I. Ward Jr.

The purpose of this study was to assess the perception that educational materials of the Cooperative Extension Service are difficult to read. The Flesch Reading Ease formula was used to measure readability levels of 130 bulletins used in tourism industry education. Findings indicated that the mean readability level approximated the level of articles found in academic journals. About ninety percent of the bulletins fell within the readability range of materials that are typically encountered by readers ranging from sixth grade through some college completed. Binkley's Interactionist Theory was used as a model to develop a methodology using a criterion-referenced instrument for assessing the reading comprehension abilities of the intended readers of these materials. From a small demonstration sample, the intended readers of these tourism bulletins were found to be capable of independently reasoning with material written at least as difficult as the readability level of most CES bulletins currently in print. Further studies are needed to yield statistically significant and more precise statistics on the reading comprehension abilities of the intended audiences of these materials. Authors are encouraged to use readability formulas to calibrate reading levels of educational materials with the reading comprehension abilities of the intended audiences.

Copyright by ROBERT I. WARD JR. 2004

ACKNOWLEDGMENTS

I am grateful to my faculty advisor and dissertation committee chair, Dr. Donald Holecek, for the opportunities, financial support, and years of guidance and mentoring that made this accomplishment possible. This accomplishment would not have been possible without the support of my family and friends. I will be eternally grateful for the unwavering encouragement and support from my soul-mate and wife, Nancy.

TABLE OF CONTENTS

LIST OF TABLES .... vii
LIST OF FIGURES .... viii
CHAPTER 1: THE PROBLEM TO BE INVESTIGATED .... 1
Introduction to the Study .... 1
Statement of the Problem .... 1
The Purposes of This Study .... 2
The Significance of This Study .... 2
Research Questions .... 5
Research Hypotheses .... 5
Assumptions .... 7
Definitions of Terms .... 7
Limitations of the Study .... 13

CHAPTER 2: PRIOR RESEARCH .... 15
Studies on Factors in Reading Comprehension .... 15
Communications, Language, Literacy, Reading, and Writing .... 17
Studies of Functional Literacy .... 19
The Learning Environment -- Attributes of Distance Learning .... 21
Readability Research -- Attributes of Text .... 22
Reading Comprehension -- Attributes of Readers .... 26
Learning -- Reading Comprehension .... 31

CHAPTER 3: METHODOLOGY .... 36
Research Design Overview .... 36
Material Readability Analysis .... 38
The Selection of the Flesch Reading Ease Readability Formula .... 47
Assessing the Reading Comprehension Ability of the Intended Readers .... 48
The Selection of the Advanced Degrees of Reading Power Test .... 49
Scoring the Reading Comprehension Instrument .... 53
Sampling Strategy for Selecting the Intended Readers .... 54
Characteristics of the Reading Comprehension Test Participants .... 56
Administration of the Reading Comprehension Instrument .... 56
Equating Material Readability and Reading Comprehension Ability .... 57
Validity .... 58
Reliability .... 60
Data Organization and Variables in this Study .... 61
Statistical Analysis .... 63

CHAPTER 4: FINDINGS AND DISCUSSION .... 65
The Readability of CES Tourism Bulletins .... 66
Hypothesis Number 1 .... 70
Text Readability versus Document Length .... 71
Text Readability versus Year of Publication .... 71
Text Readability versus Authoring Source .... 81
Discussion: Findings on the Readability of CES Tourism Bulletins .... 91
The Reading Comprehension Abilities of Intended Readers .... 94
Scoring Results, Analysis, and Discussion of Findings .... 95
Reading Comprehension Abilities versus Formal Education .... 101
Comparing Readability versus Reader's Comprehension Abilities .... 102
Hypothesis Number 2 .... 103

CHAPTER 5: CONCLUSIONS, IMPLICATIONS, AND RECOMMENDATIONS .... 105
The Purposes of this Study .... 105
Hypotheses .... 105
Methodology .... 106
Summary of Findings .... 107
Conclusions .... 108
Implications of the Study .... 109
Recommendations for Future Studies .... 114

APPENDICES .... 118
A. The National Extension Tourism Database - Printed Version .... 119
B. The National Extension Tourism Database - Electronic Version .... 121
C. An Example of a CES Tourism Bulletin Downloaded into Microsoft Word .... 123
D. Procedure for Calculation of Readability Statistics .... 125
E. An Example of Calculating Readability Level of Text Material .... 127
F. Calculating Readability Using the Flesch Reading Ease Formula .... 130
G. Participant Profile Form .... 132

BIBLIOGRAPHY .... 134

LIST OF TABLES

Table 1 - Factors That Influence Reading Comprehension .... 16
Table 2 - Rationale or Theoretical Basis for This Study .... 18
Table 3 - A Comparison of Research Studies .... 33
Table 4 - Readability Scores of On-line CES Tourism Bulletins .... 40
Table 5 - Strength of Relationships .... 47
Table 6 - Scorecard for Selection of the Reading Comprehension Instrument .... 51
Table 7 - An Approximation Table for Equating Reading Ease Scores and DRP Units .... 55
Table 8 - Statistical Analysis Summary .... 64
Table 9 - CES Tourism Bulletins: Flesch Reading Ease Scores by Year of Publication .... 75
Table 10 - On-line Full-text CES Tourism Bulletins: Number of Bulletins by Year of Publication .... 80
Table 11 - CES Tourism Bulletins: Flesch Reading Ease Scores by Authoring Source .... 85
Table 12 - Authoring Sources of On-line Full-text CES Tourism Bulletins .... 89
Table 13 - Participants' Item Responses, Scores, Education Level, Gender .... 96
Table 14 - Participants' Highest Level of Formal Education Attained .... 101

LIST OF FIGURES

Figure 1 - Research Design Overview .... 37
Figure 2 - CES Tourism Bulletins: A Comparison of Grade Level Readability Calculations .... 67
Figure 3 - Readability of CES Tourism Bulletins by Flesch Reading Ease Score .... 69
Figure 4 - Length of On-line Full-text CES Tourism Bulletins in Words, n=130 .... 72
Figure 5 - Length of On-line Full-text CES Tourism Bulletins, n=130 .... 73
Figure 6 - On-line Full-text CES Tourism Bulletins: Bulletin Length versus Readability Score .... 74
Figure 7 - On-line Full-text CES Tourism Bulletins by Year of Publication, n=130 .... 82
Figure 8 - Readability Ranges of On-line Full-text CES Tourism Bulletins by Year of Publication, n=130 .... 83
Figure 9 - Readability Ranges of On-line Full-text CES Tourism Bulletins by Authoring Source, n=130 .... 90
Figure 10 - Reading Comprehension Scores of Intended Readers of CES Tourism Bulletins .... 97
Figure 11 - Advanced DRP Scores of Intended Readers of CES Tourism Bulletins .... 99

CHAPTER 1
THE PROBLEM TO BE INVESTIGATED

Introduction to the Study

In a study by Archer (1972), Floridians were found to be avoiding publications printed by the Florida Cooperative Extension Service because the publications were hard to read and used an unfamiliar technical style.
Johnson and Verma (1990) found that material written by the Alabama Cooperative Extension Service was over two grades higher than the reading grade level of the average U.S. adult, supporting findings of earlier studies of Cooperative Extension Service publications.

The effectiveness of educational offerings of the Cooperative Extension Service is rooted in the ability to communicate effectively with the intended learner. In the most fundamental sense, educational materials written in a style or format that is perceived to be difficult to read not only jeopardize opportunities for learning but threaten the utilization of Cooperative Extension Service educational offerings. Using formal education as a model, are providers of Cooperative Extension Service non-formal education offerings evaluating the effectiveness of their materials and methods? How readable are current Cooperative Extension Service educational publications?

Statement of the Problem

Limited studies have found that Cooperative Extension Service educational publications are perceived to be difficult to read.

The Purposes of This Study

The purposes of this study were:
1. to measure the readability of one type of CES educational publications -- tourism bulletins.
2. to demonstrate a methodology for measuring the reading comprehension abilities of the intended readers of these bulletins.
3. to examine the relationship that exists between the readability of educational materials and the reading comprehension abilities of their intended readers.
4. to present a methodology for improving distance learning performance in a way that matches the readability of educational materials with the reading comprehension abilities of their intended readers.

The Significance of This Study

The non-formal educational mode of the Cooperative Extension Service is non-formal distance learning, in which the learner is apart from the instructor. Distance learning performance is ultimately dependent on a successful match of educational materials and a learner's abilities. The significance of this study lies in the examination of the construct of learning that results from reading comprehension. The findings of this study are expected to contribute to learning theory by examining the question of whether perceived readability difficulties can be simply attributed to surface features of text material or to other factors. The findings of this study are expected to contribute to the understanding of learning that occurs in distance education. The design of this study combined traditional measures of material readability with processes that measure cognitive reading comprehension in a way that is intended to improve distance learning performance.

This problem was worthy of research attention for two reasons. First, CES educational materials, individually and collectively, have rarely been gauged for readability. Readability analyses are routinely conducted on textbooks in formal education, but such analyses have rarely focused on non-formal or distance learning text materials that are the common venues of Cooperative Extension Service adult education. Currently there are over 250 published educational bulletins available for use in tourism industry education. These bulletins are authored primarily by professionals of the U.S. Department of Agriculture, Agricultural Experiment Stations, and Cooperative Extension Service at Land-Grant Universities of thirty-five states.
For this study, tourism bulletins that are available through the Cooperative Extension Service and other sources were examined. The examination of these bulletins utilized the National Extension Tourism Database, an Internet-based repository of electronic versions of these bulletins. In the current process of authoring these bulletins, the originating sources are scattered, offering wide variations in the assumptions that the authors make about their intended readers. Further, this authoring process allows wide variations in the authors' sensitivities to both the readability of materials and the understanding of their intended readers' learning expectations and assumptions. This study analyzed the readability of these educational materials.

The second reason why this problem was worth studying addressed the question of whether the readability of CES tourism bulletins matches their intended readers' ability to comprehend the content material in the bulletins. For successful learning to occur, readable text materials must be matched to their intended readers' reading abilities. In non-classroom settings, where the teacher-student relationship is often non-existent, this match is critical. In the present case of CES tourism bulletins, both printed and electronic versions exist. In both forms, the current authoring process does not provide any central "watchdog" on the front-end to monitor either material readability or appropriateness to their intended readers' reading abilities. This problem has been further compounded by the lack of "back-end" feedback on bulletin effectiveness in the non-formal distance learning education settings that are common. To make matters worse, as distance learning continues to explode in popularity, the content of Internet-available offerings quite often originates unedited from current printed versions.

The utility and effectiveness of future Cooperative Extension Service educational materials are expected to be improved by the findings of this study. The methodology used in this study is readily accessible and easily adaptable to any learning performance analysis by merely changing the text material and/or the intended readers to be sampled. Authoring sources will initially benefit from this study by gaining an awareness of both the readability levels of these educational materials and a sensitivity to the reading comprehension abilities of their intended readers. This should lead to more effective authoring of educational materials. The findings of this study are expected to benefit county Cooperative Extension Service offices by providing improved educational materials that reflect a better application of readability principles to intended readers' reading comprehension abilities. Finally, the intended readers will benefit from enhanced learning performance from educational materials that are written at a level that is more suitably matched to their reading comprehension abilities.

The findings of this study extend the utility of readability formulas, now used extensively for testing the readability of materials in formal education, to a non-formal, adult education, real-world application. This study also proposes a methodology for assessing the reading comprehension abilities of the intended audience.

Research Questions

The research questions that drove this study were:
1. What is the readability level of the most difficult CES tourism bulletin? Of the easiest? Of ninety percent of these bulletins?
2. Is there a relationship between the readability of these tourism bulletins and their other attributes, specifically authoring source, publication date, and length of bulletin?
3. Are these tourism bulletins written at an appropriate level of difficulty for their intended readers? At what levels of material readability will the intended readers comprehend at the independent learning level?
4. Is there a relationship between the intended reader's level of educational attainment and the reader's reading comprehension ability?

Research Hypotheses

H1: CES tourism bulletins are written at a readability level that is less difficult than the average academic journal or quarterly.

The result from using a readability formula to analyze surface features of an individual text document is expressed as a Flesch Reading Ease Score with a value ranging from zero, most difficult, to 100, easiest. Academic journals or quarterlies typically range from thirty to fifty (Flesch, 1949). This hypothesis predicts that the value of the arithmetic mean of all CES tourism bulletins sampled in this study will be greater than fifty.

H2: When reading text material, the intended readers of CES tourism bulletins are capable of performing at the independent comprehension level only when the readability of the text is not higher than the readability level of text typically found in high school textbooks.

The intended readers of these bulletins were operationally defined for this study as the owners and managers of businesses that cater to the tourism industry. In this study, the Advanced version of the Degrees of Reading Power reading comprehension test was administered to a sample of these intended readers. The results were expressed in terms of raw scores, the number of questions answered correctly, at a specified level of comprehension (P=.90). These scores measured the difficulty of material that readers are able to reason with successfully. Within the reading community, reading performance is traditionally reported at three levels of comprehension: the independent, instructional, and frustration levels (Betts, 1946). The probability P=.90 was an estimate of the likelihood of the reader's comprehension when independently reading material of this, or lower, difficulty. As measured on the fixed-interval DRP Scale of Text Difficulty from zero to 100, the lower the score, the easier the text (Touchstone Applied Science Associates, 2001). Commonly encountered English text ranges from about 25 to 85 DRP units. The average text difficulty for high school textbooks is about 62 DRP units (Touchstone Applied Science Associates, 2001). Arithmetically, this hypothesis would be expressed as: the mean DRP test score of the intended readers of CES tourism bulletins at which they perform at the independent level (P=.90) will be equal to or less than 62 DRP units. To comprehend reading material that is more difficult, these readers would require instructional assistance.
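Both hypotheses reduce to simple numerical decision rules. The minimal Python sketch below is not part of the dissertation; it merely restates the two rules from the figures given above (a mean Flesch Reading Ease score above fifty would support H1, and a mean independent-level DRP score of 62 units or less would support H2). The example score lists are hypothetical values for illustration only, not study data.

```python
# Decision rules for H1 and H2, restated from the hypotheses above.
# The score lists in the example are hypothetical, not data from the study.

FLESCH_JOURNAL_UPPER_BOUND = 50   # academic journals typically score 30-50 (Flesch, 1949)
HIGH_SCHOOL_TEXT_DRP = 62         # average high school textbook difficulty, in DRP units

def h1_supported(flesch_scores: list[float]) -> bool:
    """H1: the mean Flesch Reading Ease score of the sampled bulletins exceeds fifty,
    i.e. the bulletins read more easily than a typical academic journal."""
    return sum(flesch_scores) / len(flesch_scores) > FLESCH_JOURNAL_UPPER_BOUND

def h2_supported(independent_drp_scores: list[float]) -> bool:
    """H2: the mean DRP score at which intended readers comprehend independently
    (P=.90) is equal to or less than the 62-unit level of high school textbooks."""
    return sum(independent_drp_scores) / len(independent_drp_scores) <= HIGH_SCHOOL_TEXT_DRP

if __name__ == "__main__":
    bulletin_scores = [48.0, 55.0, 62.0, 41.0, 58.0]      # hypothetical Flesch scores
    reader_scores = [57.0, 63.0, 59.5, 61.0, 60.0]        # hypothetical DRP scores at P=.90
    print("H1 supported:", h1_supported(bulletin_scores))  # mean 52.8 -> True
    print("H2 supported:", h2_supported(reader_scores))    # mean 60.1 -> True
```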
Assumptions

It is assumed that, in a distance learning situation, an independent level of learner performance is the optimal educational objective. The independent level was operationally defined for this study as the ability of the learner to read material with a ninety percent understanding without any instructional assistance. This generally accepted passage performance criterion was stated by Bormuth (1971), citing reading instruction research on informal reading inventory procedures. Predictions of a learner's probability of performing at this level can be based in part on assessing the readability of the instructional materials by measuring surface features of the text, such as counts of words per sentence. Learning performance can be improved by authoring with a sense of the intended reader's reading abilities.

Definitions of Terms

Academic journals or quarterlies: Academic publications and magazines written for professionals in education. Examples would include Adult Education Quarterly, The Journal of Higher Education, and Tourism and Hospitality Research.

A-DRP Units: A variant of Degrees of Reading Power units, representing the converted scores from the Advanced version of the Degrees of Reading Power test. See DRP units.

Adjunct comprehension aids: Textual material features, such as statements of objectives or study questions, located within, at the beginning of, or at the end of the text to enhance reading comprehension.

Bormuth Grade Level: An index that determines a readability grade level based on characters per word and words per sentence.

Bormuth readability formula: The Bormuth readability formula for passages, calculated as R = .886593 - .083640(LET/W) + .161911(DLL/W)^3 - .021401(W/SEN) + .000577(W/SEN)^2 - .000005(W/SEN)^3, where R = grade level; LET = letters in passage; W = words in passage; DLL = Dale Long List words in passage; and SEN = sentences in passage.

CES tourism bulletins: Educational materials, in either print or electronic media, that are authored by professionals of the U.S. Department of Agriculture, Agricultural Experiment Stations, and Cooperative Extension Service at Land-Grant Universities of thirty-five states for educational use in the tourism and hospitality industry.

Coleman-Liau Grade Level: An index that determines a readability grade level based on characters per word and words per sentence.

Criterion-referenced test: Performance of a test-taker as measured against a criterion, for example, the readability score of materials, rather than against the performance of other test-takers.

Distance education: The family of instructional methods in which the teaching behaviors are executed apart from the learning behaviors.

Distance learning: The desired outcome of distance education programs; learning at a distance.

Distance learning performance: A numerical value, in terms of DRP units, that approximates the most difficult material with which a reader can independently reason. Synonymous with Independent level.

DRP units: Degrees of Reading Power units. A measure that, depending on context, expresses (1) the readability of textual material, expressed as a numerical value on a scale from zero to 100 where the higher the value, the more difficult (less readable) the material, or (2) the reading comprehension ability of an individual reader, where the value, when accompanied by a level of comprehension, expresses the most difficult material with which an individual can reason.

DRP Scale of Text Difficulty: An index of text complexity. From a Degrees of Reading Power test, the raw score value is converted to DRP units with an accompanying level of comprehension that expresses simultaneously the reading ability of a reader and the difficulty of the text.
The scores, depicted on a fixed-interval scale ranging from zero, easy, to 100, difficult, provide an estimate of the difficulty level of reading materials the reader can comprehend at the independent, instructional, and frustration levels. Most English text ranges between 25 and 85 DRP units on this scale.

Flesch Reading Ease Score: An index that computes readability based on the average number of syllables per word and the average number of words per sentence. Scores range from zero to 100. The average writing score is approximately 60 to 70. The higher the score, the greater the number of people who can readily understand the document.

Flesch-Kincaid Grade Level: An index that computes readability based on the average number of syllables per word and the average number of words per sentence. The score indicates a grade-school level. For example, a score of 8.0 means that an eighth grader would understand the document. Standard writing approximately equates to the seventh-to-eighth-grade level.

Frustration level: A learner is performing at the frustration level if the material is too difficult to understand, even with instructional assistance. Also see Independent level and Instructional level.

Independent level: A learner is performing at the independent level if the learner is able to read material with a high degree of understanding without any instructional assistance. Synonymous with Distance learning performance.

Instructional level: A learner is performing at the instructional level if instructional assistance is needed for the learner to understand. Also see Frustration level and Independent level.

Learning: All of what we come to know, consciously and unconsciously, by whatever means. A part of that will have come to us through education, that process which is frequently, but not always, characterized by the interaction of a teacher and a student.

Learning performance level: A measure of reading comprehension expressed as a value from zero to 100. An indication in DRP units of the approximate readability level of material that a reader is able to comprehend. Also see Independent level, Instructional level, and Frustration level.

Material readability: See Text material readability.

Non-formal education: Any organized educational activity outside the established formal system, whether operating separately or as an important feature of some broader activity, that is intended to serve identifiable learning clienteles and learning objectives.

Norm-referenced test: A test in which a test-taker's performance is interpreted in relation to the performance of other test-takers.

P-value: The percentage of comprehension indicating how well a student can reason with textual material.

Power test: A test with items arranged in order of difficulty and administered without time limits. In contrast, speed tests are timed.

Prior subject knowledge: A measure of an individual's mastery of a subject.

Readability: The sum of all those elements within a given piece of textual material that affect the success that a group of readers has with it. The success is the extent to which readers understand it, read it at an optimum speed, and find it interesting.

Readability grade level: The showing an individual would make if he took a graded reading test, loosely equivalent to the educational grade.

Readability level: Similar to Readability grade level but expressed not as an academic grade level but as a point value on either the Flesch Reading Ease Scale from zero to 100 or the DRP Scale of Text Difficulty from zero to 100.
Reading: The construction of meaning by a reader interpreting a text.

Reading comprehension ability: The process of using the cues provided by the author plus one's prior knowledge to infer the author's intended meaning.

Surface features of text: Counts of the numbers and frequencies of characters, words, sentences, paragraphs, and word familiarity.

Text material readability: Calculated from the frequency of occurrence of surface features of text, such as words per sentence and sentences per paragraph, and commonly encountered words. See Flesch Reading Ease Score; DRP units; Coleman-Liau Grade Level; Bormuth Grade Level; and Flesch-Kincaid Grade Level.

Limitations of the Study

1. Only publications that were available and viewable as full-text versions on the National Extension Tourism Database Internet website were examined in this study. An oversampling strategy calculated readability scores and publication characteristics for all 130 of these bulletins. The remaining bulletins, approximately 120, that were available only as printed versions were not analyzed in this study. The readability analysis of on-line full-text versions is believed to be representative of the readability of all CES tourism bulletins.

2. Costs. Budget limitations limited the size of the reader sample to a target size of 25 individuals, due to the cost of obtaining the reading comprehension DRP tests and scoring materials.

CHAPTER 2
PRIOR RESEARCH

Studies on Factors in Reading Comprehension

Historically, the theoretical basis for understanding reading comprehension has evolved from analyses of surface features of text, for example, counts of words and sentences, to observations of readers' behavior and understanding cognitive processes. As this theoretical basis has evolved, technological advances have opened new approaches in how we learn. Computer-assisted instruction and the growth of distance learning have enabled learning environments where behavior is typically not observable and where unique complexities have been added to the understanding of cognitive processes. Theories and issues surrounding the use of technology, delivery methods, materials, and the unique attributes of distance learners are pertinent to understanding this learning environment.

A "short list" of factors that have been identified by various researchers as having an influence on reading comprehension is depicted in Table 1. From this list, factors were selected that were most relevant to this study. After interpreting the reading comprehension factors shown in Table 1, and allowing for a certain amount of redundancy among the different sources, the factors were reduced in this grouping schema to three primary clusters -- factors that are directly assignable to the text material, factors attributable to the reader, and/or factors in the learning environment.
Table 1 - Factors That Influence Reading Comprehension (each factor is an attribute of the text, the reader, or the learning environment; sources follow each factor)

Syntax -- Goodman & Burke; Rye
Sentence length -- Rye
Word length -- Rye
Word frequency -- Rye
Subject matter/content -- Rye; Johnston
Organization of material -- Rye
Semantics -- Goodman & Burke
Characteristics of the text -- Johnston
Column size -- Rye
Line spacing -- Rye
Type of print -- Rye
Motivation/interest -- Fry; Johnston
Purpose -- Johnston
Reader's ability and desire to read -- Rye
Angle at which book is held -- Rye
Difficulties in expression and organizing information from memory -- Johnston
Memory and retrieval requirements -- Johnston
Reasoning requirements -- Johnston
Ability to comprehend -- Binkley
Prior subject matter knowledge -- Rumelhart; Kintsch; Bormuth
Test-wiseness -- Johnston
The nature of the task -- Johnston
Social setting and interaction -- Johnston; Palloff & Pratt
Expectation and perceived task demands of the examiner -- Johnston
Production requirements -- Johnston
Physical environment -- Rye; Palloff & Pratt
Technology -- Palloff & Pratt

Sources (see Bibliography section for complete citations): Binkley, M. R. (1988); Bormuth, J. R. (1967); Fry, E. B. (1988); Johnston, P. H. (1983); Kintsch, W. (1987); Palloff, R. M. & Pratt, K. (1999); Rumelhart, D. E. (1980); Rye, J. (1982).

For example, one can see the similarity of factors that various sources attribute to the surface features of text, such as sentence length, word length, word frequency, and characteristics of text. Similarly, many attributes of readers center on the reading comprehension ability of the reader.

A summary of the theoretical basis for this study, showing the most relevant constructs, theories, studies, and associated merits and limitations, is displayed in Table 2. In this depiction, constructs are logically ordered, and each entry concludes with the key remaining unanswered needs that triggered subsequent literature research.

Communications, Language, Literacy, Reading, and Writing

Communications theory deals with the exchange of thought, either by spoken or written symbols. The language that we use for communicating serves many purposes: sharing information, understanding, literary response and expression, critical analysis and evaluation, and social interaction (New York State Learning Standards, 1997). Language components include receiving information by listening and reading, expressing information by speaking and writing, and thinking (Blankenship, Colvin, & Laminack, 1993).

Reading has been defined as "the skill of extracting meaning from print to the same degree that one extracts it from the sound stream" (Gleitman & Rozin, 1977). Writeability, which is the corollary to readability, is concerned with writing, rewriting, or editing to get those materials to the desired readability level (Fry, 1988). For each purpose of writing there is a unique structure. Purposes include descriptive writing (material that describes), expository writing (material that explains or gives directions), and argumentative or persuasive writing (material that persuades) (Gillet & Temple, 1990).
Table 2 - Rationale or Theoretical Basis for This Study

Jenkins states that "language is central to learning and a prerequisite for most human communication" and that educators need to find the appropriate style of writing, saying that "...written language tends to be more formal than spoken" (Jenkins, 1981, p. 21). Misanchuk (1994) states that: "Writing for instructional materials is qualitatively different than writing for other purposes. By virtue of a post-secondary education, most of us write in a fairly scholarly manner -- quite differently than we would speak to a class. Yet instruction frequently benefits from the use of language more like that used for speaking than for writing journal articles and books" (p. 127).

The purpose of reading is also communication, comprehending the meaning of the author (Goodman & Burke, 1980). In describing readability, Fry (1988) states that: "True readability is the goal of most authors. They want to communicate ideas to the reader. The basic idea behind readability has always been to help writers, editors, teachers, and librarians to match the difficulty of written material with the reading ability of the student. A good match improves communication and learning" (pp. 77-78).

Studies of Functional Literacy

According to Chisman (1990), adult literacy is a five-part construct, consisting of "basic skills" all adults should master. The skills are reading, writing, verbal communication in English, math, and problem-solving. The term basic skills is often used interchangeably with the term literacy in discussions of the adult education field. Functional literacy refers to mastering basic skills well enough to meet individual goals and societal demands. Chisman states that at least 20 to 30 million American adults do not have the basic skills required to function effectively in our society, and a large portion of them suffer from economic and social distress that reasonably can be related to their lack of basic skills (Chisman, 1990).
Mavrogenes (1988), and Klare and Buck (1954) state that the reading level of the average US. adult was found to be the 9th grade. Chall (1983) estimates high-school graduates’ average reading level at the 12‘“ grade. So what reading level is appropriate for effective communication? Fry (1988) provides a beginning to the answer to this question by advising authors to: “Know your audience. Write directly to someone. Select the proper level of sophistication, then try to write a little below that level” (p. 87). Other researchers disagree. Chall and Conard (1991) and Vygotsky (1978) advocate writing a bit above this level. Chall (1983) states that the difficulty of material affects the probability of successful learning: Materials of a readability level of 4th grade or higher are very different from materials with readability levels at the 3rd grade and below. Materials at grade levels 1 to 3 are quite simple in vocabulary and syntax and are usually about elementary, familiar ideas and things (Chall, Bissex, Conard, and Harris-Sharples, 1996). Indeed, it is only at about a 4th grade readability level and higher that it is possible to write ‘information—type’ reading materials and narrative of a substantial nature. (p. 74) To summarize the literature described thus far, communications theory and studies of writing and reading underscore the importance or both writing and reading for meaning, but are prone to criticisms of being rhetorical in nature when the intended readers and their attributes are not known. To be more effective, a better material-to- reader match is needed, especially for non—classroom situations. 20 The Learning Environment -- Attributes of Distance Learning In distance learning, communications are often one way. Not knowing their intended learner and their characteristics make the authoring task difficult. The distance learning environment has been defined as “. . .all deliberate and planned learning that is directed or facilitated in a structured manner by an instructor or instructors who are separated in Space and/or time from the learners so that communication between them is through print, or electronic media, or combinations of these” (Moore, 1991, p. 346). Jenkins (1981) describes some of the unique challenges of distance education: We learn only if conditions are right. Our understanding of new material depends on how interesting we find it, and on what we know already, on its presentation, and on our motivation to learn and remember it. In face-to- face education, the teacher can arrange his lessons to suit his students. He backtracks, asks questions, initiates discussions and sets exercises whenever he sees the need. The teacher at a distance has to approach his teaching quite differently. He must design materials that motivate, explain, and teach. (p. 153) Distance learners have Special attributes. Speth’s F ield-Dependence theories (as cited in Threlkeld & Brozoska, 1994) view Field-Independent persons as autonomous and detached from others. Field-Dependent learners require more structure and reinforcement. Adult learners involved in distance education are characterized by maturity, high motivation levels, and self-discipline. Adult learners are more likely to perform better in telecourses due to maturity, better self-discipline, prior completion of more college credit hours than younger students, the likelihood of having full-time careers, and paying for their own education. 
Conducting a learner analysis prior to developing a distance education course is also viewed as very important (Threlkeld and Brozoska, 1994).

Materials used in distance learning deserve special consideration. Holloway (1983) states that: "In the evaluation of materials, the type of medium is much less important than the characteristics of the medium and of the learners who use it" (p. 95). Misanchuk (1994) describes the nature of printed text material: "As noted in the list of limitations of text, interaction is difficult to achieve. Print is largely a one-way communication medium" (p. 124). Many of the benefits and limitations of printed text are also common to electronic text material.

To summarize, an effective offering in the distance learning environment requires not only attention to the considerations for a good match between reader and material, but also attention to special factors related to the presentation of the offering. Jenkins's (1981) studies addressed some of the factors attributed to learners, and Holloway (1983) downplays the importance of the type of medium in favor of the importance of the attributes of both the material and the learners. This study focused on a better understanding of the attributes of both materials and learners.

Readability Research -- Attributes of Text

Zakaluk and Samuels (1988a) state that they can trace text comprehensibility back to Greek scholars. Chall (1988) traced the beginnings of modern readability research to two sources -- studies of vocabulary control and studies of readability measurement, starting in the 1920s. Word counts by Thorndike in 1921 were the basis for grade level assessments. Lively and Pressey conducted the first readability study in 1923. Initial research in readability, comprehension difficulty, included aspects of interest, legibility, and ease of comprehension. Vocabulary studies became strong predictors of text difficulty.

The shift toward readable writing can be attributed to the adult education movement growth that occurred during the Depression of the 1930s. Studies of adult reading interests by librarians and educators led to work on the first adult readability formula by the educators Ralph Tyler and Edgar Dale in the mid-1930s. Dale developed word lists based on familiarity, unlike Thorndike's lists that were based on frequency of use. Gray and Leary (1935) used eighty-two factors for predicting reading comprehension performance by adults. In 1948, the Dale-Chall readability formula was developed using a list of about three thousand words, and it has stood as a simple yet accurate measure of readability (Chall & Conard, 1991).

Flesch published a readability formula in 1948 that measured just two elements, reading ease and human interest. Flesch's first readability formula became popular and greatly increased readership of mass communications. Flesch (1949) popularized readability principles in his book The Art of Plain Talk. Various readability formulas subsequently emerged as predictors of the difficulty of written materials (Chall & Conard, 1991). The Flesch formula is now the most widely used of all readability formulas, followed by Dale and Chall's formula (Chall & Conard, 1991). More recent studies on assessing both students' reading abilities and text readability led to the development of yet another readability formula by Bormuth in 1971.

Readability formulas have both limitations and critics.
As a writer's tool for text analysis, formulas are commonly used without the presence of the target audience (Kaestle, 1991). Formulas are useful as tools to measure the readability level of written text, but they can neither measure nor replace writing style (Klare & Buck, 1954). While formulas can measure text readability based on surface attributes, formulas cannot measure reading comprehension (Huggins & Adams, 1980). Abuses include poorer writing when readability formulas are used in the authoring process as a device to obtain lower readability scores (Chall, 1988). Authors of some of the readability studies and formulas include Lorge (1939), Washburne and Morphett (1938), Singer (1975), Danielson and Bryan (1963), Fry (1963, 1977), McLaughlin (1969), and Bormuth (1967). Uses of readability formulas include studies of the readability of newsletters by Balachandran (1997); of health education materials by Barteaux (1990), Duffy (1989), Schmitz (1994), and Dusch (1993); of financial reports by Bly (1994) and Yundt (1985); of vocational materials by Welch (1981) and Vick (1985); and of occupational materials by Thornton (1981). In a break from attempts to quantitatively score text readability, Chall, Bissex, Conard, and Harris-Sharples (1996) advocated qualitative assessment of text readability, citing the inability of classic readability formulas to measure cognitive aspects.

A search of literature reveals that CES educational materials have rarely been assessed for readability. One study by Nehiley and Williams (1980) found that CES educational materials were written at readability levels higher than those of their intended audiences. Johnson and Verma (1990) reached the same conclusion. Risdon (1990) suggested that Extension staff could apply learning theory to develop more effective written materials. Another study by Liptak (1991) found that using commercially available computer software aided readability in writing for Extension audiences. Achterberg, VanHorn, Maretzke, Matheson, and Sylvester (1994) assessed readability grade levels for nutrition education bulletins and concluded that reducing content is more effective than rewriting down to low-literate audiences. Boone and Smith (1996) concluded that there had been limited research on cognition and readability of CES publications. Simeral (2001) lamented how the efficiency of technology in the electronic world facilitates CES educational program delivery at the expense of benefits formerly realized through personal contact, stating that "...communication technology has also reduced the amount of face-to-face, personal contact with and among clientele, which used to be a hallmark of Extension work."

To conclude, atheoretical thinking and research have resulted in an evolving description of the readability of text, and therefore the probable degree of reading comprehension, based on the surface features of text material. The flaw in the use of these formulas as a tool to improve learning comprehension is the absence of factors that absolutely describe the abilities of the reader. Given this limitation, the best gauge for estimating the difficulty of material, based on surface features of the text, is a measure that expresses readability not in terms of grade levels that are often misinterpreted, but in terms of a theoretical scale. The Flesch Reading Ease formula provides such a measure. What is needed in order to improve learning comprehension is a way to measure the abilities of the readers, particularly the ability to comprehend reading material of known and varying readability.
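The study's own calculation procedure is detailed in Appendices D through F and is not reproduced here. As a purely illustrative sketch, the Python function below applies the standard Flesch Reading Ease formula (206.835, minus 1.015 times the average number of words per sentence, minus 84.6 times the average number of syllables per word) to a passage of text. The vowel-group syllable counter is a rough assumption for demonstration; word-processing tools such as the one used in the study may count syllables and sentences differently, so scores can vary somewhat.

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, with a small silent-'e' adjustment.
    This is an illustrative assumption, not the counting rule used in the study."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    """Standard Flesch Reading Ease score: higher values indicate easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        raise ValueError("text must contain at least one sentence and one word")
    words_per_sentence = len(words) / len(sentences)
    syllables_per_word = sum(count_syllables(w) for w in words) / len(words)
    return 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word

if __name__ == "__main__":
    sample = ("Tourism businesses depend on repeat visitors. "
              "Clear, readable bulletins help owners apply new ideas quickly.")
    print(round(flesch_reading_ease(sample), 1))
```

Under Hypothesis 1, a mean score above fifty across the sampled bulletins would indicate that they read more easily than a typical academic journal, which scores between thirty and fifty on this scale.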
Reading Comprehension -- Attributes of Readers

Within the body of readability research, an evolutionary change occurred in the mid-fifties as emphasis shifted from understanding the effects of text material attributes to understanding the readers of the material and the processes that occur. Klare and Buck (1954) stated that:

The place for the writer to begin his study, as we have implied, is with the reader. Without knowing the reader and his interests the writer may well end up talking to himself -- or to nobody. (p. 18)

Klare and Buck went on to cite reasons why writers who show little interest in matching written texts with their readers fail to produce readable material. The reasons include:
- failure to recognize the need for any concern
- lack of knowledge of how to effectively write to reach readers
- reluctance to condescend to readers' levels
- too much bother to try to meet their readers
- resistance to scientific knowledge that would destroy the art of writing (Klare & Buck, 1954).

Goodman (1968) proposed that reading is a predictive process. Even through the late sixties, many researchers still advocated the predictive theory in understanding reading. Many researchers championed the shift from theory based on predictions and observable behavior to work on cognitive approaches in order to understand human information processing. Text factors alone were no longer deemed adequate predictors of readability. Since the work of Gray and Leary (1935), cited earlier, research on readability has increasingly examined the effect of reader attributes. Studies on one variable, reader interest, include work by Denbow, 1973; Entin, 1980; and Entin and Klare, 1985.

In an analysis of the attributes of readers, Iser (1978) describes various readers. The "ideal" reader would share virtually all of the author's knowledge and instincts. The "informed" reader is quite linguistically competent and strives to use all of her or his knowledge to interpret texts. The "intended" reader is the reader the author had in mind, which might be indicated in the text in various ways.

Work by Kintsch (1979), Kintsch and Vipond (1979), and Miller and Kintsch (1980) has attempted to include cognitive variables such as short-term memory searches and buffer size. Thompson, Simonson, and Hargrave (1992) describe cognitive theory in this way:

Cognitive theory focuses on internal processes in the learner in contrast to behaviorism that focuses on outward observable behavior. Cognitive theory explores 'the way information is received, organized and retained and used by the brain'. (p. 10)

By the early seventies, Williams (1970) discussed the then-emerging shift in readability research to cognitive processes:

This recent emphasis on cognitive processes has led to a decline in interest in questions that are not clearly related to cognition. Interest in writing, for example, is minimal, even though writing is itself a crucial skill that is intimately related to reading, and even though many beginning reading programs emphasize 'kinesthetic' methods of one sort or another. (p. 273)

Studies of cognitive processes of readers by Williams (1977) and Levin and Kaplan (1970) describe readers as samplers, constantly skimming and predicting.
In the late seventies, Rumelhart (1977) proposed the Schema-Theoretic Model to explain reading comprehension. This theory views the process of reading as the process of choosing and verifying conceptual schemata for the text using both bottom-up (from the text) and top-down (from the reader) processing of the text. Rumelhart claims that the skilled reader uses both simultaneously.

One important cognitive variable that has received noteworthy research attention is the attribute of the reader's prior subject matter knowledge. Kintsch (1979), Kintsch and Vipond (1979), and Meyer (1977) theorize that a propositional structure is formed by readers for storing knowledge. Reber and Scarborough (1977) state that:

The cognitive processes underlying the reading skill of the fluent adult reader probably differ substantially from those of the beginning reader. Kintsch (1974) theorized that 'meaningful material is memorially represented by a propositional structure'. Kintsch shows how fluent readers extract information from printed text by building up propositional hierarchies. (p. xi)

In the propositional structure theory, basic units of meaning from the text are used to progressively build an enlarging text structure. Kintsch (1987) later proposed that readability is not a property of a text, but a result of a reader-text interaction. According to Chall and Conard (1991), the propositional approach of Kintsch and Vipond (1979), Miller and Kintsch (1980), and Meyer (1977) seems to hold in analyzing textbooks. There has been little research in the application of propositional theory in analyzing education material that is used in non-formal education.

Chall (1983) proposes six stages through which people progress in reading development. Much of this theory is based on Piaget's theory of stages and cognitive development and Perry's study of advanced intellectual development. In stage 5, adults ages eighteen and up, Chall proposes that past knowledge is required for full comprehension.
For our part, we summon up schemata that fit the supplied details and help us to flesh out and make sense of the text. Our schemata have stored in them an array of details that an author may not make explicit, but which help us to understand a text. We could not understand text otherwise. (p. 329)

Gillet and Temple (1990) view the use of schemata as follows:

The use of schemata can be assessed informally to determine: (a) what information readers already have about the subject and (b) how they relate new information to already-acquired information. (p. 387)

Some researchers, including Valencia and Pearson (1987), continue to advocate behavioral observation as the best possible assessment of reading.

In the process of researching the effect of various text and reader attributes on readability, studies by Funkhouser and Macoby (1971) and Klare, Mabry and Gustafson (1955) seem to indicate that as the subjects' prior knowledge of content increased, the effect of readability decreased, but the results were inconclusive due to experimental conditions. Studies by Chiesi, Spilich, and Voss (1979); Pearson, Hansen, and Gordon (1979); Stevens (1980); Taylor (1979); Dooling and Lachman (1971); Bransford and Johnson (1972); and Bransford and McCarrell (1974) have demonstrated the importance of prior subject matter knowledge on reading comprehension. Entin and Klare (1978, 1985) showed that a measured degree of prior knowledge had a clear effect. Cloze procedures were used for the assessment of prior knowledge by Sylvester (1981). Chase (1984) examined variables in text (structure) and readers (prior world knowledge and the reader's knowledge about text structure) and their effect on text readability and comprehensibility. Entin (1980) and Entin and Klare (1985) found that prior knowledge played a significant role in determining the effects of interest and readability. A clear relationship was not obtained due to problems in getting a satisfactory measure of prior knowledge. The reader's prior knowledge and understanding are among the factors seen to influence comprehensibility (Zakaluk & Samuels, 1988b). Studies by Klare (1988) seemed to indicate that as the subject's prior knowledge of content increased, the effect of readability decreased. Singh (1994) developed a new methodology that incorporated prior knowledge and subject interest and found this to be more valid than a readability formula as a measure of the readability of health materials.

To summarize, when compared to text variables, research attempts to date have had limited success in incorporating reader variables into readability formulas. Current theories about the value of an interactionist approach continue to advance research in the understanding of the cognitive aspects of reading comprehension. Further research is needed to better understand these processes. This was the focus of this study.

Learning -- Reading Comprehension

According to Chall, Bissex, Conard, and Harris-Sharples (1996), "Reading difficulty has been and continues to be one of the most important factors in reading comprehension" (p. 9). Chall and Conard (1991) make the following comment on the prediction of reading comprehension difficulty:

One can estimate text difficulty from its internal features, such as frequency of unfamiliar vocabulary, difficulty of content or concepts, complexity of syntax, organization, and cohesiveness.
Indeed, it has been possible for nearly 70 years to use text features to predict the reading comprehension difficulty of texts in terms of the reading abilities needed to read and understand them. (p. 4)

In support of a broader understanding of the factors of reading comprehension, the readability of materials can be assessed using readability formulas, but other factors affect comprehension, such as format, content, abstraction, and organization (Thompson & Davis, 1984).

Learning performance level is a measure of reading comprehension. Klare (1988) proposed that reader learning performance, or information gain, was the result of interactions between reader competence, motivation, material content, and material readability. Chall and Conard (1991) state that reading proficiency is affected by the interaction of three factors: material complexity, the reader's familiarity with the subject matter, and the kinds of questions asked. According to Johnston (1983), reading comprehension is viewed as the process of using the cues provided by the author and one's prior knowledge to infer the author's intended meaning. Johnston also states that reading comprehension must result in a change in knowledge. Rye (1982) states that: "Learning a subject involves using language in relation to that subject, and reading is an important language activity. Reading involves thinking about meaning and as such is a process that needs continual development" (p. 89).

Interactionist theory has become one of the most popular theories for examining reading comprehension. As noted by Binkley (1988), this theory states that an assessment of a text separate from an assessment of the readers' characteristics cannot give a measure of the text's comprehensibility. Binkley states that:

Reading is an interaction between an author (who has made certain assumptions about an audience) and readers (who may or may not have the assumed attributes). Therefore, an assessment of a text separate from an assessment of the readers' characteristics cannot give a measure of the text's comprehensibility. In designing an assessment procedure, the emphasis should be on gathering information about text in relation to a particular body of students. To do so, the assessment instrument should relate the salient features of the text with the readers' ability to comprehend. The instrument will thus yield information about both the reader and text. (p. 107)

Text/reader interaction studies were conducted by Simpson (1988), Pride (1987), Harris-Sharples (1983), Thompson and Davis (1984), Baxter (1992), and Binkley (1988). Johnston (1983) examined the effects of color, print size, and graphics on readability. Research by Zakaluk and Samuels (1988b) is of particular interest to this study in that the objectives of both studies lie in improving reading comprehension using combined traditional methods, readability formulas for analyzing attributes of materials, and new techniques for examining selected attributes of readers. A comparison summary of the research of Zakaluk and Samuels and the present study is shown in Table 3.

Table 3
A Comparison of Research Studies

Study purpose
  Zakaluk & Samuels (1988): Focus is on the individual student; diagnostic for individual students.
  This study: Evaluation; criterion-referenced (to material readability), not norm-referenced.

Target readers
  Zakaluk & Samuels (1988): 5th graders.
  This study: Adults; the DRP test was used to assess.

Outside-the-head factors -- 1. Text readability
  Zakaluk & Samuels (1988): Passages ranged from grades 4 through 7; graded by the Fry readability formula.
  This study: The DRP instrument uses general-subject text passages of increasing difficulty.

Outside-the-head factors -- 2. Material subject
  Zakaluk & Samuels (1988): Social studies and science-health texts.
  This study: General subjects and tourism subjects; nonformal "real world" materials.

Outside-the-head factors -- 3. Adjunct aids
  Zakaluk & Samuels (1988): Uses a point system to predict reading comprehension; material with adjunct aids reduces the readability grade.
  This study: Not examined in this study.

Inside-the-head factors -- 1. Prior subject matter knowledge
  Zakaluk & Samuels (1988): Word association; main idea key word, written word association, 3-minute limit; 1 to 10 points.
  This study: The DRP instrument was used to assess prior knowledge on general subjects.

Inside-the-head factors -- 2. Vocabulary
  Zakaluk & Samuels (1988): Word recognition, scored as non-accurate, accurate, or automatic; open-ended recall after reading a passage at grade level, scored as difficult or satisfactory.
  This study: Not tested as a separate factor.

Predictor (3rd scale)
  Zakaluk & Samuels (1988): A third line (low, average, high); predictor of reading comprehension; for the individual student.
  This study: A reading comprehension test assesses the reader's independent comprehension level (distance learning performance) at given levels of material readability.
Unlike the Zakaluk and Samuels research that studied fifth grade readers, the attributes of the intended readers in the present study were not as precisely understood. The learning environment for the present study also differed. In this study, the focus was on a distance learning environment. The outcome of the study by Zakaluk and Samuels was expressed as the predicted learning performance level when readers are independently interacting with material of known and varying readability. A nomograph, a table that uses information from two sources to provide information about a third area of interest, was created by Zakaluk and Samuels as a way to predict the level of reading comprehension of individual readers. Learning performance was expressed in the present study at one level of comprehension, the independent level.

The work of Bormuth stimulated the revision of readability estimation from the use of grade levels to expressions of reading levels when readers are faced with material of known readability. "In 1989, the International Reading Association passed a resolution opposing assessment measures that define reading as a sequence of discrete skills and encouraged the development of measures that engage and assess the cognitive processes of reading" (Touchstone Applied Science, 2002, p. 6). The design of the present study has advanced the techniques and instrumentation of Bormuth to improve on the research of Zakaluk and Samuels in predicting learning performance in a distance learning environment, the mode of the readers of CES tourism bulletins.

To restate the research questions for the present study:
1. What is the readability level of the most difficult CES tourism bulletin? Of the easiest? Of ninety percent of these bulletins?
2. Is there a relationship between the readability of these tourism bulletins and their attributes, that is, authoring source, publication date, and length of bulletin?
3. Are these tourism bulletins written at an appropriate level of difficulty for their intended readers? At what level of material readability will the intended readers comprehend at the independent level?
4. Is there a relationship between the intended reader's level of educational attainment and the reader's reading comprehension ability?
To restate the earlier hypotheses for this study:

H1: CES tourism bulletins are written at a readability level that is less difficult than the average academic journal or quarterly, and

H2: When reading text material, the intended readers of CES tourism bulletins are capable of performing at the independent comprehension level only when the readability of the text is not higher than the readability level of text typically found in high school textbooks.

CHAPTER 3
METHODOLOGY

Research Design Overview

The design approach, data organization and analysis, and statistical techniques are described in the sub-sections that follow. The research design overview is depicted in Figure 1 as a three-stage design.

The first stage addressed the readability of educational material. The samples of textual material, CES tourism bulletins, were individually passed through four readability formulas to calculate readability scores for each bulletin. Additionally, selected attributes of each bulletin were collected for later analysis. Readability scores and attribute data were analyzed to address the first two research questions and the first hypothesis.

A methodology for assessing the reading comprehension ability of the intended readers of these bulletins was described in the second stage. A standardized instrument, the Advanced version of the Degrees of Reading Power Test, was administered to a convenience sample of willing participants. These participants were asked to declare their highest level of formal educational attainment for later analysis. The instrument was designed not only to score each participant on the number of correct responses but, most importantly, to measure the most difficult material that each participant could independently comprehend at a ninety-percent level of comprehension. This independent level score was operationally defined for this study as the "distance learning performance" for that individual. This stage concluded with an analysis of DRP test results and provided information for addressing the third research question and the second hypothesis.

In the final stage of this study, an approximation table was created in order to correlate material readability scores from the first stage analysis of bulletins with the results of the reading comprehension test. This last stage is necessary so that authors creating educational material can use the readability formulas that are more readily available in the composition process in order to write for improved learning performance levels.

[Figure 1, Research Design Overview, diagrams the three stages: Stage 1, assessing the readability of text materials (readability scores and text attributes of the sampled CES tourism bulletins, addressing research questions 1 and 2 and hypothesis 1); Stage 2, assessing reading comprehension abilities (DRP test raw scores, independent level scores, and distance learning performance, addressing research questions 3 and 4 and hypothesis 2); and Stage 3, equating the readability of materials with reading abilities (Flesch Reading Ease scores, an approximation table, and Degrees of Reading Power units).]
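The three stages can also be outlined schematically, as in the sketch below. This is only an illustration of the design flow, not software used in the study; the function names, arguments, and the form of the cross-reference are assumptions made for the illustration, and the detailed procedures appear in the sub-sections that follow.

def stage1_readability(bulletin_texts, score_fn):
    """Stage 1: compute a readability score for every sampled bulletin."""
    return {name: score_fn(text) for name, text in bulletin_texts.items()}

def stage2_comprehension(raw_scores, to_drp_units):
    """Stage 2: convert each participant's DRP raw score to an independent-level score."""
    return {participant: to_drp_units(raw) for participant, raw in raw_scores.items()}

def stage3_equate(flesch_score, cross_reference):
    """Stage 3: approximate a Flesch score's position on the DRP difficulty scale.

    cross_reference is a list of (flesch_score, drp_units) pairs such as Table 7 provides.
    """
    closest = min(cross_reference, key=lambda pair: abs(pair[0] - flesch_score))
    return closest[1]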
Material Readability Analysis

The issue of the readability of educational material was raised in research question number 1: "What is the readability level of the most difficult CES tourism bulletin? Of the easiest? Of ninety percent of these bulletins?" This examination of learning performance in a distance education environment began with the selection of educational materials. CES tourism bulletins are textual materials available in printed, and in some cases electronic, form. The first stage of this study measured the readability of CES tourism bulletins using four established readability formulas.

The population of tourism bulletins measured consisted of the on-line full-text CES tourism bulletins currently on the website of the National Extension Tourism Database. The total number of bulletins in this database is about 250. Only the subset of the bulletins that exist as full-text on-line versions was examined in this study. Of the 250 bulletins, 130 are available on-line as full-text versions. All 130 bulletins, an over-sampling, were examined using readability formulas.

The introduction page of the printed version of the National Extension Tourism Database is shown in Appendix A. The introduction page of the electronic version of this database is reproduced in Appendix B. An example of a CES tourism bulletin downloaded from the on-line database into Microsoft Word is shown in Appendix C. The procedure developed for the calculation of readability statistics is shown in Appendix D. An example of a bulletin selected for readability analysis is shown in Appendix E. The full bulletin is not shown. The second page of this Appendix shows the readability statistics that resulted from the readability analysis.

A display of the calculated readability scores of the on-line CES tourism bulletins that were analyzed in this study is shown in Table 4.

[Table 4, Readability Scores of On-line Full-Text CES Tourism Bulletins, lists for each of the 130 sampled bulletins its publication ID, abbreviated title, authoring source, publication date, length in words, and the scores from the Flesch Reading Ease, Flesch-Kincaid, Coleman-Liau, and Bormuth formulas, followed by summary statistics and correlation notes. The table is not reproduced here.]
EPJESN - .8: 41.3 - JEI _ .- mm--- EN 2: I55 .10. :. - --mfil. VB! -.N...mr.m©mm- _ - E n: we 98 tom 38 .. - , E v: 02 0.2 0.9. 308 _ -w E, _m>3 _w>o._ Boom _ .33 2:20 2:20 amum «9.25 m 2.20 33.. 2525. 9.531 E W 53:55 52.8.00 -53.“. :32... 598.. . _ - I l r- 8.2.8. v 2%» 46 The Selection of the Flesch Reading Ease Regahilitv Formula Using the process described in Appendix D, 130 CES tourism bulletins were run through the four readability formulas that were available in Microsoft Word for Windows. Correlation coefficients were calculated to compare the resulting readability scores. The descriptions of the strength of relationships between variables in this study was determined using rule-of-thumb guidelines from Ary, Jacobs & Razavieh (1996): Table 5 - Strength of Relationships Value of r Relationship .86 to 1.0 Very high .70 to .85 High .50 to .69 Moderate .20 to .49 Low .00 to .19 Negligible The correlation coefficient between Flesch Reading Ease Scores and Bormuth Grade Levels indicated a moderate negative relationship at -0.60, and between Flesch Reading Ease Scores and Coleman-Lian Grade Levels a low negative relationship at - .40. The correlation between Flesch Reading Ease Scores and Flesch-Kincaid Grade Levels showed a very high negative relationship at -.90. But, because all four of these formulas are measures of only the surface features of text and not of a reader’s reading comprehension abilities, another instrument was needed to assess the reader’s abilities and to equate the relationship between the difficulty of text material and the reader’s reading comprehension ability. For each bulletin, the Flesch Reading Ease readability formula was used to calculate a readability score that resulted from the analysis of the surface features of the text that appeared in each bulletin. The decision to use this particular formula was based 47 on the established popularity of the formula and its convenient availability as a feature in Microsoft Word for Windows word processing computer software. The resulting Flesch Reading Ease Scores, expressed on a scale from zero, most difficult to read, to 100, easiest to read, provided a relative approximation of a reader’s expected difficulty or ease of understanding the text. The scores from all sampled bulletins provided statistics on the readability of all CES tourism bulletins — the most difficult, the easiest, and ninety- percent of the bulletins. Bulletin readability levels were correlated with other bulletin attributes, authoring source, year of publication, and the length of the bulletin. The Flesch Reading Ease readability formula is described in detail in Appendix F. In research question number 2, the issue of potential intervening variables was raised: “Is there a relationship between the readability of these CES tourism bulletins and their other attributes, that is, authoring source, publication date, and length of bulletin?” For each on-line bulletin selected for readability analysis, these three attributes were collected. Columns labeled “Date”, “Authoring Source”, and “Length in Words” in Table 4 show these values, along with the readability scores calculated from four readability formulas. Assessing the Reading Comprehension Abilitv of the Intended Readers The next stage of the design addressed the issue that readability formula results do not provide a perfect measure of readability. Some of the problems associated with the use of any readability formula as an absolute measure of readability include: 48 1. 
Assessing the Reading Comprehension Ability of the Intended Readers

The next stage of the design addressed the issue that readability formula results do not provide a perfect measure of readability. Some of the problems associated with the use of any readability formula as an absolute measure of readability include:
1. The author of the text often does not specify, or perhaps even know, the intended reader, including education, reading ability, prior knowledge of the subject, etc.
2. What the score implies as "easy" for one reader may be "difficult" for another reader.
3. The readability score is often merely a measure of the surface features of the text, not a measure of content or coherence.
4. The readability score does not take into consideration the reader's learning environment, where no instructor assistance is available, as in distance learning.

In research question number 3, the appropriateness of the readability of educational materials to their intended readers was raised: "Are these CES tourism bulletins written at an appropriate level of difficulty for their intended readers? At what levels of material readability will the intended readers comprehend at the independent learning level?" In order to answer these questions, the reading comprehension abilities of these readers had to be assessed.

The Selection of the Advanced Degrees of Reading Power Test

The search for an appropriate instrument to be used for the assessment of the reading comprehension ability of the intended readers of these educational materials began with a definition of the selection criteria to be used:
1. Suited to assessing an adult population
2. Assesses an individual's reading comprehension ability
3. Tests an individual's prior knowledge of tourism subjects
4. Criterion-based, not norm-based
5. Instrument must have proven validity and reliability
6. Results expressed as independent learning performance, not grade levels
7. Administration must be simple, to a group, as a silent test
8. Instrument must be inexpensive to purchase and score.

The search for an instrument yielded no single instrument that would meet all of the above criteria. There were twelve instruments considered as the final candidates. Table 6 is a display of the scorecard that was used to determine the most appropriate instrument for this study.

[Table 6, a scorecard for the selection of the reading comprehension instrument, rates each of the twelve candidate instruments, including the Degrees of Reading Power tests, against the selection criteria listed above.]

Meeting criterion number 1, suitability to an adult population, was critical, as was criterion number 2, reading comprehension assessment. There were no instruments found that could meet criterion number 3, assessment of prior knowledge of tourism subjects, so this criterion was dropped. Criterion number 4 was important for the study design purpose of relating the reading comprehension ability of the intended readers back to a criterion, the readability level of text materials, rather than to a norm, the performance of other readers. Criterion number 5, proven validity and reliability, was critical. Criterion number 6, results expressed as independent learning performance and not grade levels, was a strict criterion that was critical to the design of this study. This criterion implied an interval-scale scoring requirement that would more closely relate to the readability scaling technique previously selected for assessing the readability of the text materials. The final selection was weighed heavily on this criterion.
Criterion number 7, administration of the instrument, was stated to facilitate group testing. Criterion number 8, expense, was necessary due to stated budget limitations.

The instruments that best fit the selection criteria were the Degrees of Reading Power set of instruments. Within the product offerings, there were two choices that were considered for this study: the Standard DRP Test, suitable for grades 3 through 12+, and the Advanced DRP Test. The following statements from the instrument publisher, Touchstone Applied Science Associates, Inc. (2002), heavily influenced the final decision to select the Advanced DRP Test for this study:

"Primary and Standard DRP tests measure student ability to construct surface meaning from continuous prose materials. Advanced DRP tests extend this definition of comprehension by assessing how well students are able to reason with textual materials" (p. 4).

"Advanced DRP test items do not require prior topic knowledge to choose the correct answer. Answering correctly depends upon comprehending and manipulating particular propositions in text" (p. 6).

"Advanced DRP test questions are designed to engage those cognitive processes required to remember or locate, think about, analyze, derive, and/or combine text propositions. . . . Within each Advanced DRP test passage, the questions are designed to assess the ability to integrate propositions over ever-increasing amounts of text" (p. 31).

"There is little opposition to the notion that the ability to read with comprehension is one of the most important goals, if not the primary outcome, of all instruction in the elementary school. Similarly, there is little opposition to the notion that the ability to reason with textual material is one of the most important goals of instruction in the high school. Attainment of these two important educational goals can be assessed by Standard and Advanced DRP tests, respectively" (p. 35).

The second stage of the study used the Advanced version of the Degrees of Reading Power Test, a standardized criterion-referenced instrument designed to measure the difficulty of materials that the intended readers are able to reason with successfully. The criterion in this instrument is the difficulty level of text material. The purpose of this test is to determine the most difficult text that a reader can read with a given level of comprehension. In this test, text passages on general subjects are ranked in order of increasing difficulty in readability and presented to the readers in an untimed silent reading test. At the end of each passage, the reader chooses the best answer for each question from the choices provided.
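The idea of an independent comprehension level can be illustrated with a small sketch: given per-passage results, the hardest passage answered at or above the ninety-percent criterion marks the independent level. The passage difficulties and proportions below are entirely hypothetical, and the sketch is not the publisher's raw-score-to-DRP-unit conversion, which is proprietary.

def independent_level(results, criterion=0.90):
    """Return the most difficult passage comprehended at or above the criterion (P = .90).

    results: list of (passage_difficulty_in_drp_units, proportion_of_items_correct) pairs.
    """
    passed = [difficulty for difficulty, p_correct in results if p_correct >= criterion]
    return max(passed) if passed else None

# Hypothetical per-passage results for one participant.
results = [(45, 1.00), (53, 0.95), (61, 0.92), (68, 0.90), (74, 0.80), (80, 0.55)]
print(independent_level(results))  # 68 -> an independent level of about 68 DRP units in this made-up case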
From the results of the DRP test, raw scores, the number of questions answered correctly, were subsequently converted to an equivalent value on an absolute interval scale, expressed in terms of DRP units, that approximates the difficulty of text material at a stated level of comprehension, the "P-value", associated with independent learning, P = .90.

Scoring the Reading Comprehension Instrument

For hand scoring the Advanced DRP test, T-2 version, an answer key provided by the instrument vendor was used. This answer key was in the form of a transparency that was laid over each answer sheet. The number of correct responses was added to derive a raw score total for each participant. The raw score for each participant was then entered on a computerized spreadsheet to create a profile record for each participant.

An additional calculation was necessary to convert raw scores to a score on an absolute interval scale. The following excerpts from the Advanced DRP Handbook (Touchstone Applied Science Associates, 2002) provide the rationale for converting scores:

"Raw scores, percentile ranks, and grade equivalents are not equal-interval scales and therefore should not be used to describe growth. Other norm-referenced scores, such as stanines and Normal Curve Equivalents (NCEs), are equal-interval scales. However, these scales are normative, rather than absolute. The numbers on these scales do not increase as a student grows in the trait being measured. A student who is making progress at an 'expected' rate, as determined by norms, will stay at an NCE of 28 (or 68) from one testing to the next, or at stanine 2, or 4, or 7, year after year" (p. 35).

"Advanced DRP scaled scores, like Primary and Standard DRP scaled scores, come as close to forming an absolute scale that has equal intervals as is known in academic achievement measurement. A growth of 5 units on an Advanced DRP test in grade 7 is the same amount of improvement as is 5 units of growth in grade 10. Thus, it is possible to measure growth of individuals or groups using the Advanced DRP scaled scores and to compare the growth of one individual or group with another" (p. 35).

In a hypothetical example, a raw score of 20 converts to 68 DRP units at P = .90. This would indicate that this individual could independently comprehend material written up to the difficulty of text typically found in college introductory textbooks. See Table 7.

[Table 7, an approximation table relating Flesch Reading Ease scores, estimated grade levels, typical styles and types of prose, and DRP units at the P = .90 level of comprehension, is referenced here and again in the equating discussion below.]

One additional attribute that is related to reading comprehension was collected from participants' voluntary responses. This attribute is the participant's highest level of formal education attained.

Sampling Strategy for Selecting the Intended Readers

The population that is the intended audience for CES tourism bulletins includes owners and managers of businesses that cater to the tourism industry. As a rule, for statistical significance, Fraenkel and Wallen (1996, pp. 104, 106, 218) recommend a minimum sample size of 20 or 30. A sample goal of 25 was selected for this study due to: (1) financial constraints of the study, (2) the design objective of demonstrating the methodology versus generalizability of results, and (3) the vendor-specified ordering unit multiples for the instrument that was selected to assess the reading comprehension abilities of the readers. The sample size was small but deemed adequate for demonstrating the methodology for the study.
.66.:owN .09.. .63 .66 .261. .6686: .x.10>. 362 652.3 6i136666. V0 :6 65.. A6665- .w. .2662“. 66.:ow V1- F26:50.. _6co_666Vo.d _-NM.N-M1V -.oVLVVcflw-W 6.66130 66:66.6 6-m6__oo| -, i Vlomonol V Via--55 36% 1.1- I - 66.666636 -1 V. 1- I 11 - l, 1.1-. .11 111i. .1 .111- 6:6 6_6c.:o_ _ 6Vx6V 666:8 .66>V6.Vn_ V AK oVE666o< V6m6=oo 6:86 ..o ..w .I V A6m6=ooV 666.6 52. 0V 2er V_ on 9 on V111 V_:o_1c_n_ 1V66cVN6m6E 6995 6.683561% E. No 111 -i 1 .- i1 1 -- 6.89me _866 :9: -V1 6.61 V 56.6 ...6 45861 - - A6966 695 666.6 £6. 2 £9 - -16116 12. 66 -_1V_.8_..._o 2.6. V 6:31.66 ..m. 3. 666.66 V0 66. ch666m .6665 £161 V V 6x68661856 6_66_s_-1-V1 @661 . 1.6VM161@.n.V- .1. 56-5 c-Vn. 1 - : 666.6 £6,656 1111.] V E 906 V 6.66:6Vw 6.896:oofi6662656_616-6-1166_V6_w.VV-6.__6 1:66- -1 1 . 1 16.561666--. i 11 -1 . 61612 E V 1.666 2.6. -. 1 -- 66.16661 1 V16 - ,. 38 1116-6 1 - 66666 _- , 6.1.6 6.82me .856 665:. 9. 6258 56 V 666.6 56 62.66 666 V66> 26666336. V.6__mcmV66_66w 66-06 .1-1 - 1 111 - Aom.un_V 66632.8 6666.6 666.0 @5666”. 66V6EVV6m 6.00m 6.5m 3...: 62.8 66V6:._V6w 666m Vo 569.6660 666.6. 656666 5666:. 665-6662 6.6666. :26 .0“. Vx6V 6666.6 .0 66.86 626.. 65666160 666560 6.08 666m 9:666”. c662". - 1111 11. 11 ..-iVliiLi i - l- -| .V 11 | .--1161VV-c: amo 6:6 638 666m 56661 :66: m .2 63a... scam—tic. < :4 V A. 636k 55 abilities of the readers. The sample size was small but deemed adequate for demonstrating the methodology for the study. Characteri_stics of the Reading Comprehension Test Participants A convenience sample of nineteen participants was recruited. A broad definition of “tourism” was used in this study to attract a sample of entrepreneurs, owners, and managers of businesses that cater to tourists. The number of participants recruited was intended to be a number sufficient in size to demonstrate the methodology and not a rigorous statistical sample. The participants consisted of twelve females and seven males. The occupations and the number of the participants were: owner - retail staff professionals management - historical museum owner - bed & breakfast consultant - computer systems consultant - hospitality director - business improvement graduate student book publisher p—‘r—‘n-dr—‘v—av—twhfl Administration of the Reading Comprehension Instrument The Advanced Degrees of Reading Power instrument was selected to measure the ability of each reader to reason with textual material and consisted of increasingly difficult passages about general subjects. This fact was disclosed to the participants. For each passage, the participants were instructed to read the passage and then select the one best answer for each test item from the choices provided. Before administering the DRP instrument, participants were asked to review a consent form, and, if willing to continue, 56 to sign and return one COpy, keeping a second copy for contact information if needed. A Participant Profile Form, Appendix G, was completed by each participant. A confidential identification number was created using the month and day of the participant’s mother’s birthday. Participants were asked to write on their answer sheet two numbers to indicate their highest educational level attained. This educational attainment information was used in a subsequent analysis to answer research question number 4: “Is there a relationship between the intended reader’s level of educational attainment and the reader’s reading comprehension ability?” No time limit was set for the administration of the DRP instrument. 
The examiner collected the test booklet, the answer sheet, the consent form, and the Participant Profile Form as each participant exited. _E_qr_latin2 Material Readability and Reading Comprehension Abilitv The need to approximate material readability scores to scores from the reading comprehension test was addressed in the final stage of the design for this study. The Degrees of Reading Power instrument is a reading comprehension assessment test that is based on another readability formula, the Bormuth formula. This instrument was selected over other available reading comprehension tests because it offers the advantage of expressing scores that are criterion-referenced, the criterion being the level of text difficulty that readers are able to read and comprehend. Limited samples provided by the instrument supplier were qualified to state that the calculated DRP unit values were based on larger samples of text. One problem for this study was not knowing the exact 57 algorithm of the proprietary DRP formula. Further, the DRP supplier stated that publishers are not permitted to calculate and publish DRP values for their own book The F lesch Reading Ease Scores are a measure of the readability of text base surface features and expressed on an interval scale ranging from zero, most difficult, 100, easiest. This scale appears almost inverse to the DRP Scale of Text Difficulty. DRP Scale of Text Difficulty is also an interval scale consisting of DRP units, a mez of a reader’s ability to comprehend at different levels of text readability. A DRP val zero is the easiest, 100 is the most difficult. In this study, Flesch scores were equate DRP scores in order to enable authors of CES educational materials access to lower- more readily available readability formulas when gauging text readability. Each bul sampled had a computed Flesch Reading Ease Score. Since Flesch and DRP scores not expected to be a perfect inverse relationship, an approximation cross-reference v1 constructed, see Table 7. Validity The Validiiv of the Flesch Reading Ease Readability Formula Readability formulas have proven validity as predictors of learning performa (Chall & Conard, 1991, p. 15). Studies by Chall (1958); Chall, Bissex, Conard, & E Sharples (1996); Fry (1988); and Klare (1974) have further provided evidence of the validity of readability formulas (Chall, Conard, & Harris-Sharples, 1996). Validation studies of the F lesch Reading Ease formula are described by Flesc (1949). Flesch concludes that “These studies show high correlations between readal 58 as measured by the formula, and readership, reading speed, comprehension, and retention” (F lesch, 1949, p. 225). The Validity of the Degrees of Reading Power Instrument Construct validity of DRP reading comprehension tests is grounded in the main purpose of reading, which is to construct meaning from text. The construction of test items on DRP tests measures the reader’s ability to use semantic and syntactic cues to read with comprehension and to reason with test passages. Prior subject knowledge is not critical. The reader’s knowledge of linguistic cues and the ability to reason at higher cognitive levels are measured in DRP test items. According to Touchstone Applied Science Associates (2001), the correlation between the readability of passages and the average difficulty of the items embedded in them is very high (r=.95). Thus, construct validity is supported by the comparison of DRP test scores with expectations. 
Content validity of DRP tests is based on the design of the instruments as criterion-referenced tests that measure a single objective, reading comprehension of English text. Test items on general subjects are randomly selected from the universe of all prose subject matter. Criterion-related validity tests whether DRP test scores actually forecast a reader’s ability to reason with item passages of varying readability. Very high correlation (r=.90) has been found when comparing DRP scores with the reader’s ability to produce semantically sensible responses for blanks in test passages. DRP scores have been shown to correctly forecast reader’s performance at levels of comprehension ranging from P=.50 to P=.90. Convergent validity is evidenced by correlations between .75 and .80 when 59 comparing DRP scores with other reading comprehension tests (Touchstone Applied Science Associates, 1995b). One additional point concerns the underlying architecture of the DRP test instrument. According to Touchstone Applied Science Associates (2002): (The) Bormuth formula provides a continuous scale over the entire range of readability. . .(and has) a relatively low standard error of measurement. . .(and) the validity. . .is higher than that of other formulas. It is important to note that the Advanced DRP technology... is not dependent on the use of this formula. If a better estimate of text readability were to be developed, it could be substituted. (p. 12) Reliability The Reliability of the Flesch Reading Ease Formula Evaluations of the reliability of readability formulas has been limited (Chall, 195 8, p. 68). Chall proposes two kinds of reliability testing for readability measurement. Analyst reliability is evidence of the objectivity of the technique. Sampling reliability is evidence of the representativeness of the sample analyzed for the entire book or article. In an analyst reliability study, the Flesch readability formula was found to have high reliability coefficients when assessing word length and sentence length factors. Sampling reliability studies for readability formulas are non-existent (Chall, 1958, p. 162). The Reliability of the Degrees of Reading Power Instrument The internal consistency of the items on the DRP tests has been demonstrated by the Kuder-Richardson (K-R 20) reliability coefficient. K-R 2O coefficients for grades 11 and 12 range from .93 to .97 with 59 of the 72 coefficients equal to or greater than .95. 60 These values indicate that DRP test forms have a very high degree of internal consistency. Test-retest coefficients and altemate-form reliability coefficients were r=.95 when DRP tests were administered to grade 4 and grade 6 students. DRP tests administered in pre- and post-test studies also showed expected gain in individual readability ability (Touchstone Applied Science Associates, 1995b). Dataf Organization and Variables in this Study Main Dependent Variable Distance Learning Performance Level: Independent level score, in DRP units from the Advanced Degrees of Reading Power test, stated with a level of comprehension; variable name: INDEPEND; interval data; range 0 to 99.9. Independent Variables Publication characteristics, all are attribute variables: 0 Publication ID: From the National Extension Tourism Database; variable name: PUBID; nominal data; eight numeric digits. 0 Publication Title: From the National Extension Tourism Database; abbreviated first ten characters of bulletin title; Variable name: TITLE; nominal data. 
Data Organization and Variables in this Study

Main Dependent Variable

Distance Learning Performance Level: Independent level score, in DRP units from the Advanced Degrees of Reading Power test, stated with a level of comprehension; variable name: INDEPEND; interval data; range 0 to 99.9.

Independent Variables

Publication characteristics, all attribute variables:
- Publication ID: From the National Extension Tourism Database; variable name: PUBID; nominal data; eight numeric digits.
- Publication Title: From the National Extension Tourism Database; abbreviated to the first ten characters of the bulletin title; variable name: TITLE; nominal data.
- Authoring Source: From the National Extension Tourism Database; variable name: SOURCE; nominal data; fifty characters. Alternately, a two-digit numeric code for computer analysis.
- Year of Publication: From the National Extension Tourism Database; variable name: DATE; interval data; four digits (year 19xx to 2004; "n.d." or "9999" for no date).
- Number of Words in Bulletin: variable name: WORDS; ratio data; range 0 to 99999.
- Flesch Reading Ease Readability Score: variable name: FRESCORE; attribute variable, calculated in this study; interval data; range 0 to 99.9.
- Flesch-Kincaid Grade Level: variable name: FKGL; interval data; range 0 to 99.9.
- Coleman-Liau Grade Level: variable name: CLGL; interval data; range 0 to 99.9.
- Bormuth Grade Level: variable name: BGL; interval data; range 0 to 99.9.

Reader characteristics, all attribute variables:
- Reader ID: variable name: R_ID; four numeric digits in the format MMDD, where MMDD equals the month and day of the participant's mother's birthday; nominal data.
- Highest level of formal education attained: From the participant's voluntary response; variable name: EDLEVEL; nominal data; range 00 to 18.
  00 to 11 = grade level completed
  12 = completed high school or trade school
  13 = some college beyond high school
  14 = Associate's degree
  15 = some college beyond Associate's degree
  16 = Bachelor's degree
  17 = some graduate work or degree
  18 = graduate degree (Master's or Doctorate)
- Raw Score: The number of correct item responses from the DRP test; variable name: RAWSCORE; interval data; range 0 to 24.
- Independent DRP Score: Also known as the Distance Learning Performance Level; variable name: INDEPEND; interval data; range 0 to 99.9. Expressed in DRP units, the A-DRP Score, at a stated level of comprehension.

Statistical Analysis

A summary of the variables, statistical techniques, and display formats used to analyze and report results for each of the research questions and hypotheses in this study is provided in Table 8.

Table 8
Statistical Analysis Summary

Research Question #1 (Readability of bulletins): Flesch Reading Ease Score (FRESCORE), interval data. Statistical analysis: mean, median, mode, and correlation coefficients; most difficult bulletin (minimum), easiest bulletin (maximum), ninety percent of bulletins (mean and standard deviation). Display: Table 4; histograms (Figures 2, 3).

Hypothesis #1 (Readability of bulletins): Flesch Reading Ease Score (FRESCORE), interval data. Statistical analysis: mean. Display: Table 4.

Research Question #2 (Readability versus selected text attributes): Flesch Reading Ease Score (FRESCORE), interval, with length of bulletin (WORDS), ratio -- range, mean, median, mode, and correlation coefficient; display: histograms (Figures 4, 5), Table 4, scatterplot (Figure 6). Year of publication (DATE), interval -- range, mean, median, mode, and correlation coefficient; display: Tables 4 and 9, histograms (Figures 7, 8), frequency distribution (Table 10). Authoring source (SOURCE), nominal -- minimum, maximum, and mean; display: Table 4, histogram (Figure 9), Table 11.

Research Question #3 (Reader's comprehension levels): DRP test items correct (RAWSCORE), interval -- minimum, maximum, median, mode, mean, and standard deviation; display: Table 13, histogram (Figure 10). DRP independent level (INDEPEND), interval -- display: histogram (Figure 11).

Hypothesis #2 (Reader's independent level material): DRP independent level (INDEPEND), interval -- mean; display: Table 7.

Research Question #4 (Reader's education versus comprehension levels): Highest educational level attained (EDLEVEL), nominal -- minimum, maximum, and mode; display: Table 13, frequency distribution (Table 14). DRP items correct (RAWSCORE), interval -- display: Table 13, histogram (Figure 10). DRP independent level (INDEPEND), interval -- display: Table 13, histogram (Figure 11).
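The variable definitions above can be gathered into simple record types, as in the sketch below. This is only an illustration in Python (the study recorded its data in a spreadsheet); the class names are assumptions, the field names follow the variable names defined above, and the sample values are hypothetical.

from dataclasses import dataclass

@dataclass
class BulletinRecord:
    pubid: str        # Publication ID, eight digits (PUBID)
    title: str        # First ten characters of the bulletin title (TITLE)
    source: str       # Authoring source (SOURCE)
    date: int         # Year of publication; 9999 when no date is shown (DATE)
    words: int        # Length of the bulletin in words (WORDS)
    frescore: float   # Flesch Reading Ease score (FRESCORE)
    fkgl: float       # Flesch-Kincaid grade level (FKGL)
    clgl: float       # Coleman-Liau grade level (CLGL)
    bgl: float        # Bormuth grade level (BGL)

@dataclass
class ReaderRecord:
    r_id: str         # MMDD of the participant's mother's birthday (R_ID)
    edlevel: int      # Highest formal education attained, coded 00 to 18 (EDLEVEL)
    rawscore: int     # Correct DRP items, 0 to 24 (RAWSCORE)
    independ: float   # Independent-level score in DRP units (INDEPEND)

# Hypothetical entries, only to illustrate the record layout.
bulletin = BulletinRecord("12345678", "Tourism Pl", "Example State", 1990, 3000, 48.5, 10.2, 16.8, 11.1)
reader = ReaderRecord("0712", 16, 20, 68.0)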
CHAPTER 4
FINDINGS AND DISCUSSION

The findings from limited studies have indicated that CES tourism bulletins are perceived to be difficult to read. The non-formal educational mode of the Cooperative Extension Service is distance learning, in which the learner is apart from the instructor. The predictability of distance learning performance is ultimately dependent on a successful match of educational materials and a learner's abilities. The research questions in this study were:
1. What is the readability level of the most difficult CES tourism bulletin? Of the easiest? Of ninety percent of these bulletins?
2. Is there a relationship between the readability of these tourism bulletins and their attributes, that is, authoring source, publication date, and length of bulletin?
3. Are these tourism bulletins written at an appropriate level of difficulty for their intended readers? At what level of material readability will the intended readers comprehend at the independent level?
4. Is there a relationship between the intended reader's level of educational attainment and the reader's reading comprehension ability?

There were two research hypotheses for this study:

H1: CES tourism bulletins are written at a readability level that is less difficult than the average academic journal or quarterly, and

H2: When reading text material, the intended readers of CES tourism bulletins are capable of performing at the independent comprehension level only when the readability of the text is not higher than the readability level of text typically found in high school textbooks.

The Readability of CES Tourism Bulletins

In research question number 1, the readability of CES tourism bulletins was raised. The results of the readability scores calculated for the 130 on-line CES tourism bulletins sampled are shown in Table 4. For each bulletin, an abbreviated title, the authoring source, date of publication, length in words, and the readability results from four readability formulas are shown.

Figure 2 is a display of the results of the three formulas that express readability in terms of grade level. In this figure, readability results are grouped by grade levels. Most readability results using the Bormuth formula were in the 11.0 to 11.9 grade level range. The range of results using the Flesch-Kincaid formula was in the 5th to 11th grade levels. The Coleman-Liau results were expressed as grade levels, but it was difficult to explain the wide variance of results from 8th grade to 47th grade. Calculations of readability from the Bormuth scale are quite homogeneous, falling within the 8th to 12th grade boundaries. Flesch-Kincaid calculations placed most bulletins in the 10.0 to 10.9 grade level range, while results from the Coleman-Liau calculations showed a nearly normal distribution with most bulletins in the 16.0 to 16.9 grade level range. From Table 4, the arithmetic means for the 130 CES tourism bulletins sampled were: Flesch-Kincaid Grade Level, 10.4; Coleman-Liau Grade Level, 17.3; and Bormuth Grade Level, 11.0.

[Figure 2, CES Tourism Bulletins: A Comparison of Grade Level Readability, is a histogram of the number of bulletins in each grade-level range for the Coleman-Liau, Flesch-Kincaid, and Bormuth formulas.]
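For reference, the Flesch-Kincaid results reported above come from the widely published grade-level companion to the Reading Ease score, Grade Level = 0.39 x (words per sentence) + 11.8 x (syllables per word) - 15.59. The sketch below reuses the same naive vowel-group syllable heuristic as the earlier Reading Ease example and is only an approximation of the counting rules built into Microsoft Word.

import re

def flesch_kincaid_grade(text):
    """Flesch-Kincaid Grade Level = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return (0.39 * len(words) / max(1, len(sentences))
            + 11.8 * syllables / max(1, len(words))
            - 15.59)

# A result near 10 corresponds to the 10.0-10.9 band into which most bulletins fell.
print(round(flesch_kincaid_grade("Most of the sampled bulletins were written for adult readers of extension materials."), 1))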
Note that studies have shown that the reading grade level for the average U.S. adult is 9th grade (Mavrogenes, 1988).

The histogram in Figure 3 is a display of the readability results from another readability formula that expresses the results of the readability analysis in a different way. Instead of a grade level range, readability in the Flesch Reading Ease formula is expressed as a score on a scale from zero to 100. On this interval scale, the higher the score, the easier the text. A score between 80 and 90 approximates a reading level at completion of the fifth grade. A score of 70 and above is easy for most people. The score for most documents will be about 60, on the average. Scores from zero to 30 are the equivalent of a college graduate reading level, typically scientific magazines. Academic journals or quarterlies usually fall in the range of 30 to 50.

The arithmetic mean for the 130 bulletins sampled was calculated to be 47.7 on this scale of readability. On Table 7, this equates to a 13th to 16th grade reading level. The median was 46.0 and the mode was 50.1. One standard deviation was 11.4. The number of bulletins that fell in grouped ranges on the Flesch Reading Ease scale is shown in Figure 3. When bracketing FRE scores in groups of ten, the results showed that most CES tourism bulletins fell within the 41 to 50.9 range, a readability range for material that is slightly more readable than typical academic journals or quarterlies. The easiest, most readable, CES tourism bulletin sampled scored 91.7 on the scale from zero to 100. This value equates to a reading level associated with material suitable for readers who have completed fourth grade. The most difficult bulletin sampled scored 21.1, a value associated with material typically appropriate for college graduates.

[Figure 3, Readability of CES Tourism Bulletins, is a histogram of the number of bulletins falling within each ten-point bracket of Flesch Reading Ease scores.]

If the mean and the median were equal, this would be treated as a normal distribution in which sixty-seven percent of the bulletins, one standard deviation, would fall in a range between 36.3 and 59.1, and approximately ninety-five percent, two standard deviations, would fall in the range between 24.9 and 70.5. However, because the mean and median were not equal, at 47.7 and 46.0 respectively, Chebyshev's theorem (Johnson, 1976) can be applied. This theorem states that the proportion of any distribution that lies within k standard deviations of the mean is at least 1 - 1/k², where k is any positive number greater than one. Applying this formula, at two standard deviations at least seventy-five percent of all scores fell within 24.9 and 70.5. At three standard deviations, approximately eighty-nine percent fell between 13.5 and 81.9 on the Flesch Reading Ease scale. This range equates to material appropriate for readers having reading comprehension abilities ranging from approximately sixth grade through those readers with some college completed.
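The Chebyshev bounds quoted above can be checked with a short sketch using the reported mean of 47.7 and standard deviation of 11.4. This is only a verification of the arithmetic, not part of the original analysis.

def chebyshev_bound(mean, std_dev, k):
    """Return the interval mean +/- k*std_dev and the minimum proportion of scores it contains, 1 - 1/k**2."""
    lower, upper = mean - k * std_dev, mean + k * std_dev
    return (round(lower, 1), round(upper, 1)), 1 - 1 / k**2

for k in (2, 3):
    interval, proportion = chebyshev_bound(47.7, 11.4, k)
    print(k, interval, round(proportion, 3))
# k = 2 -> (24.9, 70.5), containing at least 75% of scores
# k = 3 -> (13.5, 81.9), containing at least about 88.9% of scores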
This range equates to material appropriate for readers having reading comprehension abilities ranging from approximately sixth grade through those readers with some college completed.

Hypothesis Number 1

In Hypothesis number 1, the prediction was made that the arithmetic mean for readability of all sampled bulletins would be greater than 50. As measured on the Flesch Reading Ease scale, the typical readability value of academic journals or quarterlies ranges from 30 to 50. The calculation for all bulletins sampled shows the arithmetic mean to be 47.7, slightly more difficult than predicted. These findings indicate that Hypothesis number 1 cannot be supported. The arithmetic mean fell within the readability range of typical academic journals or quarterlies.

Text Readability versus Document Length

The question of a possible relationship between text readability and document length was raised in research question number 2. The word length for each bulletin sampled is displayed in Table 4. Graphically, word length data are shown in Figure 4. Most of the 130 CES tourism bulletins sampled fell in the 3,000-word range (see Figure 5). The median was 3,001 words, the mean was 3,730 words, and the mode was 952 words. The shortest bulletin examined was 100 words, and the longest was 29,604 words. The relationship between the Flesch Reading Ease score and the length of the bulletin in number of words is shown in Figure 6. On this chart, the readability of documents is expressed on the Flesch Reading Ease scale of 0, most difficult, to 100, easiest to read. The relationship between readability and document length was found to be negligible at r = -0.1. These results indicated that readability was not significantly related to document length.

[Figure 4. Length of On-line Full-Text CES Tourism Bulletins in Words, n=130. Length in words plotted for each sampled bulletin.]

[Figure 5. Length of On-line Full-Text CES Tourism Bulletins, n=130. Histogram of the number of bulletins by length in thousands of words.]

[Figure 6. On-line Full-Text CES Tourism Bulletins: Bulletin Length versus Readability Score. Scatter plot of Flesch Reading Ease score against bulletin length in words.]

Text Readability versus Year of Publication

Another issue raised in research question number 2 concerned the potential relationship between bulletin readability and year of publication: were more recent bulletins more readable? For each bulletin sampled, the year of publication, when shown on the source document, is included in Table 4. A re-sorting of these data by year of publication is shown in Table 9, with the minimum, maximum, mean, and median readability values calculated by year of publication. The last page of Table 9 summarizes the descriptive statistics by year. The number of bulletins, by year of publication, is shown as a frequency distribution in Table 10.
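The document-length relationship reported above, and the publication-year relationship examined next, each reduce to a Pearson correlation between a bulletin's Flesch Reading Ease score and the attribute in question. A minimal sketch of that check follows; the paired values shown are hypothetical stand-ins, since the study's actual pairs come from Table 4.

```python
from math import sqrt

def pearson_r(xs, ys):
    # Pearson product-moment correlation between two equal-length samples.
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

# Hypothetical (readability score, word count) pairs for illustration only.
fre_scores  = [47.7, 50.1, 36.4, 62.0, 21.1, 91.7]
word_counts = [3001, 952, 5400, 1200, 8700, 100]
print(round(pearson_r(fre_scores, word_counts), 2))
```

A value near zero, as found here for both length and publication year, indicates little linear relationship between readability and the attribute.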
[Table 9. CES Tourism Bulletins: Flesch Reading Ease Scores by Year of Publication. For each sampled bulletin, the table lists the publication identifier (Pub ID), the year of publication, and the Flesch Reading Ease score, together with the minimum, maximum, mean, and median score calculated for each year. The table concludes with a summary of readability scores of on-line full-text tourism bulletins by year of publication. Note: "n.d." signifies no date of publication on the source bulletin.]
[Table 10. On-line Full-Text Tourism Bulletins: Number of Bulletins by Year of Publication. Counts of on-line bulletins for each year from 1972 through 2002, plus undated bulletins; total 130, mean 4, median 1.5, mode 1, maximum 25. Source: National Extension Tourism Database, available on-line at http://www.msue.msu.edu/imp/modtd/mastertd/html]

From the data shown, most of the source bulletins did not have a year of publication printed. Of those bulletins that did have a date, most were authored in the years between 1986 and 1991. A histogram showing the number of bulletins by year of publication is shown in Figure 7. Discounting the publications that did not have a publication date on the on-line database, the year with the most bulletins was 1991 with fourteen, followed closely by 1986 with thirteen.

The histogram of readability scores of on-line full-text CES tourism bulletins by year of publication is shown in Figure 8. On this chart, the values next to 2002 are "no date" publications. From Table 7, recall that a score between 80 and 90 approximates a fifth grade completion reading level. A score of 70 and above is easy for most people. Most documents will score about 60, on average. Scores from 0 to 30 are the equivalent of a college graduate reading level, approximating the readability of scientific magazines. Academic journals or quarterlies usually fall in the range of 30 to 50. The data from this study indicated that the low end of readability scores for bulletins sampled occurred in the 20 to 30 range, and the easiest bulletins occurred in the above-60 range. Most sampled CES tourism bulletins averaged in the high 30s to high 50s range, typical for academic journals or quarterlies. The correlation coefficient calculated for the relationship between year of publication and Flesch Reading Ease score was found to be r = -0.2, a low negative relationship.

[Figure 7. On-line Full-Text CES Tourism Bulletins by Year of Publication, n=130. Histogram of the number of bulletins by publication year.]

[Figure 8. Readability Ranges of On-line Full-Text CES Tourism Bulletins by Year of Publication, n=130. Flesch Reading Ease scores plotted by publication year.]

Text Readability versus Authoring Source

The final material attribute examined in this study was the potential relationship between bulletin readability and authoring source. As a group, were bulletins authored by one particular source more, or less, readable than bulletins authored by another source? In this study, the "authoring source" distinction remained at the agency or organization name level and was not extended to identify individual persons within those organizations. The results of re-sorting the sampled bulletins by authoring source are displayed in Table 11. The arithmetic mean and range of readability scores for all of the bulletins from each unique source were calculated and displayed in the columns shown. For processing purposes, a unique Source ID code was assigned to each authoring source. The authoring sources for all bulletins sampled are presented in Table 12.
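The re-sorting behind Tables 9 and 11 amounts to grouping the sampled bulletins by an attribute (publication year or authoring source) and reporting descriptive statistics for each group. A minimal sketch of that step follows; the records shown are hypothetical stand-ins for the sampled bulletins.

```python
from collections import defaultdict
from statistics import mean, median

# Hypothetical bulletin records; the study's actual values come from Table 4.
bulletins = [
    {"source": "Michigan State", "year": 1986, "fre": 45.8},
    {"source": "Michigan State", "year": 1991, "fre": 53.0},
    {"source": "Cornell",        "year": 1988, "fre": 34.2},
    {"source": "Cornell",        "year": 1989, "fre": 50.5},
]

def summarize_by(records, key):
    # Group Flesch Reading Ease scores by the chosen attribute and report
    # the minimum, maximum, mean, and median score for each group.
    groups = defaultdict(list)
    for record in records:
        groups[record[key]].append(record["fre"])
    return {group: {"min": min(scores), "max": max(scores),
                    "mean": round(mean(scores), 1), "median": round(median(scores), 1)}
            for group, scores in sorted(groups.items())}

print(summarize_by(bulletins, "source"))   # mirrors the layout of Table 11
print(summarize_by(bulletins, "year"))     # mirrors the layout of Table 9
```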
Interpreting a randomly selected example from Table 11, all CES tourism bulletins authored by Cornell University were found to range from a readability of 33, college level, to 51, approximately 10th grade, with a mean of 38, about in the middle of the 13th to 16th grade range. These data, readability scores by authoring source, are displayed in Figure 9. The data in this chart provide an answer to the question of whether bulletins originating from the same authoring source tend to be more, or less, readable. On this chart, one source, assigned a Source ID value of 21, authored one bulletin that was scored as the most difficult to read with a score of 21, falling in the 20 to 30 readability range that is typical for scientific magazines (see Table 7). At the other extreme, there was one bulletin written by Source ID 15 that was scored as the easiest to read at a score of 92. Upon examination, this latter bulletin was only 100 words long, the minimum recommended for readability analysis. That same authoring source averaged a score of 54 across the eight bulletins examined. When looking at Figure 9, notice sources such as Source IDs 12 and 25 having wide ranges of scores, then notice the mean score for those sources.

[Table 11. CES Tourism Bulletins: Flesch Reading Ease Scores by Authoring Source. For each authoring source (Source ID), the table lists the Flesch Reading Ease score of each bulletin from that source, together with the minimum, maximum, and mean score calculated for each source.]
Table 12
Authoring Sources of On-line Full-Text CES Tourism Bulletins
Variable names: SOURCE ID, SOURCE
1 Arizona
2 Arkansas
3 Clemson/Clemson University
4 Cornell
5 Cornell and New York Sea Grant
6 Great Lakes Sea Grant Network
7 Illinois Sea Grant Indian tip Sheet Series
8 Iowa
9 Kansas State University
10 Kentucky
11 Maine Agricultural Experiment Station
12 Michigan State University
13 Minnesota, University of Minnesota
14 Minnesota Sea Grant, University of Minnesota
15 Mississippi State University
16 New Mexico State University
17 North Dakota State University
18 Purdue
19 Rural Information Center
20 Tourism USA - Chapter 3
21 University of Idaho
22 University of Kentucky
23 University of Missouri
24 University of Tennessee
25 University of Wisconsin, Madison
26 University of Wisconsin Extension
27 West Virginia University
28 Western Rural Development Center
99 Other
Source: National Extension Tourism Database. Available on-line at http://www.msue.msu.edu/imp/modtd/mastertd/html

[Figure 9. Readability Ranges of On-line Full-Text CES Tourism Bulletins by Authoring Source, n=130. Flesch Reading Ease scores plotted by Source ID.]
Discounting wide variations in readability by authoring sources where the total number of bulletins consisted of one or two, one conclusion drawn from this display suggests that, more by coincidence than by design, bulletins as a group from any single authoring source tended to average in the high 30s to high 50s, readability scores typical of academic journals or quarterlies.

Discussion: Findings on the Readability of CES Tourism Bulletins

One important factor to consider when using reading comprehension to assess distance learning performance is the readability of the text material. The CES tourism bulletins sampled in this study were analyzed for readability based solely on surface features. Readability levels of these 130 bulletins were found to fall within the limits of the readability range that is typical for academic journals or quarterlies, a range that approximates the reading levels associated with educational materials suitable for high school graduates or readers with some college. Findings also indicated that variances in the readability of these bulletins do not seem to be related either to the year in which the bulletins were published or to the authoring source. Readability likely becomes more difficult as document length increases, not because of surface features, but because of other reader-related factors, such as fatigue, that are beyond the scope of this study.

When comparing findings of this study with findings from other studies, the first point is to recall the corroborative findings of Mavrogenes (1988) and Klare and Buck (1954) that the reading level of the average U.S. adult was found to be the 9th grade. The CES tourism bulletins that were sampled in this study were found to have a readability level similar to materials that are typically encountered by readers who are high school graduates or have some college. Does this necessarily suggest that the intended readers of these CES tourism bulletins have achieved that formal education attainment level? According to a study by Nehiley and Williams (1980), cited earlier, CES educational materials were found to be written at readability levels higher than the reading abilities of the intended audience.

Holloway (1983) was cited earlier as downplaying the type of medium, such as printed or electronic, in favor of paying attention to the attributes of both the materials and the learners. The focus of the present study was on selected attributes of both the materials and the intended readers, while discovering the frailties of the current authoring process along the way. Electronic versions of these bulletins were typically unedited postings of the printed versions, where the readability checking process was random at best. Kaestle (1991) recognized that any authoring process, even with the aid of readability formula checks, is commonly done in the absence of the target audience. The studies by Kintsch (1987), cited earlier, propose that readability is not a property of a text, but a result of a reader-text interaction. The findings of the present study reinforce Kintsch's findings. This study used a three-stage approach of measuring the readability of text materials, assessing the intended readers' comprehension abilities, and then assessing the degree of match in order to increase the effectiveness of distance learning. Not considering each of these points would leave open questions and lead to questionable conclusions.
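The three-stage approach just described ends in a match check: is a bulletin's measured readability within the range that an intended reader can handle independently? A minimal sketch of that comparison follows. The Flesch Reading Ease cut points and the A-DRP values are rough approximations in the spirit of Table 7, using only figures quoted in this chapter (for example, about 62 A-DRP units for high school textbooks); the actual Table 7 correspondences are not reproduced here, so the numbers below should be read as illustrative assumptions.

```python
# Approximate bands from hardest to easiest text, as described in this chapter:
# each entry is (minimum FRE score, rough A-DRP difficulty for that band).
# These pairings are illustrative assumptions, not the published Table 7 values.
APPROX_BANDS = [
    (80.0, 55),   # about fifth-grade completion material
    (70.0, 58),   # easy for most readers
    (50.0, 62),   # typical documents / high school textbooks
    (30.0, 68),   # academic journals or quarterlies (13th-16th grade)
    (0.0,  72),   # college-graduate material, e.g., scientific magazines
]

def approximate_drp(fre_score: float) -> int:
    # Return the rough A-DRP difficulty for the band containing this FRE score.
    for minimum_fre, drp_units in APPROX_BANDS:
        if fre_score >= minimum_fre:
            return drp_units
    return APPROX_BANDS[-1][1]

def reader_can_comprehend_independently(fre_score: float, reader_drp_p90: int) -> bool:
    # The reader handles the material alone if their independent-level (P = .90)
    # A-DRP score is at least the material's approximate difficulty in DRP units.
    return reader_drp_p90 >= approximate_drp(fre_score)

print(reader_can_comprehend_independently(47.7, 68))  # mean bulletin vs. mean reader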
In the earlier citation of the work by Gray and Leary (1935), where eighty-two factors were used for predicting reading comprehension performance by adults, the effects of each of these factors have been advanced over time by the findings of subsequent studies. The readability analyses in the present study first analyzed selected textual properties, followed by analyses of selected reader attributes. The important point is that the exhaustive studies by Gray and Leary, and by others, isolated some attributes as having a more significant effect on learning than other attributes. Many of these more significant attributes have evolved into the readability formulas in use today. The purpose of the present study was not so much an assessment of these text and reader attributes as it was a correlation study of the degree of reader-material match. This study provided a benchmark of the material readability of CES tourism bulletins and a methodology for equating material readability with the reading comprehension abilities of the intended readers.

According to the study by Chall and Conard (1991), cited earlier, predicting reading comprehension from readability analysis alone is not a new issue. Klare (1988), cited earlier, proposed that learning performance was the result of reader competence (also assessed in the present study), motivation (not assessed here), material content (assessed here in the reading comprehension test), and material readability (assessed here). The findings of the present study add to the research on learning performance by focusing on distance learning performance and assessing learning performance at the independent level of comprehension (P = .90). Rye (1982) advocated thinking about meaning as a critical part of the process of learning by reading. The design of the present study reinforced Rye's view of learning through a methodology that assessed the reader's ability to reason with text. Similarly, the work by Zakaluk and Samuels (1988b), cited earlier, includes both material readability and the reader's comprehension abilities in a process prescribed for predicting learning performance. The present study further advances the improvement of learning performance by assessing learning performance on criterion-referenced measures, that is, material ranked by readability, rather than on norm-referenced measures.

The Reading Comprehension Abilities of Intended Readers

Referring to the design overview depicted in Figure 1, the design for the present study included a second stage in which a reading comprehension instrument was administered to a sample of the intended readers of the CES tourism bulletins. Analyses of the resulting test scores were used to address the issue raised in research question number 3 and to test hypothesis number 2. Research question number 3 was raised to gain a sense of the reading comprehension abilities of the intended readers: "Are these CES tourism bulletins written at an appropriate level of difficulty for their intended readers? At what levels of material readability will the intended readers comprehend at the independent level?" Hypothesis number 2 was stated to compare the readability of text materials with the reading comprehension abilities of the intended readers: "When reading text material, the intended readers of CES tourism bulletins are capable of performing at the independent comprehension level only when the readability of the text is not higher than the readability level of text typically found in high school textbooks."
Additionally, during the administration of the test instrument, the sampled readers were asked to code their highest level of formal education attained on the Participant Profile Form, Appendix G, for later use in addressing research question number 4: "Is there a relationship between the intended reader's level of educational attainment and the reader's reading comprehension ability?"

The third stage of the design depicted in Figure 1 entailed the creation of an approximation table for equating Flesch Reading Ease scores, the material readability measures, with Degrees of Reading Power units, the measures of criterion-referenced reading comprehension. The approximation table that was created was depicted as Table 7 in the Methodology chapter.

Scoring Results, Analysis, and Discussion of Findings

An answer key provided by the test vendor was used for hand scoring the Advanced DRP test, version T-2. The numbers of correct responses were added to derive a raw score total for each participant. The item responses of right, wrong, or blank, and the raw score total of correct responses for each participant, were then entered on a computerized spreadsheet to create a profile record for each participant. The participants' item responses, scores, education levels, and genders are shown in Table 13. The display of the raw scores for all test-takers is found in Figure 10. From Table 13, the mean, 19.1, and median, 20.0, indicate an approximately normal distribution. Raw scores cannot be compared across different versions of the test, such as T-2 and T-4; therefore, raw scores were converted to Advanced DRP scores using Appendix B in the Advanced DRP Handbook (Touchstone Applied Science Associates, 2002). In the present study, this conversion was important for determining reading comprehension on an absolute interval scale that approximates the readability scale selected for gauging text readability. The resulting A-DRP scores at the independent level (P = .90) indicated the difficulty of materials, in DRP units, with which a student can effectively analyze, evaluate, and extend the ideas presented. The comprehension level (P-value) of .90 was selected as the level of comprehension for independent learning for this study, but other values could be easily computed.

[Table 13. Reading comprehension test results for the 19 participants: item responses (right, wrong, or blank) to the 24 test items, raw scores, education levels, and genders, with descriptive statistics.]

[Figure 10. Reading Comprehension Scores of Intended Readers of CES Tourism Bulletins. Histogram of the number of test participants by raw score (number of correct responses out of 24 items).]
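The scoring and conversion steps just described reduce to counting correct responses against the vendor's answer key and then looking the raw total up in a conversion table. A minimal sketch follows; the answer key, the responses, and the conversion entries are hypothetical, since the actual key and the Appendix B table belong to the test vendor. The two anchor points shown (a raw score of 19 mapping to roughly 68, and 21 to 72 A-DRP units at P = .90) follow the values reported in this chapter.

```python
# Hypothetical 24-item answer key; the real key is supplied by the test vendor.
ANSWER_KEY = list("BDAC" * 6)

# Hypothetical fragment of a raw-score to A-DRP conversion at the independent
# level (P = .90); the published table is Appendix B of the Advanced DRP Handbook.
RAW_TO_ADRP_P90 = {17: 64, 18: 66, 19: 68, 20: 70, 21: 72, 22: 74}

def raw_score(responses, key=ANSWER_KEY):
    # Count correct responses; items left blank (None) simply do not count.
    return sum(1 for given, correct in zip(responses, key)
               if given is not None and given == correct)

def a_drp(raw, table=RAW_TO_ADRP_P90):
    # Convert a raw score to an A-DRP value via the (illustrative) lookup table.
    return table.get(raw)

participant = list("BDAC" * 5) + ["B", None, "C", "C"]   # 22 of 24 items correct
print(raw_score(participant), a_drp(raw_score(participant)))
```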
The test results after conversion of the raw scores to A-DRP units are shown in Figure 11. Table 7, described earlier, provides an explanation of the meaning of the A-DRP unit values.

In analyzing item response patterns in Table 13, typical responses showed that a participant's ability to reason with text decreased as the text material became more difficult. Note that two participants with identification numbers (ID) of 0822 and 1222 may be examples of "cold-start" test-takers. Once acclimated, the item response patterns for these participants indicated that they had no further difficulty with the test. Note that one participant, with an ID of 1212, was unable to finish the test due to a scheduling conflict. Sometimes a participant would answer all items in the same passage incorrectly. Examples of this are ID 1216 on passage 8, ID 0225 on passage 5, and ID 1006 on passage 5. This could have been due to the participant's unfamiliarity with the passage subject, the difficulty of the text material, or a limit to the reasoning ability of the participant. In the case of ID 1216, the ability to reason with the test passages dropped after passage 4. This may be a good example of the "falling off" item response pattern expected when the reasoning ability limit of the test-taker is reached. The item response patterns of ID 0225 showed immediate recovery after incorrect responses to items in passage 5. Analysis of item response patterns for participant 1006 indicated a slight recovery at passage 6 but a clear falling-off for the remainder of the test. There were no raw score results that fell into the chance level where item response patterns would indicate pure guessing. From the descriptive statistics shown in Table 13, the raw scores suggested that the test-takers had little problem with the abilities that were tested in this particular instrument.

[Figure 11. Advanced DRP Scores of Intended Readers of CES Tourism Bulletins. Histogram of the number of test participants by Advanced Degrees of Reading Power test score.]

The reasoning abilities of this sample of intended readers are best evidenced when eighty-three percent, eleven of nineteen, correctly answered 20 or more items out of 24. This raises a question about the appropriateness of this instrument, as the test-takers' scores approached the instrument's ceiling. The raw scoring results (see Figure 10) suggest that this instrument was perhaps not particularly well centered, at least for this sample of test-takers.

Advanced DRP test results are reported as criterion-referenced, not norm-referenced, scores. Reported A-DRP scores must always be accompanied by a P-value, the level of comprehension. For learning performance at the independent level, the P-value for this study was established at P = .90. This means that a participant was able to comprehend ninety percent of the material up to the readability level indicated by the accompanying A-DRP score. Because the objective of the present study was assessing distance learning performance, where independent, unassisted comprehension is key, assessing and reporting results at the independent level was appropriate.

An analysis of A-DRP scores (see Figure 11) provided a comparison to readability ratings for literature that approximates some of these A-DRP scores:
72 = typical introductory college textbooks in accounting and economics
70 = front page articles in newspapers; employment manuals
65 = state-issued driver's handbooks; consumer articles in adult general interest magazines
51 = Treasure Island, The Call of the Wild
[Source: Touchstone Applied Science Associates, TASA web site].

The mean and median A-DRP scores for this sample of readers were approximately 68 DRP units, per Table 13. This indicated that the intended readers were able to reason independently (P = .90) with text materials written at the level of articles found in adult general interest magazines, per Table 7.

Reading Comprehension Abilities versus Formal Education

The question of a possible relationship between an intended reader's reading comprehension ability and that reader's formal education was raised in research question number 4: "Is there a relationship between the intended reader's level of educational attainment and the reader's reading comprehension ability?" The information about the highest level of formal education attained was taken from the disclosures by test-takers on the Participant Profile form (see Appendix G) and is displayed in Table 13. A frequency distribution of the attained education data is displayed in Table 14.

Table 14
Participants' Highest Level of Formal Education Attained
Highest formal education level attained: number of participants
18 (Graduate degree): 4
17 (Some graduate work): 1
16 (Bachelor's degree): 6
14 (Associate's degree): 4
13 (Some college): 3
12 (Completed high school): 1

The declared values of the highest formal education level attained ranged from high school completed through graduate degree completed. All but one participant had attended college, 6 of 19 had attained a bachelor's degree, and 4 of 19 had a graduate degree. From the reading comprehension test, the raw score mean was 19.1, and the mode was 21. When converted to A-DRP scores at P = .90, the values were 67.9 and 72 respectively. These test scores indicated that these readers were capable of independently comprehending text materials written in the range of adult general interest magazines, first-year college texts, and the low end of the range for professional journals. In Table 7, this range approximates an estimated reading grade range between 13th to 16th grade and just under college graduate. When compared to the declared education attainment levels, these findings indicate that a close relationship exists between formal education level and the reading comprehension ability of the 19 participants.

Comparing Readability versus Reader's Comprehension Abilities

The findings from the first stage of the study indicated that the mean readability value for the 130 educational bulletins sampled was calculated to be 47.7 on the Flesch Reading Ease scale. Using the approximation table, Table 7, this value fell into the upper end of the range of text materials that were labeled as "difficult", materials typically encountered by 13th to 16th grade students. In the second stage of the study, the limited sample of intended readers was found to be capable of independently comprehending text materials written at the level of materials typically found in the range of adult general interest magazines, first-year college texts, and the low end of professional journals. In Table 7, these materials approximate the 13th to 16th grade reading levels. On the basis of these findings, it appears that these materials, on average, were written at a level that the intended readers were capable of independently comprehending.

When adding two reading grades to the finding that the reading level of the average U.S. adult is the 9th grade, as stated by Mavrogenes (1988), cited earlier, and Klare and Buck (1954), the findings of this study concur with the findings of Johnson and Verma (1990), cited earlier, that material written by the Alabama Cooperative Extension Service was over two grades higher than the reading grade level of the average U.S. adult.

Hypothesis Number 2

Hypothesis number 2 was stated to compare the readability levels of text materials with the reading comprehension abilities of the intended readers: "When reading text material, the intended readers of CES tourism bulletins are capable of performing at the independent comprehension level only when the readability of the text is not higher than the readability level of text typically found in high school textbooks." Following the information presented in Table 7, the findings of this study indicate that the mean readability Flesch Reading Ease score of the CES tourism bulletins sampled was 47.7, falling in the 13th to 16th grade level range of 30 to 50. With a mean of 47.7, the CES tourism bulletins sampled were written at a level slightly more difficult than high school textbooks, 50 to 60 in Table 7. From Table 7, material found in high school textbooks averages 62 A-DRP units. The mean score from the reading comprehension test of 19 participants was 67.9 A-DRP units at P = .90. This indicates that the readers were capable of independent comprehension of materials written not only at the level of high school textbooks but also of somewhat more difficult materials.

Therefore, the findings from this study do not support hypothesis number 2. The participants in this study were able to independently comprehend materials having readability levels more difficult than average high school textbooks. It bears repeating that the small sample of "intended readers" in this study was a demonstrative, rather than a generalizable, sample size. Nevertheless, the findings from this sample not only served to test the methodology but also provided a preliminary sense of the capabilities and attributes of the intended readers.

CHAPTER 5
CONCLUSIONS, IMPLICATIONS, AND RECOMMENDATIONS

The Purposes of this Study

This study was undertaken to examine a perception that Cooperative Extension Service educational materials are difficult to read. This study examined one category of Cooperative Extension Service educational materials, CES tourism bulletins designed for use in tourism industry education. The purposes of this study were:
1. to measure the readability of one type of CES educational publication -- tourism bulletins.
2. to demonstrate a methodology for measuring the reading comprehension abilities of the intended readers of these bulletins.
3. to examine the relationship that exists between the readability of educational materials and the reading comprehension abilities of their intended readers.
4. to present a methodology for improving distance learning performance in a way that matches the readability of educational materials with the reading comprehension abilities of their intended readers.

Hypotheses

Hypothesis number 1 stated that: "CES tourism bulletins are written at a readability level that is less difficult than the average academic journal or quarterly." The findings of this study indicate that the mean readability level of the bulletins sampled fell within the range of academic journals or quarterlies. Therefore, the findings of this study do not support hypothesis number 1.

Hypothesis number 2 stated that: "When reading text material, the intended readers of CES tourism bulletins are capable of performing at the independent comprehension level only when the readability of the text is not higher than the readability level of text typically found in high school textbooks." Based on a small convenience sample of readers, the preliminary findings of this study indicate that the intended readers are capable of independently comprehending text materials written at this level as well as materials that are more difficult. Therefore, the findings of this study do not support hypothesis number 2. To repeat, these findings are from a small sample, n=19, and are not generalizable.

Methodology

Of all the alternatives considered, the approach chosen for this study was aimed at the most fundamental aspect of the "difficult to read" perception. This study was designed to assess the readability of materials by measuring the surface features of text. The design of this study included traditional measures of material readability: readability formulas. The second stage of this study involved administering a commercially available instrument to a small sample of willing participants in order to demonstrate a methodology for assessing the reading comprehension abilities of the intended readers of these materials. The test scores that resulted from the instrument were criterion-referenced scores, indicating the most difficult material that each participant could independently comprehend. The scores, that is, the readability level of text, were then approximated to the readability scores of the materials produced from the readability analysis. This information was then used to address the problem of the study, the perception that CES educational materials are difficult to read.

Summary of Findings

Findings from the first stage of the research design indicate that, based on an over-sampling of 130 on-line full-text tourism bulletins, the mean readability level of CES tourism bulletins is within the readability range that is typical for academic journals or quarterlies. This range approximates the reading levels associated with educational materials suitable for high school graduates or readers with some college. These findings do not support hypothesis number 1, which stated that the readability of all sampled bulletins is less difficult than that of typical academic journals or quarterlies. This may be largely due to the educational attainment level of the authors, as most academics tend to write to their peers and are not necessarily trained to write to an extension reader audience.

The easiest-to-read bulletin was found to have been written at approximately the fourth grade completion level. The most difficult bulletin was found to be written at a readability level typically associated with material appropriate for college graduates. About ninety percent of the bulletins analyzed fell within the readability range of materials that are typically encountered by readers ranging from the sixth grade through some college completed. Study findings also indicate that variances in the readability of these bulletins are not strongly related to the year in which the bulletins were published, nor to the authoring source, nor to the length of the bulletins in number of words.

The second stage of this study proposed a methodology for assessing the reading comprehension abilities of the intended readers of these bulletins. A standardized reading comprehension instrument was administered to a small convenience sample of readers. The findings from this demonstration sample indicate that the intended readers of these bulletins are capable of independently reasoning with text materials written at the average readability level of high school textbooks. The readers in this sample were also found to be capable of independently reasoning with even more difficult text material. An approximation table was created in the third stage of the study to equate Flesch Reading Ease readability scores of textual material with Degrees of Reading Power units, a measure of the reading comprehension abilities of readers using a criterion-based instrument.

Conclusions

Are CES tourism bulletins difficult to read? The findings from this study show that, when defining "difficult to read" as the readability of written words, nearly ninety percent of these bulletins were written at levels suitable for readers ranging in reading abilities from the sixth grade completion level to the completion of some college. Based on preliminary findings, this readability range is well within the comprehension abilities of the intended readers. The bulletins that would likely be "difficult to read" are the few remaining bulletins that share readability levels with materials that are more difficult, such as materials often encountered by college graduates.

When defining "difficult to read" in ways other than a readability assessment based purely on the surface features of written words, the answer can vary. When
This study also examined the relationship between text readability and other text attributes such as document length, authoring source, and year of publication. No strong relationships were found. Additional research is needed to examine other factors that may contribute to this perceived “difficult to read” problem. The findings of this study indicate the presence of a close relationship between the intended readers’ reading comprehension ability and their highest level of formal education attained. Additional. text material attributes, such as the effect of adjunct comprehension aids, and learner attributes, such as the effect of prior subject knowledge, interest, motivation, etc., should be explored. The investigation of these and other attributes were beyond the scope of this study. Implications of the Study This study measured the readability levels of materials in one subject area, tourism subjects, and one type of educational materials, bulletins, used in Cooperative Extension Service education. The same approach that was used in this study can be used to assess the readability levels of other subjects or types of educational materials. Three fundamental clusters of factors that influence reading comprehension were assessed in 109 this study. In Table 1, these factors were identified. Many of the factors associated with text were examined through analyses of surface features of text, factors commonly expressed in readability formulas. The specific reader attributes listed in Table l were not the focus of this study. Rather, a more encompassing attribute, reading comprehension ability, was assessed. This attribute was presented in Table 1 and attributed to Johnston (1983) and Binkley (1988), cited earlier. The environment attributes investigated in this study pertained to learning performance in a distance learning setting. In short, this study assessed each of the three fundamental clusters. The findings of this study indicate that both the reading comprehension abilities of the intended readers and the readability level of the materials are, on average, approximately equal at the 13‘h to 16‘‘1 grade reading levels. If these preliminary findings hold in future studies with larger reader sample sizes, the question remains: Is this level higher, “more scholarly”, and thereby less effective according to the belief of Misanchuk (1994), cited earlier? Misanchuk believed instruction is more effective when materials are written at speaking levels rather than at higher scholarly levels. If considering 9“ grade the average reading level for US. adults (Mavrogenes, 1988), cited earlier, are these bulletins considered “more scholarly” at the 13‘‘1 to 16“1 grade readability level? This study produced preliminary findings that indicate a close match between the readability levels of sampled materials and the intended readers’ reading comprehension abilities, based on a limited sample of readers. Fry (1988), cited earlier, found that a close match is a key to improving communications and learning. The findings from this study also indicate that these materials are written at a readability level that is slightly below the highest formal education level for the small 1 10 number of readers sampled. This finding aligns with a principle favored by Fry (1988), cited earlier, who advocated writing at a slightly lower level than the proper sophistication level of the intended audience. 
The methodology demonstrated in this study illustrates the importance of matching reader’s abilities with material readability. This position is advocated by Threlkeld and Brozoska (1994), cited earlier, who state this as key to the development of effective distance education courses. The findings from this study contribute a benchmark of the readability levels of CES tourism bulletins. Text readability was assessed in this study through the use of readability formulas and, using the advice of Huggins and Adams (1980), cited earlier, a step was added to demonstrate a way to assess the reading comprehension abilities of the intended readers. The preliminary findings from this study do not corroborate findings of Nehiley and Williams (1980) and Johnson and Verma (1990), cited earlier, that CES educational materials are written at a readability level that is higher than the reading abilities of the intended audience. In this study, preliminary findings suggest that these values were approximately equal. The findings of this study provide support to research findings by Liptak (1991), cited earlier, on the benefits of using readability formulas when writing for Extension audiences. Are readability scores based on surface features of text an adequate measure of the educational effectiveness of educational materials? Educational materials have evolved from primarily written text to more visual communications vehicles such as figures, charts, color and other adjunct comprehension aids. When these materials are absent of 111 these aids, the conveyance of meaning relies singularly on written text. Incorporating these aids would likely increase their educational effectiveness by conveying meaning in ways that are visually appealing and meaningful for audiences having diverse learning styles. As stated earlier, this study was focused on the readability assessment of educational materials, and readability formulas have proven validity and reliability. Other approaches, other studies, may find other ways to measure effectiveness. This study broaches the subject of understanding cognitive processes involved in reading comprehension. Although this study did not focus on identifying or isolating specific cognitive processes nor on advancing cognitive theory, the findings of this study do underscore the importance of recognizing the role of cognitive processes in understanding reading comprehension. Like the research by Kintsch (1987), cited earlier, the findings from this study indicate that readability is a result of a reader-text interaction. The study findings do not favor Kintsch’s (1987) cognitive propositional structure theory over the schematic theories of Anderson (1977), Rumelhart (1980) and others. There is no empirical evidence from the present study to conclude which cognitive theory is at work in the understanding of readability. The findings of this study add evidence to the belief that unobservable cognitive factors are present in the reading comprehension processes invoked in distance learning. This counters the beliefs of Valencia and Pearson (1987), cited earlier, that favor behavioral observations as the best possible assessment of reading. The role of one cognitive factor, an individual’s prior knowledge of subject matter, was addressed in this study. Although unable to discover an instrument to assess the intended readers’ prior 112 knowledge of tourism subjects, the prior knowledge assessed in this study was on general subjects. 
No conclusions on the effect of prior knowledge on learning performance can be made from the findings from this study. The findings from this study support Klare (1988), cited earlier, in the belief that learning performance is the result of interactions between reader competence, material content, and material readability. This study underscores support for the Interactionist theory for examining reading comprehension, as advocated by Binkley (1988), cited earlier. When comparing the findings from the present study with the research of Zakaluk and Samuels (1988b), cited earlier, the methodology developed in the present study provides a model for assessing the readability levels of both materials and the intended readers. This study went beyond assessing learning performance in a classroom setting where behavior is observable to developing a methodology for improving performance in distance learning, where learning behaviors are not observable. Findings on the readability levels and reading comprehension levels of the intended readers can benefit authors of Cooperative Extension Service educational materials. When authoring educational materials, authors are encouraged to include readability level checks in the process. The readability formula instruments used in this study are readily available on word processing software typically used by authors of educational materials. The authors should be sensitive to reading comprehension abilities of the intended readers. Until further studies yield more precise and statistically significant findings on the reading comprehension abilities of the intended readers, the goal for authors should be to strive for facilitating independent comprehension by centering the readability level when creating educational materials. This will 113 accommodate readers performing at lower comprehension levels while reducing the risk of boredom for readers who are performing at higher levels. The important point for authors and Extension educators to remember is that the intended reader is typically operating in the independent learning mode, distance learning without instructional assistance available. Unfortunately for the intended audience of these educational materials, the findings from this study offer little immediate benefit. Any hopes for long-term improvements in learning performance will accrue only as current materials are reviewed or possibly rewritten by authoring sources. Until remedied, this spiral of frustration will worsen as readers attempt to comprehend materials written at levels too difficult to understand. The lack of instructional assistance will likely further erode the current under-utilization of these educational materials. Recommendations for Future Studies 1. Additional research is recommended to address other factors related to learning performance such as the impact of adding adjunct comprehension aids in designing text material, and the effect of the intended reader’s cognitive attributes, such as reading comprehension ability, prior subject knowledge, motivation, and interest. What predictions about distance learning performance can be made based on these factors? Is a multiple regression formula approach appropriate? Would such a regression formula essentially replicate earlier attempts such as those by Gray and Leary, 1935, cited earlier, to arithmetically gauge or predict learning performance? 
Recommendations for Future Studies

1. Additional research is recommended to address other factors related to learning performance, such as the impact of adding adjunct comprehension aids when designing text material, and the effect of the intended reader's cognitive attributes, such as reading comprehension ability, prior subject knowledge, motivation, and interest. What predictions about distance learning performance can be made from these factors? Is a multiple regression approach appropriate? Would such a regression formula essentially replicate earlier attempts, such as those by Gray and Leary (1935), cited earlier, to arithmetically gauge or predict learning performance? According to Touchstone Applied Science Associates (2002), "On Standard DRP tests, a regression equation is used to produce forecasts of comprehension success on text at given levels of readability. A similar regression equation could be derived to predict a reader's likelihood of success in reasoning with prose. While the previous analyses demonstrated that the difficulty of Advanced DRP test items is related to the difficulty of the text in which they appear, further analyses are required to develop a stable regression equation for describing this relationship" (p. 42). Can the cognitive processes that are engaged in reasoning and comprehension be isolated? If so, can these processes then be expressed arithmetically for use in regression equations such as readability formulas? (A minimal sketch of such a forecasting equation follows this list of recommendations.)

2. Studies are needed to determine how well a reader's comprehension level on general subjects predicts comprehension levels on specific subjects, for example, tourism subjects.

3. Is there a relationship between an individual's reading comprehension ability and writing ability? Once an individual's reading comprehension ability has been assessed, would samples of passages written by that individual, scored with a readability formula, measure at approximately the same level? In the construct of communications, do individuals receive and send at the same level?

4. The potential under-utilization of these CES tourism bulletins is an issue that needs to be addressed. This belief stems from a sense, gained during this study, of a general lack of awareness of this body of knowledge, not only within the population of intended readers but also among Extension educators. The problem is compounded by funding issues that have "dried up" the authoring of new bulletins and suppressed the staffing of Extension agents who are knowledgeable in tourism and business subjects. What would a survey of a target population's usage of CES bulletins over the past year find?

5. Is there a more appropriate instrument that could be administered to assess reading comprehension? Consider the selection criteria used in this study and the finding that scores approached the ceiling of the Advanced DRP test.

6. Does the medium of the educational materials make a difference in learning performance? One approach might involve pre- and post-testing one group's learning performance on printed versions of the material against a second group's performance on electronic versions.

7. Studies are needed to compare the design and layout effectiveness of educational materials. The pre- and post-test learning gain of a control group of readers exposed to text-only versions of materials should be compared with the learning gain of a treatment group exposed to versions of the same materials embellished with adjunct comprehension aids.

8. More research is needed on the subjects contained in these CES tourism bulletins and their relevance to the intended readers' present or future occupations.

9. More research is needed to benchmark and compare the readability of other documents, such as correspondence and trade journal articles, that are encountered in any given occupation. How do the readability levels of these materials compare with the readability level of educational materials?

10. The authoring process currently in place for these CES tourism bulletins needs to be reviewed.
The approach currently being used to create electronic "on-line" versions could be described as a "cut and paste" of the printed materials. This approach is quick and relatively inexpensive. The trade-off is the potential loss of learning performance improvements that are possible through careful consideration of enhancements such as adjunct comprehension aids and interactive learning designs.

11. Rigorous studies are needed to more accurately assess the reading comprehension abilities of the intended audiences of CES educational materials. In the present study, a small convenience sample of nineteen readers was assembled primarily to demonstrate the methodology. Limited studies have assessed the reading levels of U.S. adults (Mavrogenes, 1988; Klare & Buck, 1954; Chall, 1983). What are needed are studies that assess the intended readers of specific populations, for example tourism industry owners and managers, with statistically significant sample sizes. Gaining this understanding will minimize the assumptions now being made by instructional designers and authors about the intended audience of CES educational materials.
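As a minimal sketch of the forecasting idea raised in recommendation 1, the fragment below expresses comprehension success as a logistic function of the gap between a reader's score and a text's difficulty on a common readability scale (such as DRP units). The functional form, the coefficient, and the example values are all hypothetical; they are not estimates from this study or from the DRP program, and deriving stable coefficients would require the larger samples called for in recommendation 11.

```python
import math

def forecast_comprehension(reader_score: float, text_difficulty: float,
                           slope: float = 0.12) -> float:
    # Hypothetical logistic forecast: probability of comprehension success as a
    # function of the reader-minus-text gap on a shared readability scale.
    gap = reader_score - text_difficulty
    return 1.0 / (1.0 + math.exp(-slope * gap))

# Illustration only: with these made-up values, a reader scoring 10 units above
# the text's difficulty is forecast to comprehend with roughly 77% success.
print(round(forecast_comprehension(75, 65), 2))
```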
APPENDICES

Appendix A: The National Extension Tourism Database - Printed Version

The National Extension Tourism Database is an on-going effort of the Michigan State University Extension Tourism Area of Expertise and the National Tourism Education Design Team. The project began in 1991. The purpose of the database is to provide a comprehensive inventory of Extension resource materials related to tourism education. By knowing what already existed throughout the U.S., Extension educators could conveniently use resource materials from other states. Also, gaps in subject areas would be identified, which could encourage new publications to be produced. Currently the database contains over 250 Extension resource materials from 35 states. Over 90 documents are on-line in full text. The database is on the Internet at two addresses:

www:tourism.ttr.msu.edu
http://www.msue.msu.edu/msue/imp

To add resource materials to the tourism database, send the publication, text on a disk, or Internet link address to: Phil Alexander, MSU Extension, 800 Livingston Blvd. - Suite 4A, Gaylord, MI 49735.

Criteria:
- Produced by Extension. Resource materials from USDA, Ag Experiment Stations, and Land Grant Universities will be considered on an individual basis.
- Resource materials should be current. 1985 is the general cut-off date, but older materials will be considered on an individual basis.
- Resource materials include Extension bulletins, research reports, videos, training guides and notebooks, posters, and PowerPoint slide shows.

Source: Michigan State University Tourism Area of Expertise Team. Tourism Educational Materials. (1998, September). East Lansing, MI: Michigan State University.

Appendix B: The National Extension Tourism Database - Electronic Version

Welcome to the National Tourism Database! The Michigan State University Extension Tourism Area of Expertise and the National Tourism Education Design Team make this service possible. The purpose of the database is to provide a comprehensive inventory of Extension resource materials related to tourism education and to make this information conveniently available. Currently the database contains over 250 Extension resource materials including bulletins, research reports, videos, and training programs. Nearly 100 documents are on-line in full text.

List of Full Text Articles
Links to What's New - Publications added within the past 6 months
Links to Other Tourism web sites
Archived Publications
About This Database of Tourism Educational Materials

Search This Database by Topic:
Ag/Tourism
Bed and Breakfast
Brochure Development
Coastal Tourism
Community Tourism Planning and Development
Cultural/Historical Tourism
Economic Impact Tourism
Ecotourism/Nature-Based Tourism
Employee Management
Exhibit Development
Extension Resource People
Financial Management
Food Service
Handicapped/Disabled Travelers
Hospitality/Customer Service

Appendix C: An Example of a CES Tourism Bulletin Downloaded into Microsoft Word

Michigan State University Extension Tourism Educational Materials - 33200016 (08/26/00)
Tourism: Greeting the Guest
Tom Quinn
Michigan State University Extension Bulletin E-1381, January 1986 Reprint

Tourism is a people-pleasing business. Beautiful lakes, forests, parks, rock formations, historic sites, resorts, museums, and recreation facilities are of little value unless the people visiting them feel welcome and are treated courteously. Tourism is people oriented and people dependent. Visitors must be pleased with what they see and experience in their contacts with local people. The name of the tourism game is HUMAN RELATIONS. A people failure in any tourist related business spells disaster. The finest motel, restaurant, gift shop or ski resort cannot survive if its employees have a negative attitude toward tourists. Visitors expect a pleasant experience. A positive attitude of the local people toward visitors, and their courtesy, warmth, friendliness and sincere willingness to serve are the basis for that pleasant experience. People remember their travel experiences for a lifetime, often as their fondest memories. It is the job of the tourist business employee to make these memories as pleasant as possible. Attitude, technical competence, appearance, and personality are four important qualifications that a good tourist business employee must possess. This bulletin briefly outlines each of these qualifications.

Appendix D: Procedure for Calculation of Readability Statistics

1. Using a personal computer, prepare Microsoft Word for analyzing readability. [Microsoft Word for Windows 95, Version 7.0c, was used by the author.]
   - From the desktop screen, open Microsoft Office and Microsoft Word.
   - Using the Tools/Options/Grammar tab:
     - Set (off) check spelling.
     - Set (on) show readability statistics.
   - Set up a custom setting via "customize settings":
     - Grammar: turn all off.
     - Style: turn all off.
     - Under catch, set all to "never".
     - Set "Sentence containing more words than" to 100.
     - Click OK.
   - Upon returning to the Grammar tab, set the new custom setting (e.g., custom1) under "use grammar and style rules." Click OK and proceed with the first bulletin.
2. Select bulletin(s):
   - Access the National Extension Tourism Database website on the Internet.
   - http://www.msue.msu.edu/msue/imp/modtd/mastertd.html
   - Go to "full text articles."
   - Select a bulletin.
3. Repeat the following steps for each bulletin:
   A) Using Edit/Select All, copy and paste the entire bulletin into a new Microsoft Word file.
   B) Record the following information from Microsoft Word on a spreadsheet:
      - Publication ID (e.g., 33420040); variable name = PUBID.
      - Title: the first 10 characters of the publication title; variable name = TITLE.
      - Year of publication (DATE). Enter n.d. if no date.
      - Source: authoring source (SOURCE); see Appendix.
   C) Delete the "boiler plate" text at the end of the bulletin (MSU, EEO information).
   D) Run readability statistics on the bulletin.
      - Use Tools/Grammar and the custom settings set above.
      - The readability statistics will display. Record on a spreadsheet:
        - Words: length of bulletin in words (variable name = WORDS)
        - Flesch Reading Ease Score (FRESCORE)
        - Flesch-Kincaid Grade Level (FKGL)
        - Coleman-Liau Grade Level (CLGL)
        - Bormuth Grade Level (BGL)
When the last bulletin has been copied to Microsoft Word, reset Tools/Options to turn spelling back on, then exit the website.

Appendix E: An Example of Calculating the Readability Level of Text Material

Michigan State University Extension Tourism Educational Materials - 33200016 (08/26/00)
Tourism: Greeting the Guest
Tom Quinn
Michigan State University Extension Bulletin E-1381, January 1986 Reprint

Tourism is a people-pleasing business. Beautiful lakes, forests, parks, rock formations, historic sites, resorts, museums, and recreation facilities are of little value unless the people visiting them feel welcome and are treated courteously. Tourism is people oriented and people dependent. Visitors must be pleased with what they see and experience in their contacts with local people. The name of the tourism game is HUMAN RELATIONS. Once you know people are interested in what you are offering for sale, ask questions of them and listen intently to their answers. This will help you sense their buying motives, purchasing ability, and real interests. It would be futile to try to convince someone who is afraid of heights to climb to the top of the fire tower south of town. Likewise, if people cannot afford luxury accommodations, don't suggest lodging in that price range. Visitors must be satisfied with what they see and experience in their dealings with local people. The tourist industry is people oriented and people dependent. After all--what is a lodging facility without people?

Readability statistics:
Counts: Words 186; Characters 991; Paragraphs 3; Sentences 12
Averages: Sentences per Paragraph 4.0; Words per Sentence 15.5; Characters per Word 5.2
Readability: Passive Sentences 33%; Flesch Reading Ease 43.7; Flesch-Kincaid Grade Level 11.0; Coleman-Liau Grade Level 12.8; Bormuth Grade Level 10.6
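The procedure in Appendices D and E records these statistics one bulletin at a time through Word's grammar checker. Purely as an illustration of how the same recording step could be batched, the sketch below assumes the third-party Python package textstat is installed and that each bulletin has been saved as a plain-text file named by its publication ID; the output fields mirror the variable names used in Appendix D. Neither the package nor the folder layout is part of the study's procedure.

```python
import csv
import pathlib

import textstat  # third-party package; an assumption, not part of the study's procedure

rows = []
for path in sorted(pathlib.Path("bulletins").glob("*.txt")):
    text = path.read_text(encoding="utf-8")
    rows.append({
        "PUBID": path.stem,                                # e.g., 33420040
        "WORDS": len(text.split()),
        "FRESCORE": textstat.flesch_reading_ease(text),    # Flesch Reading Ease score
        "FKGL": textstat.flesch_kincaid_grade(text),       # Flesch-Kincaid grade level
    })

with open("readability.csv", "w", newline="", encoding="utf-8") as outfile:
    writer = csv.DictWriter(outfile, fieldnames=["PUBID", "WORDS", "FRESCORE", "FKGL"])
    writer.writeheader()
    writer.writerows(rows)
```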
Appendix F: Calculating Readability Using the Flesch Reading Ease Formula

1. Select samples: start at the beginning of a paragraph; take three to five samples of an article.
2. Count the number of words; 100-word samples are sufficient.
3. Calculate the average sentence length: divide the number of words in the combined samples by the number of sentences.
4. Count the syllables and divide the number of syllables by the number of samples.
5. Count the "personal words" (i.e., first-, second-, and third-person pronouns; gender references; and group words such as "people") and divide the total number of "personal words" by the number of samples.
6. Count the "personal sentences" (spoken sentences, questions, exclamations, grammatically incomplete sentences) and divide by the number of sentences. (The counts in steps 5 and 6 are used for Flesch's companion Human Interest score rather than for the Reading Ease Score.)
7. Calculate the Reading Ease Score:
   Average sentence length in words x 1.015 = ____
   Number of syllables per 100 words x 0.846 = ____
   Add the two products = ____
   Subtract this sum from 206.835 = ____
   The result is the Reading Ease Score; that is, Reading Ease = 206.835 - (1.015 x average sentence length) - (0.846 x syllables per 100 words).

Source: Flesch, R. (1949). The art of readable writing. New York: Harper & Row. pp. 213-216.

Appendix G: Participant Profile Form

The following information will be used to insure your confidentiality and anonymity:
Last name: ______  First: ______  Middle initial: ___
Contact phone number: ___-____
Please indicate the month __ and day __ of your MOTHER's birthday.
Briefly describe your (or your employer's) business (for example, retail clothing store).
What is your position (for example, owner, manager, etc.)?
Please indicate your highest level of formal education attained: __
  XX  Grade level completed (for example, 07 for seventh grade)
  12  Completed high school
  13  Some college
  14  Associate's degree
  16  Bachelor's degree
  17  Some graduate work
  18  Graduate degree
Thank you for taking the time to participate in this study!
Do not complete the following. This information is for research purposes. ID ___ Raw ___ In ___ Is ___ F ___

BIBLIOGRAPHY

Achterberg, C., VanHorn, B., Maretzke, A., Matheson, D., & Sylvester, G. (1994). Evaluation of dietary guideline bulletins revised for a low literate audience. Journal of Extension, 32(4).

Anderson, R. C. (1977). The notion of schemata and the educational enterprise: General discussion of the conference. In Anderson, R. C., Spiro, R. J., and Montague, W. E. (Eds.). Schooling and the acquisition of knowledge. Hillsdale, NJ: Erlbaum.

Archer, B. B. (1972). Florida A & M programs annual extension report. Tallahassee, FL: Cooperative Extension Service.

Ary, D., Jacobs, L. & Razavieh, A. (1996). Introduction to research in education (5th ed.). Fort Worth, TX: Harcourt Brace College Publishers.

Balachandran, B. (1997). Readability standards of newsletters. (Master's thesis, California State University - Fresno, 1997). Master's Abstracts International, 36, 01, (1997): 0005.

Barteaux, J. A. (1990). Development of an evaluation methodology for health education materials. (Master's thesis, Dalhousie University, 1990). Master's Abstracts International, 30, 04, (1990): 0974.

Baxter, K. M. (1992). Reader-text match: The interactive effect of reader ability and text difficulty on comprehension monitoring. (Doctoral dissertation, University of Northern Iowa, 1992). Dissertation Abstracts International, 53, 08A, (1992): 2751.

Betts, E. A. (1946). Foundations of reading instruction. New York: American Book.

Binkley, M. R. (1988). New ways of assessing text difficulty. In Zakaluk, B. and Samuels, S. (Eds.). Readability: Its past, present and future. Newark, DE: International Reading Association.

Blankenship, J. C., Colvin, R. J. & Laminack, L. L. (1993). Tutor: A collaborative approach to literacy instruction (7th ed.). Syracuse, NY: Literacy Volunteers of America, Inc.

Bly, M. G. (1994). The annual report as a public relations marketing tool: A qualitative and quantitative study of a selected sample to test readability. (Master's thesis, Central Missouri State University, 1994). Master's Abstracts International, 33, 04, (1994): 1030.

Boone, K. M., & Smith, K. L. (1996). Clients reach higher levels of cognition through publications.
Journal of ExtensionL34 (4). Bormuth, J. R. (1967). Comparable cloze and multiple-choice comprehension test scores. Journal of Reading, 10, 291-9. Bormuth, J. R. (1971). Development of standards of readability: Toward a rational criterion of passage performance. Chicago: University of Chicago. (ERIC Document Reproduction Service No. ED 054233). Bransford, J. D., & Johnson, M. (1972). Journal of Verbal Learning and Verbal Behavior, 1, 717-726. Bransford, J. D. & McCarrell, N. S. (1974). A sketch of a cognitive approach to comprehension. In W. B. Weimer and D.S. Palermo. (Eds). Cognition and the symbolic processes. Hillsdale, NJ: Erlbaum. Brown, J ., Fishco, V. & Hanna, G. (1993). Nelson-Denny Reading Test. Itasca, IL: Riverside Publishing. Burrill, L. (1987, March). How well should a high school graduate read? NAASP Bulletin, 7L. (497). Chall, J. S. (1958). Readability: An appraisal of research and application. Columbus, OH: Ohio State University. Chall, J. S. (1983). Stages of reading development. New York: McGraw-Hill Book Company. Chall, J. S. (1988). The beginning years. In Zakaluk, B. and Samuels, S. (Eds). Readability: Its past, present and future. Newark, DE: International Reading Association. Chall, J. S., & Conard, S. S. (1991). Should textbooks challenge students? The case for easier or harder textbooks. New York: Teachers College Press. Chall, J. S., Bissex, G. L., Conard, S. S., & Harris-Sharples, S. H. (1996). Qualitative assessment of text difficulty: A practical guide for teachers and writers. Cambridge, MA: Brookline Books. Chapman, A. (Ed.). (1993). Making sense: Teaching critical reading across the curriculum. New York: College Board Publications. (p. x). 136 Chase, N. D. (1984). Text processing and reader response criticism: A constructivist perspective on the reading comprehension process. (Doctoral dissertation, Emory University, 1984). Dissertation Abstracts International. 45, 09A, (1984): 2818. Chiesi, H. L., Spilich, G. J. & Voss, J. F. (1979). Acquisition of domain related information in relation to high and low domain knowledge. J ougal of Verbal Learning and Verbal Behavior, 18, 257-273. Chisman, F. P.(1990). Toward a literate America: The leadership challenge. In Forrest P. Chisman and Associates. (Ed.). Leadership for literacy: The agenda for the 19903. San Francisco: Jossey-Bass Publishers. Danielson, W. A. & Bryan, S. D. (1963). Computer automation of two readability formulas. Journalism Quarterly, 39, 201-206. Denbow, C. J. (1973). An experimental study of the effect of a repetition factor on the relationship between readability and listenability. Unpublished doctoral dissertation. Ohio University. Dooling, D. & Lachman, R. (1971). Effect of comprehension on retention of prose. Journal of Experimental Psychology, 88, 216-222. Duffy, M. M. (1989). The readability of adult health education materials (adult education). (Doctoral dissertation, University of South Carolina, 1989). Dissertation Abstracts International, 51, 04A, (1989): 1124. Dusch, K. J. (1993). Readability of diabetes education materials for elderly persons. (Master’s thesis, Duquesne University, 1993). Master’s Abstracts International, 3_2_, O2, (1993): 0593. Entin, E. B. (1980). Relationships of measures of interest, prior knowledge, and readability to comprehension of expository passages. (Doctoral dissertation, Ohio University, 1980). Dissertation Abstracts International, 41, 08B, (1980): 3214. Entin, E. B. & Klare, G. R. (1978). 
Some interrelationships of readability, cloze, and multiple—choice scores on a reading comprehension test. Journal of Reading Behavior, 10, 417-436. Entin, E. B. & Klare, G. R. (1985). Relationships of measures of interest, prior knowledge, and readability to comprehension of expository passages. In B. Hutson (Ed.). Advances in reading/language research. Volume 3. Greenwich, CT: JAI Press. Flesch, R. (1949). The art of readable writing. New York: Harper & Row. 137 Flesch, R. (1951). The art of plain talk. New York: Collier Books. Flesch Reading Ease formula. Version 7.00. In Microsoft Word for Windows 95. Fraenkel, J. R. & Wallen, N. E. (1996). How to design and evaluate research in education. (3rd ed.). New York: McGraw-Hill. Fry, E. B. (1963). Teaching faster reading. London: Cambridge University Press. Fry, E. B. (1977, Dec.). Fry’s readability graph: Clarifications, validity, and extension to level 17. Journal of Reading. 21 (3), 242-252. Fry, E. B. (1988). Writeability: The principles of writing for increased comprehension. In Zakaluk, B. and Samuels, S. (Eds). Readability: Its past, present and future. Newark, DE: International Reading Association. Funkhouser, A. R. & Macoby, N. (1971). Study on communicating science information to a lay audience, phase 2. Report NSF 92-996. National Science Foundation, Institute for Communication Research, Stanford University. Gleitman, L. R. & Rozin, P. (1977). The structure and acquisition of reading 1: Relations between orthographies and the structure of language. In Reber, A. S. and Scarborough, D. L. (Eds). Toward a psychology of reading: The proceedings of the CUNY conferences. Hillsdale, NJ: Lawrence Erlbaum Associates. Gillet, J. W. & Temple, C. (1990). Understanding reading problems. (3rd ed.). Glenview, IL: Scott, Foresman. Goodman, K. S. (1968). The psycholinguistic nature of the reading process. Detroit: Wayne State University Press. Goodman, V. M. & C. Burke. (1980). Reading strategies: Focus on comprehension. New York: Richard C. Owen Publishers, Inc. Gray, W. S. & Leary, B. E. (1935). What makes a book readable. Chicago: University of Chicago Press. Harris-Sharples, S. H. (1983). A study of the “match” between student reading ability and textbook difficulty during classroom instruction. (Doctoral dissertation, Harvard University, 1983). Dissertation Abstracts International, 44, 05A, (1983): 1400. Holloway, R. (1983). Evaluating educational materials. In Wilson, John. (Ed.). Materials for teaching adults: Selection, development, and use. New Directions for Continuing Education, No. 17 . San Francisco: Jossey-Boss, Inc. 138 Huggins, A. W. F. & Adams, M. J. (1980). Syntactic aspects of reading comprehension. In Spiro, R. J ., Bruce, B. C. & Brewer, W. F. (Eds). Theoretical issues in reading comprehension. Hillsdale, NJ: Erlbaum. Iser, W. (1978). The act of reading: A theory of aesthetic response. Baltimore: Johns Hopkins University Press. Jenkins, J. (1981). Materials for learning: How to teach adults at a distance. London: Routledge & Kegan Paul Ltd. Johnson, E. & Verma, S. (1990, Spring). Are Extension publications readable? Journal of Extension, 28 (1). Johnson, R. (1976). Elementary statistics. (2nd ed.). North Scituate MA: Duxbury Press. Johnston, P. H. (1983). Reading comprehension assessment: A cognitive basis. Newark, DE: International Reading Association. Kaestle, C. F. (1991). The history of readers. In Kaestle, C. F. (Ed.). Literacy in the United States: Readers and reading since 1880. New Haven: Yale University Press. 
Kintsch, W. (1979). On modeling comprehension. Educational Psycholggist, 14, 3-14. Kintsch, W. (1987). Contributions from cognitive psychology. In Tierney, R. J ., Anders, P. L., & Mitchell, J. N. (Eds). Understanding readers’ understanding: Theory and practice. Hillsdale, NJ: Lawrence Erlbaum Associates. Kintsch, W. E. & Vipond, D. (1979). Reading comprehension and readability in educational practice and psychological theory. In L. G. Nilsson. (Ed). Perspectives on memory research. Hillsdale, NJ: Lawrence Erlbaum Associates. pp. 329-3 65. Klare, G. R. (1974, October). Assessing readability. Reading Research Quarterly, 10 (1), 362-363. Klare, G. R. (1988). The formative years. In Zakaluk, B. and Samuels, S. (Eds). Readability: Its past, present and future. Newark, DE: International Reading Association. Klare, G. R., & Buck, B. (1954). Know your reader: The scientific approach to readability. New York: Hermitage House. 139 Klare, G. R., Mabry, J. E., & Gustafson, L. M. (1955). The relationship of style difficulty to immediate retention and to acceptability of technical material. Journal of Educational Psychology. 46. 287-295. Levin, H. & Kaplan, E. L. (1970). Grammatical structure and reading. In Levin, H. and Williams, J. P. (Eds). Basic studies on reading. New York: Basic Books, Inc. Liptak, C. (1991, Winter). Improving readability of Extension materials. Journal of Extension, 29 (4). Lorge, I. (1939). Predicting reading difficulty of selections for children. Elementam English Review, 16, 229-233. MacGinitie, W., MacGinitie, R., Maria, K. & Dreyer, L. (n.d.). Gates-MacGinitie Reading Test. (4th ed.). Itasca, IL: Riverside Publishing. Martin, J. C. (1992). A study of the required, self-perceived and assessed basic skill needs for personnel within a paper mill industry. (Doctoral dissertation, University of Arkansas, 1992). Dissertation Abstracts International, 53, 08A, (1992): 2647. Mavrogenes, N. A. (1988). Reading and parent communications: Can parents understand what schools write to them? Reading Horizons. ()OGX) (1), 5—12. McLaughlin, H. (1969). Smog grading: A new readability formula. Journal of Reading, 2, 639-46. Meyer, B. (1977). The structure of prose: Effects on learning and memory and indications for educational practice. In Anderson, R., Spiro, R., & Montague, W. (Eds). Schooling and the acquisition of knowledge. Hillsdale, NJ: Lawrence Erlbaum Associates. pp. 179-214. Microsoft Word for Windows 95. [Computer software]. (1996). Version 7.0c. Redmond WA: Microsoft Corporation. Microsoft Excel for Windows 95. [Computer software]. (1996). Version 7.0. Redmond WA: Microsoft Corporation. Miller, J. & Kintsch, W. (1980). Readability and recall of short prose passages: A theoretical analysis. Journal of Experimental Psychology: Human Learning and Memog, 6, 335-354. Misanchuk, E. R. (1994). Print tools for distance education. In Willis, B. (Ed.). Distance education: Strategies and tools. Englewood Cliffs, NJ: Educational Technology Publications. 140 Moore, M. (1991). Correspondence study. In Galbraith, M. W. (Ed.). (1991). Adult learning methods: A guide for effective instruction. Malabar, FL: Krieger Publishing Company. Moynahan, D. L. (1991). An analysis of workplace literacy requirements, management perceptions, and basic skill levels of selected stamping plant metal workers. (Doctoral dissertation, West Virginia University, 1991). Dissertation Abstracts International, 53, 01A, (1991): 0045. National Extension Tourism Database. 
Available online at http://www.msue.edu/msue/imp/modtd/mastertd.html. Nehiley, J ., and Williams, R. (1980, November/December). Targeting extension publications. Joumfi of Extension, 11. New York State Learning Standards. (1997, October). Washington - Saratoga BOCES. Palloff, R. M. & Pratt, K. (1999). Building learning communities in Cyberspace: Effective strategies for the online classroom. San Francisco: Jossey-Bass Publishers. Pearson, P. D., Hansen, J. D., & Gordon, C. (1979). The effect of background knowledge on young children’s comprehension of explicit and implicit information. Journal of Reading Behavior. 11, 201-209. Pride, J. (1987). The readability of selected textbooks and the reading abilities of freshman students at a community college. (Doctoral dissertation, The University of Mississippi, 1987). Dissertation Abstracts International, 48, 03A, (1987): 0618. Reber, A. S. & Scarborough, D. L. (Eds). (1977). Toward a psychology of reading: The proceedings of the CUNY conferences. Hillsdale, NJ: Lawrence Erlbaum Associates. Risdon, P. (1990). Writing to teach. Journal of Extension, 28 (1). Roe, B. (2002). Bums-ROE Informal Reading Inventory. (6th ed.). Itasca, IL: Riverside Publishing. Roswell, F. & Chall, J. (n.d.). Diagnostic Assessments of Reading. Itasca, IL: Riverside Publishing. Roswell, F. & Chall, J. (n.d.). Diagnostic Assessments of Reading with Trial Teaching Strategies. Itasca, IL: Riverside Publishing. 141 Rumelhart, D. E. (1977). Understanding and summarizing brief stories. In LaBerge, D. and Samuels, J. (Eds). Basic processes in reading: Perception and comprehension. Hillsdale, NJ: Lawrence Erlbaum Associates. Rumelhart, D. E. (1980). Schemata: The building blocks of cognition. In Spiro, R. J ., Bruce, B. C., & Brewer, W. F. (Eds). Theoretical issues in reading comprehension. Hillsdale, NJ: Erlbaum. Rye, J. (1982). Cloze procedure and the teachingof reading. London: Heinemann Educational Books. Schmitz, K. J. (1994). An evaluation of the readability and acceptability of maternal nutrition education materials for expanded food and nutrition education program clients. (Doctoral dissertation, Michigan State University, 1994). Dissertation Abstracts International, 55, 12B, (1994): 5291. Simeral, K. D. (2001). Keeping a traditional program-delivery method in an “E” world. Journal of Extension, 39 (4). Simpson, S. (1988). The effects of the readability levels of college textbooks on the academic performance of students enrolled at a four-year, urban institution. (Doctoral dissertation, The Texas Southern University, 1988). Dissertation Abstracts International, 50, 02A, (1988): 0374. Singer, H. (1975). The SEER technique: A non-computational procedure for quickly estimating readability level. Journal of Reading Behavior, 7, 255-267. Singh, J. (1994). Development of an alternative methodology for determining the readability of text (comprehension). (Doctoral dissertation, Virginia Commonwealth University, 1994). Dissertation Abstracts International. 5 5, 11A, (1994): 3462. Spiro, R. J. (1977). Remembering information from text: The state of schema approach. In R. C. Anderson, R. J. Spiro, & W. E. Montague (Eds). Schooling and the acquisition of knowledge. Hillsdale, NJ: Erlbaum. Stevens, K. C. (1980). The effect of background knowledge on the reading comprehension of ninth graders. Journal of Reading Behavior, 12. 151-154. Sylvester, E. L. (1981). Effects of prior knowledge and concept-building on good and poor readers’ comprehension of explicit and implicit relations. 
(Doctoral dissertation, University of Minnesota, 1981). Dissertation Abstracts InternationaL 42, 10A, (1981): 4381. 142 Taylor, B. M. (1979). Good and poor readers’ recall of familiar and unfamiliar text. Journal of Reading Behavior, 11, 375-3 80. Thomas, D. S. (1993). A workplace literacy audit. (Doctoral dissertation, University of Oregon, 1993). Dissertation Abstracts International, 54, 09A, (1993): 3316. Thompson, A. D., Simonson, M. R., & Hargrave, C. P. (1992). Educational technology: A review of the research. Washington, DC: Association for Educational Communications and Technology. Thompson, C. & Davis, P. (1984). Readability: A factor in selecting teaching materials. Illinois Teacher of Home Economics, v (4), 156-160. Thorndike, R., Hagen, E. & Sattler, J. (1986). Stanford-Binet Intelligence Scales. (4“1 ed.). Itasca, IL: Riverside Publishing. Thornton, L. J. (1981). Procedural yields in assessing readability of secondary and post- secondary carpentry literature: Curricular and occupational relationships. (Doctoral dissertation, The Pennsylvania State University, 1981). Dissertation Abstracts Internationalk42, 01A, (1981): 0189. Threlkeld, R. & Brzoska, K. (1994). Research in distance education. In Willis, B. (Ed.). Distance education: Strategies and tools. Englewood Cliffs, NJ: Educational Technology Publications. Touchstone Applied Science Associates, Inc. (n.d.). TASA Web site. Available online at http://www.tasaliteracy.com. Touchstone Applied Science Associates, Inc. (1995a). Advanced Degrees of Reading Power Test. Brewster NY: Author. Touchstone Applied Science Associates (1995b). DRP handbook: G & H test forms. Brewster NY: Author. Touchstone Applied Science Associates (2001). DRP program: The readability standard. Brewster NY: Author. Touchstone Applied Science Associates, Inc. (2002). Advanced DRP handbook: T&U test forms. Brewster NY: Author. Tuinman, J. J. (1986). Reading is recognition when reading is not reasoning. In deCastell, 8., Luke, A., & Egan, K. (Eds). Literacy, society, and schooling: A reader. Cambridge: Cambridge University Press. 143 Valencia, S. & Pearson, P. D. (1987, April). Reading assessment: time for a change. m Reading Teacher, 40 (8), p. 728. Vick, R. D. (1985). A comparison of selected stylistic features in technical and non- technical writing. (Doctoral dissertation, Drake University, 1985). Dissertation Abstracts International. 47, 05A, (1985): 1717. Vygotsky, L. (1978). Mind in society. Cole, M., John-Steiner, V., Scribner, S., & Souberman, E. (Eds. and Trans). Cambridge, MA: Harvard University Press. Washburne, C., & Morphett, M. V. (193 8). Grade placement of children’s books. Elementary School Journal, 38, 355-364. Welch, A. W. (1981). Readability of vocational horticulture instructional materials. (Doctoral dissertation, The Ohio State University, 1981). Dissertation Abstracts International, 42, 07A, (1981): 2978. Wilkinson, G. (1993). Wide Range Achievement Test. (3rd ed.). Itasca, IL: Riverside Publishing. Williams, J. (1977). Building perceptual and cognitive strategies into a reading curriculum. In Reber, A. S. and Scarborough, D. L. (Eds). Toward a psychology of reading: The proceedings of the CUNY conferences. Hillsdale, NJ: Lawrence Erlbaum Associates. Williams, J. P. (1970). From basic research on reading to educational practice. In Levin, H. and Williams, J. P. (Eds). Basic studies on reading. New York: Basic Books, Inc. Woodcock, R. (1991). Woodcock Language Proficiency Battery - Revised. Itasca, IL: Riverside Publishing. 
Woodcock, R. (1997). Woodcock Diagnostic Reading Battery. Itasca, IL: Riverside Publishing.

Woodcock, R., McGrew, K. & Werder, J. (1994). Mini-Battery of Achievement. Itasca, IL: Riverside Publishing.

Woodcock, R., McGrew, K. & Mather, N. (2001). Woodcock-Johnson III Tests of Achievement. Itasca, IL: Riverside Publishing.

Yundt, C. L. (1985). Factors relating to the understanding of college and university financial statements for nonfinancial university personnel. (Doctoral dissertation, The University of Alabama, 1985). Dissertation Abstracts International, 46, 07A, (1985): 2003.

Zakaluk, B. & Samuels, S. (Eds.). (1988a). Readability: Its past, present and future. Newark, DE: International Reading Association.

Zakaluk, B. & Samuels, S. J. (1988b). Toward a new approach to predicting text comprehensibility. In Zakaluk, B. and Samuels, S. (Eds.). Readability: Its past, present and future. Newark, DE: International Reading Association.