This is to certify that the thesis entitled PERCEPTIONS OF SECONDARY SCHOOL TEACHERS AND ADMINISTRATORS OF THE SUITABILITY OF FORMATIVE EVALUATION PROCEDURES FOR ADAPTATION IN SECONDARY SCHOOLS IN IMO STATE OF NIGERIA presented by Hyacinth Ibe Dike has been accepted towards fulfillment of the requirements for the Ph.D. degree in Educational Systems Development.


PERCEPTIONS OF SECONDARY SCHOOL TEACHERS AND ADMINISTRATORS OF THE SUITABILITY OF FORMATIVE EVALUATION PROCEDURES FOR ADAPTATION IN SECONDARY SCHOOLS IN IMO STATE OF NIGERIA

By

Hyacinth Ibe Dike

AN ABSTRACT OF A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Educational Systems Development

1981


ABSTRACT

PERCEPTIONS OF SECONDARY SCHOOL TEACHERS AND ADMINISTRATORS OF THE SUITABILITY OF FORMATIVE EVALUATION PROCEDURES FOR ADAPTATION IN SECONDARY SCHOOLS IN IMO STATE OF NIGERIA

By

Hyacinth Ibe Dike

This study was conducted primarily to determine the perceptions of secondary school principals and teachers of the suitability of adapting existing formative evaluation procedures in secondary schools in Imo State of Nigeria. Ten extant formative evaluation models were analyzed for procedures used for conducting formative evaluation. Since the introduction of formative evaluation into the educational system of Imo State of Nigeria is viewed as an instructional innovation, factors that could facilitate or hinder the adaptation of innovations were also identified through a literature review. These factors and the formative evaluation procedures formed the basis for developing the questionnaire used for this study.

Two pilot studies were conducted to validate the questionnaire. In the first, 10 Nigerian students doing their postgraduate studies at Michigan State University were used, while in the second, 3 teachers and 3 administrators in the State were randomly selected.

Forty-two secondary school administrators and 285 teachers were randomly selected and provided with questionnaires for the study. Of these 42 principals, 25 (59.5%) completed and returned their questionnaires. Of the 285 teachers, 181 (63.5%) completed and returned their questionnaires. Two assistants who received no formal training helped the researcher in collecting the completed questionnaires.

Major Findings

Most of the procedures considered essential for formative evaluation by authorities in the field were perceived as suitable by teachers and administrators in Imo State of Nigeria. Although all administrators perceived themselves as possessing selected skills for formative evaluation, only a moderate percentage of teachers perceived themselves as possessing some of these skills. Respondents identified factors that could hinder or facilitate the adaptation of formative evaluation in their school systems. As regards the three approaches for formative evaluation, an equal number of school principals preferred the Large Group and the Small Group Approaches. Only one principal preferred the Tutorial Approach.
Among the 181 teachers, 16 preferred the Tutorial Approach, 51 the Large Group Approach, while 114 preferred the Small Group Approach. Even though a low percent of teachers and administrators preferred the Large Group Approach, an interesting finding was their opting to use the interviewing and observation techniques with the Large Group Approach. Interviews and observations are characteristic of the Tutorial and the Small Group Approaches.

Based on these findings, it is recommended that the State of Imo:

1. Establish an evaluation unit in the Ministry of Education.
2. Adopt a strategy for ensuring administrative support of formative evaluation.
3. Make provision for inservice training of teachers and administrators in formative evaluation techniques.
4. Make use of the training program to raise the competence of faculty in conducting formative evaluation.

Implications for further research include:

1. A study to determine the extent to which teachers and administrators possess the necessary skills for specification of behavioral objectives and construction of valid test instruments, and the extent to which these are made manifest in their teaching.
2. A comparative study of a prototype to determine which of the three formative evaluation approaches is most suitable for secondary schools in Imo State.
3. A study to determine the minimum level of formative evaluation sufficient to improve instructional materials.


To my parents Mr. and Mrs. L.I. Dike and to my cousin Mr. V.S. Ebereonwu


ACKNOWLEDGEMENTS

I wish to express my appreciation to individuals whose contributions made this achievement a reality. First, to my parents, late Mr. Luke I. Dike and Mrs. Cathrine N. Dike, and to my cousin Mr. V.S. Ebereonwu, I am grateful for their unshakable belief and for affording me the opportunity to go to school. I am also grateful to the Federal Government of Nigeria for sponsoring me to undertake this program.

Special appreciation and heartfelt thanks go to my major professor, Dr. Castelle Gentry, for his constant guidance through my Masters and Ph.D. programmes. Without him, this dissertation could not have been completed. To the other members of my guidance committee, Dr. James Page, Dr. Richard Gardner and Dr. Felipe Korzenny, I am also appreciative of their guidance and support.

This note of appreciation will be incomplete without a mention of my sister Mrs. Maria Nnodim and my brother Eugene Dike, who shouldered the burden of my family in Nigeria while I was here in the U.S. To my in-laws and friends, Joachim Odemena and Dennis Nwachukuru, I am grateful for all their innumerable help. Finally, special gratitude is extended to my wife, Mrs. Angy Dike, for combining her own academic program with her household duties, making it possible for me to have the time to concentrate. To her and my two kids, Ike and Kelechi, I am thankful.


TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES

CHAPTER

I. STATEMENT OF THE PROBLEM
    Background of Study
    Research Evidence in Support of Formative Evaluation
        Formative Evaluation by Abedor
        Formative Evaluation by Light & Reynolds
        Formative Evaluation by Robeck
        Formative Evaluation by Montgomery & VanderMeer
        Formative Evaluation by Gropper, Lumsdaine and Shipman
    Need for the Study
    Relevance of the Study
    Generalizability of the Study
    Limitations of the Study
    Research Questions
    Definition of Terms
    Overview

II. REVIEW OF LITERATURE
    Types of Data for Formative Evaluation
    Formative Evaluation Techniques
        Prerequisites for Formative Evaluation
            Selection of Size and Sample for Formative Evaluation
                Tutorial Approach
                Large Group Approach
                Small Group Approach
            Specification of Administrative Rules
            Specification of Behavioral Objectives
            Construction of Criterion-Referenced Test Items
        Data Collecting Instruments
            Pre Test
            Post Test
            Interim Tests
            Student, Teacher and Consultant Comments
            Tryout Monitor Observation
            Tryout Monitor Interview
        Materials Revision Techniques
            Analysis of Data
            Revision of Data
            Comparison With Matched Groups
            Validation
    Factors Essential for Adaptation of Innovations
    Implications of the Review for the Present Study

III. DESIGN OF THE STUDY
    Introduction
    Research Questions
    The Population
    The Sample
    The Selection of the Sample
    Source of Data
    Development of the Questionnaire
    Pilot Testing
    Administration and Collection of Data
    Data Analysis

IV. ANALYSIS OF RESULTS
    Percentage of Responses
    Research Question 1
        Research Question 1.1
        Research Question 1.2
        Procedures Which a High Percentage of Teachers and Administrators Perceived as Suitable for Formative Evaluation
        Procedures Which a Moderate Percent of Teachers and Administrators Perceived as Suitable for Formative Evaluation
        Procedures Which a High Percent of Administrators but a Moderate Percent of Teachers Agreed With
    Research Question 2
        Skills a High Percentage of Administrators and Teachers Perceive That They Possess
        Skills a High Percentage of Administrators but a Moderate Percentage of Teachers Perceive That They Possess
    Research Question 3
        Factors Essential for the Adoption of Innovation Which Respondents Perceived as Existing in Their School Systems
        Factors Essential for the Adoption of Innovation with Which a Moderate Percent of Administrators and Teachers Agreed
        Factors Essential with Which a Moderate Percent of Administrators and a Low Percent of Teachers Agreed
        Attributes of Formative Evaluation Highly Perceived as Facilitating Its Adoption
        Attributes of Formative Evaluation a High Percent of Administrators but a Moderate Percentage of Teachers Perceived as Facilitating Its Adoption
        Factors Considered as Hindrances with Which a High Percentage of Administrators and Teachers Agreed
        Factors Considered as Hindrances with Which a Moderate Percentage of Administrators and Teachers Agreed
    Research Question 4
    Research Question 5
        Responses of Teachers and Administrators that Could Lead to Possible Modifications in the Present Educational Organization to Accommodate Formative Evaluation

V. SUMMARY, CONCLUSIONS AND RECOMMENDATIONS
    Summary of Statement of the Problem
    Limitations of the Study
    Methodology for the Study
    Findings From the Study
        Procedures Essential for Formative Evaluation Highly Agreed With by Respondents
        Procedures Essential for Formative Evaluation Which Respondents Moderately Agreed With
        Factors Perceived as Hindrances for Conducting Formative Evaluation
        Factors Perceived as Facilitators for Conducting Formative Evaluation
        Skills Administrators Highly Agreed They Possessed
        Skills Teachers Highly Agreed They Possessed
        Skills Teachers Moderately Agreed They Possessed
    Conclusions
    Recommendations
        Establishment of an Evaluation Unit in the Ministry
        Ensuring Administrative Support
        Overcoming the Problem of Time and Reward System
        Training Program for Faculty
    Implications for Future Research

APPENDICES
    A. Questionnaire for Needs Survey with Educators From Nigeria
    B. Checklist of Administrative Rules for Conducting Formative Evaluation Provided by Robert E. Horn and Allan Joseph Abedor
    C. Letter of Authority to Conduct Research From the Ministry of Education, Owerri
    D. First Questionnaire Given to Nigerians at Michigan State University
    E. Second Questionnaire Given to 3 Teachers and 3 Administrators in Imo State for the Second Pilot Study
    F. Final Questionnaire Used for the Study
    G. Question Format Used for Oral Interview
    H. Letter of Introduction by Chairman of Doctoral Committee
    I. Letter to Permanent Secretary, Ministry of Education, Imo State, Requesting for Permission
    J. Letter to the Commissioner for Education, Imo State, Requesting for Permission

BIBLIOGRAPHY


LIST OF TABLES

1.1 Comparison of Results of Formative Evaluation of Three Prototype Instructional Materials by Abedor
1.2 Comparison of Results of Formative Evaluation of Prototype Televised Instructional Material by Gropper, Lumsdaine and Shipman
2.1 Classes of Data and Specific Indicators for Formative Evaluation
2.2 Techniques Common to Formative Evaluation Models
3.1 Number of Academic Staff Selected Based on the Staff Strength of Schools
3.2 Original and Final Scales Used for Analysis of Responses
4.1 Percentage of Returned Questionnaires to the Number Distributed
4.2 Scales Used for Analysis of Results
4.3 Perceptions of Administrators of the Suitability of Formative Evaluation Procedures (N=25)
4.4 Perceptions of Teachers of the Suitability of Formative Evaluation Procedures (N=181)
4.5 Skills Administrators Perceived That They Possess for Conducting Formative Evaluation (N=25)
4.6 Skills Teachers Perceived That They Possess for Conducting Formative Evaluation
4.7 Factors Administrators Perceived Will Facilitate or Hinder Adaptation of Formative Evaluation in Their Schools
4.8 Factors Teachers Perceived Will Facilitate or Hinder Adaptation of Formative Evaluation in Their Schools
5.1 Table of Specification for Instructional Objectives and Content Areas


LIST OF FIGURES

4.1 Percentage of Teachers and Administrators Preferring Each of Three Approaches for Formative Evaluation
5.2 Zonal System of School Administration in Imo State
    Schematic Representation Showing Task Force for Evaluation and Standards in the Ministry of Education
    Schematic Representation of a Training Program


CHAPTER I

STATEMENT OF THE PROBLEM

Imo State is one of the 19 states in the Federation of Nigeria. It came into being when the former East Central State of Nigeria was split into Imo and Anambra States.

Many Educational Service Units have been established by the Imo State Ministry of Education for the selection, production, utilization and evaluation of instructional materials for use in her educational institutions. Good examples of such educational service units are Teachers Resources Centers, Audio-Visual Centers, Curriculum Development Centers and Book Development Centers.

Authorities in the field of instructional development have stressed the importance of formative evaluation for producing "high quality" instructional materials for effective instruction--Gooler,1 Sullivan,2 Wells,3 Komoski,4 Alkin and Baker,5 Abedor,6 Tennyson,7 Scriven,8 and Yelon.9 There is considerable research showing that instructional materials revised through a process of formative evaluation lead to more effective student learning than materials that have not been subjected to this process--Light and Reynolds,10 Abedor,11 Robeck,12 Montgomery and VanderMeer.13

1 Dennis D. Gooler: "Formative Evaluation Strategies for Major Instructional Development Projects." Journal of Instructional Development, Spring 1980, Vol. 3, No. 3, pp. 7-11.

2 Howard J. Sullivan: "Objectives, Evaluation and Improved Learner Achievement," in AERA Monograph Series on Curriculum Evaluation: Instructional Objectives, by W. James Popham, Elliot W. Eisner, Howard J. Sullivan and Louise L. Tyler. Chicago: Rand McNally and Company, 1969.

3 Stuart Wells: Instructional Technology in Developing Countries: Decision-Making in Education. New York: Praeger Publishers, 1976, p. 93.

4 P. Kenneth Komoski: "An Imbalance of Product Quantity and Instructional Quality: The Imperative of Empiricism." A.V. Communication Review, Vol. 22, No. 4, Winter 1974.

5 Eva L. Baker and Marvin C. Alkin: "Formative Evaluation of Instructional Development." A.V. Communication Review, Vol. 21, No. 4, Winter 1973.
6 Allan Joseph Abedor: Development and Validation of a Model Explicating the Formative Evaluation Process of Multi-Media Self Instructional Learning Systems. Ph.D. Thesis. East Lansing: Michigan State University, 1971.

7 Robert D. Tennyson: "Evaluation Technology in Instructional Development." Journal of Instructional Development, Fall 1978, Vol. 2, No. 1.

8 Michael Scriven: "The Methodology of Evaluation," as in Blaine R. Worthen and James R. Sanders, Educational Evaluation: Theory and Practice. Belmont, California: Wadsworth Publishing Company, Inc., 1973, p. 62.

9 Stephen L. Yelon: Constructive Evaluation: Improving Large Scale Instructional Projects. Lansing, Michigan: 1736 North Hayford Avenue, 1974, p. 3.

10 Judy A. Light and Larry J. Reynolds: "Debugging Product and Testing Errors: Procedures for the Formative Evaluation of an Individualized Mathematics Curriculum." Viewpoints, Bulletin of the School of Education, Indiana University, Vol. 48, No. 4, July 1972, pp. 45-78.

11 Allan Joseph Abedor, op. cit.

12 Robeck, M.D.: A Study of the Revision Process in Programmed Instruction. Unpublished Master's Thesis. University of California, Los Angeles, 1965.

13 A.W. VanderMeer and Robert Montgomery: An Investigation of the Improvement of Educational Filmstrips and a Derivation of Principles Relating to the Effectiveness of These Media. Phase III: Revision of Filmstrip "Earth's Satellite." Pennsylvania: The Pennsylvania State University, pp. 1-22.

The government of Imo State of Nigeria is not unaware of the importance of evaluation for effective instruction. In a report entitled "Federal Republic of Nigeria National Policy on Education" it is stated that:

    Government plans that progress along the educational cycle will be based on continuous over-all guidance-oriented assessment by teachers and headmasters. However, government recognizes the implication of the implementation of such a measure for teacher education and will accordingly ensure that programs for pre-service teacher education... and of in-service training in the National Teachers Institute and the Institutes of Education will incorporate training in the continuous assessment of pupils.14

14 Federal Republic of Nigeria National Policy on Education. Lagos, Nigeria: Federal Ministry of Information, 1977, p. 8.

The introduction of a "continuous...assessment" scheme will not be enough to improve the quality of education. This is because such a "continuous...assessment" scheme refers to teacher-made achievement tests used to grade students. The results of such tests are not intended to be used for improving the quality of instructional materials. For the quality of education to be high, educational programs should be formatively evaluated. The distinction between formative evaluation and achievement tests is that the former determines program adequacy while the latter only determines student proficiency or achievement.

Formative evaluation refers to the process of trying out components of prototypes of instructional materials with student(s) and, based on feedback from them, revising the developing program. This process of revision as a result of feedback continues until the quality of the instructional material is at the desired level of effectiveness and efficiency.
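This tryout-revise cycle can be pictured as a short loop. The sketch below is purely illustrative and is not drawn from the dissertation or from any particular formative evaluation model; the function arguments, the 80 percent mastery criterion and the cycle limit are hypothetical placeholders.

```python
# Illustrative sketch of the tryout-revise cycle described above.
# All names, the 0.80 criterion and the cycle limit are assumptions for the sketch.

def proportion_mastering(scores, pass_mark=0.80):
    """Share of tryout students scoring at or above the pass mark."""
    return sum(score >= pass_mark for score in scores) / len(scores)

def formative_evaluation(try_out, revise, prototype, criterion=0.80, max_cycles=5):
    """Repeat tryout and revision until the prototype reaches the criterion."""
    for cycle in range(1, max_cycles + 1):
        scores, feedback = try_out(prototype)           # post-test scores plus comments
        if proportion_mastering(scores) >= criterion:   # material judged adequate
            return prototype, cycle
        prototype = revise(prototype, feedback)         # correct identified deficiencies
    return prototype, max_cycles
```

In use, `try_out` would stand for whatever tryout is conducted with learners (tutorial, small group or large group) and `revise` for the developer's revision of the material in light of that feedback.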
The purpose of this study, therefore, is to contribute toward this "continuous...assessment" program envisaged by the Imo State Government by determining the feasibility of using extant formative evaluation models to determine procedures that can be adapted for improving the quality of instructional materials. The study also attempts to determine what factors will facilitate or hinder the adaptation of such a formative evaluation program in the secondary educational system of the State.

BACKGROUND OF STUDY

Education is regarded as the "biggest industry" in Imo State. According to the Governor of the State:

    ...It is not an industry in the commercial sense of the word. It is a consumer industry (that) has assumed a magnitude capable of swallowing every Kobo (1.87 U.S. cents) of our revenues.15

A comparative analysis of enrollment figures for primary and secondary schools in Nigeria reveals that the total enrollment for Imo State is among the highest in the whole federation.16 These high enrollment figures are accompanied by an increasing number of post-primary institutions in the State. According to the "Government White Paper on the Education Review Commission":

    Education in Imo State constitutes presently a gigantic industry with 374 post primary institutions...17

15 His Excellency, Sam O. Mbakwe: An Address... Owerri: The Government Printer, October 1980, p. 2.

16 Federal Republic of Nigeria: Implementation Committee for the National Policy on Education BLUEPRINT, 1978-79. Lagos: Federal Government Printer, p. 55 and p. 64.

17 Government White Paper on the Education Review Commission in Imo State. Owerri: The Government Printer, January 1980, p. 8.

The major reason for this large number of post-primary institutions is "active community participation in the provision of educational facilities"18 to her citizens. Almost every village in the state wants to establish her own secondary school. According to the Governor of the State:

    (This active participation) has maximized rather than minimized government financial commitment in education. This is because those schools built by communities have to be approved and operated with staff and materials provided by the government.19

Unfortunately, the last battle of the Nigerian civil war of 1967-70 was fought in Imo State. The effect of the civil war was a total "destruction of all basic infrastructural facilities required for effective learning and teaching in our primary and post primary institutions."20

To replace these "basic infrastructural facilities" that were destroyed during the Nigerian civil war and to provide such facilities to the newly established institutions, the Imo State government has established many educational service units such as Teachers Resources Centers, Audio-Visual Centers and Textbook Development Centers for the selection, production, utilization and evaluation of instructional materials for use in these institutions.

18 His Excellency, Sam O. Mbakwe, op. cit., p. 2.

19 Ibid, p. 2.

20 Ibid, p. 2.

This is in keeping with the direction of the federal government of Nigeria. According to her "Third National Development Plan, 1975-80":

    The bold program of organization and reform of the educational system envisaged during the Plan period calls for the establishment of a greater number of institutions to provide educational services for the improvement of the quality of teaching through adequate supply and maintenance of various forms of pedagogical aids and materials. The importance of such services and their impact on the educational system are such that government intends to financially assist the institutions handling them at the State level and create national institutions with wide range of operational capacity.21

This government interest in the production and selection of instructional materials can be compared with what happened in the U.S. during the 1950's and 1960's, a period that is often referred to as the "go-go years" that saw the proliferation of a myriad of instructional materials.22

Unfortunately, such a proliferation of instructional materials in the U.S. was not accompanied by a corresponding effort to determine or improve the quality of such instructional products.

21 Federal Republic of Nigeria, Third National Development Plan, 1975-80. Vol. One. Lagos: Federal Ministry of Information.

22 P. Kenneth Komoski: "An Imbalance of Product Quantity and Instructional Quality: The Imperative of Empiricism," op. cit., p. 357.
The im— portance of such services and their impact on the educational system are such that government intends to financially assist the institutions handling them at the State level and create national institutions with wide range of operational capacity. 21 This government interest in the production and selection of instructional materials can be compared with what happened in the U.S. during the 1950's and 1960's, a period that is often referred to as the "go-go years" that saw the prolifer- ation of a myriad of instructional materials.22 Unfortunately, such a proliferation of instructional materials in the U.S. was not accompanied by a corresponding 21Federal Republic Of Nigeria, Third National Develgp- ment Plan, 1975-80. Vol. One. Lagos: Federal Ministry of Information, p. 22p. Kenneth Komoski: "An Imbalance of Product Quantity and Instructional Quality: The Imperative of Empiricism" Op. cit. p.357 8 effort to determine or improve the quality of such instruc- tional products. In a paper presented to the first session of the 92nd U.S. Congress, Komoski regrets that: ...50 million school children...1earn from educational materials almost all of which have been inadequately developed and evaluated. 23 Similar views are expressed by Alkin and Baker when they said that: Substantial amounts of funds are wasted each year on the purchase and installa— tion of educational products that later prove to be inappropriate and ineffect- ive. 24 One reason that such materials are "inadequately de- veloped" or "later prove to be inappropriate and ineffective" is because their prototypes were never subjected to Formative evaluation or what Komoski calls "Learner verification"25 before they are introduced into the market. A detailed sur- vey conducted by Komoski with producers and distributors of instructional materials in U.S. revealed that: 23?. Kenneth Komoski: To establish a National Insti- tute of Education: Hearings before the Select Subcommittee on education and labor. House of Representatives, 92nd Congress. First Session. WaShington, U.S. Government Printing Office 1971, p.334. 24Eva L. Baker andMarvin C. Alkin Op.cit. p.389 25P. Kenneth Komoski: "Learner Verification: Touch- stone for Instructional Materials?" Educational Leadership February, 1974, p.397. 1. Of the more than 80,000 16 mm films catalogued by National Information Center for Educational Media (NICEM) fewer than one percent have been revised since their'ofiginal production over 15 26 years ago. 2. Under one percent of the approximately 14,000 textbooks being sold to schools have been sys- tematically shaped through learner tryout and revision process...27 Komoski points to the relationship between the gual- ity of instructional materials available to schools and learner performance. According to him: If higher quality materials are not gen- erally available, neither teachers nor their students can be expected to be held completely accountable for learning fail- ures. 28 26F. Kenneth Komoski: "An Imbalance of Product Quantity and Instructional Quality: The Imperative of Empiricism." op. cit. p.367. - 27P. Kenneth Komoski: "To Establish a National In- stitute of Education" op. cit. p.338. 289. Kenneth Komoski: Ibid, p.335. 
Alkin and Baker share this view when they said:

    To prevent such economic and educational wastes (caused by inappropriate and ineffective materials) and the negative effect it could have on the future acceptance and use of educational products and at the same time to improve the products ultimately produced, developers should engage to a greater extent in formative evaluation of all products.29

Yelon uses the term "constructive evaluation" as a synonym for formative evaluation. According to him:

    To produce major changes in the field of education, instructional developers must create and perfect large-scale instructional projects. And the best way to perfect an instructional project is to employ the process of constructive evaluation.30

The fact must be stressed that formative evaluation is not a prerogative of commercial producers of instructional materials. Formative evaluation is also essential for school teachers and administrators during their development of instructional materials for local consumption in their schools.

29 Eva L. Baker and Marvin C. Alkin, op. cit., p. 389.

30 Stephen L. Yelon, op. cit., p. 3.

Research Evidence in Support of Formative Evaluation

There is considerable research showing that instructional materials revised through a process of formative evaluation lead to more effective student learning than materials that have not been subjected to this process. Descriptions of five such studies are presented in the following sections.

Formative Evaluation by Abedor

Abedor31 developed his "MK II model" for formative evaluation--a model incorporating the small group32 techniques for human interaction. This model was used by three instructional developers at Michigan State University to formatively evaluate their multi-media instructional materials using the "before and after control group experimental design". Criteria for selecting these developers were:

1. their availability to participate in the program
2. their willingness to participate
3. they were teaching a course using a multi-media lesson which they had developed personally
4. such a prototype lesson had not been previously revised using formative evaluation
5. they were willing to use volunteer students for whom the lessons were meant
6. they all had similar background and amount of experience in multi-media lesson design but came from different academic disciplines.

31 Allan Joseph Abedor: "Second Draft Technology: Development and Field Test of a Model for Formative Evaluation of Self-Instructional Multi-Media Learning Systems." Viewpoints, Bulletin of the School of Education, Indiana University, Vol. 48, No. 4, July 1972, pp. 9-43.

32 Ibid, p. 18. (Also see Chapter II, p. 46, for a detailed discussion of the small group approach.)

Each developer solicited volunteers from his course. Final selection was based on performance on the Scholastic Aptitude Test (SAT), which was used to select students of high, medium and low abilities. Students' "pre-experimental equivalence was substantiated by comparison" of their "pre-test scores". Students were randomly assigned to a control group (N=12) and an experimental group (N=12) such that each group had equal representation of the different abilities.

Three 40-minute multi-media self-instructional prototypes were developed by faculty A, and these prototypes were designated A1, A2, and A3. Faculty B and C developed B1 and C1 prototypes.
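The balanced assignment just described--control and experimental groups of twelve, each with equal representation of high-, medium- and low-ability volunteers--can be illustrated with a short sketch. It is a hypothetical reconstruction, not Abedor's actual procedure; the function name, ability labels, group size and seed are invented.

```python
import random

# Hypothetical sketch of assignment stratified by ability, so that the control and
# experimental groups each receive equal representation of high-, medium- and
# low-ability students. Not Abedor's procedure; names and sizes are assumptions.

def assign_balanced(students_by_ability, per_group=12, seed=1):
    """students_by_ability maps 'high'/'medium'/'low' to lists of student IDs."""
    rng = random.Random(seed)
    control, experimental = [], []
    per_stratum = per_group // len(students_by_ability)   # e.g. 4 per ability level
    for students in students_by_ability.values():
        chosen = rng.sample(students, 2 * per_stratum)     # draw for both groups at once
        control.extend(chosen[:per_stratum])
        experimental.extend(chosen[per_stratum:])
    return control, experimental

# Example with invented IDs; a full per_group of 12 would require at least
# eight volunteers at each ability level.
# control, experimental = assign_balanced(volunteers, per_group=12)
```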
Each field experiment consisted of the lesson developer conducting a "tryout and debriefing" on his prototype using his control group. The responses of the control groups were used to revise the prototypes. The revised materials were next given to the experimental group. According to Abedor, on two trials, A3 and C1, responses from control groups showed the prototype materials were adequate and did not require further modification. Thus only prototypes A1, A2, and B1 were used for the final experiments.

Four dependent variables were used to assess the effect of the "MK II model":

1. Group Mean Achievement--This refers to a post test measure of achievement.

2. Gain Score--This refers to mean score differences between pre-test and post test. These were self-scoring equivalent forms developed for the formative evaluation tryout by the individual lesson developers.

3. Percentage of Students Achieving "Mastery"--This variable was used "to determine which treatment enabled a greater number of subjects to achieve a minimum acceptable level of performance, for example, 80 percent or more correct on the lesson post test".

4. Student Attitudes--An immediate post measure of student perception of lesson deficiencies and strengths, measured by a 27-item Likert-type instrument.

Feedback from the control group showed many of the achievement tests were defective. These were either deleted or completely revised. In any case, only test items common to control and experimental groups were used to test the statistical significance of differences. According to Abedor:

    In two experiments (A1 and B1) significant differences were obtained (P < .01) favoring the experimental (revised) version on all 4 dependent measures. In the third experiment (A2) a significant difference (P < .05) favoring the revised version was obtained on the post test measure only.33

Table 1.1 below shows Abedor's full results.

TABLE 1.1
COMPARISON OF RESULTS OF FORMATIVE EVALUATION OF THREE PROTOTYPE INSTRUCTIONAL MATERIALS BY ABEDOR

              Post Test    Gain Score    Percent Achieving    Student Attitude
                                         80% Criterion
Lesson A1     P < .01      P < .01       P < .05              P < .01
Lesson A2     P < .05      NSD           NSD                  NSD
Lesson B1     P < .01      P < .01       P < .01              P < .01

(After Abedor, 1972, p. 28)

33 Ibid, p. 27.

The lack of a significant difference in three of the dependent measures with lesson A2 is attributed to the fact that "two poorly exposed slides" were inadvertently used by the developer in the post test. Students guessed the correct responses in the pretest but "became confused and missed the items on the post-test thus attenuating the gain scores."34

34 Ibid, p. 29.
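Abedor's first three dependent measures reduce to straightforward arithmetic on pre- and post-test scores. The sketch below is an illustrative reconstruction, not his analysis: the scores are invented, and an independent-samples t test from SciPy (assumed to be available) merely stands in for whatever significance test he actually applied.

```python
from statistics import mean
from scipy import stats   # assumed available; used only for the illustrative t test

# Invented pre- and post-test proportions correct for an unrevised (control)
# and a revised (experimental) group; these are not Abedor's data.
control_pre, control_post = [0.42, 0.50, 0.38, 0.55], [0.61, 0.66, 0.58, 0.70]
revised_pre, revised_post = [0.44, 0.48, 0.40, 0.52], [0.82, 0.88, 0.79, 0.91]

def gain_scores(pre, post):
    """Gain score per student: post-test minus pre-test."""
    return [after - before for before, after in zip(pre, post)]

def percent_mastery(post, criterion=0.80):
    """Percentage of students at or above the mastery criterion."""
    return 100 * sum(score >= criterion for score in post) / len(post)

print("mean post test:", mean(control_post), "vs", mean(revised_post))
print("mean gain:", mean(gain_scores(control_pre, control_post)),
      "vs", mean(gain_scores(revised_pre, revised_post)))
print("percent mastery:", percent_mastery(control_post), "vs", percent_mastery(revised_post))

t, p = stats.ttest_ind(revised_post, control_post)   # post-test difference only
print(f"t = {t:.2f}, p = {p:.4f}")
```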
Revisions also involved taking a look at the pre-entry skills, in checking the conditions under which materials were used and in checking the test materials themselves. 'Light and Reynolds report that once these discrepancies were corrected, students 34Ibid, p.29 35Judy A. Light and Larry J. Reynolds, op. cit. pp. 45-78. 16 involved, "had no further trouble with these materials and the test for the rest of the year”.36 Formative Evaluation by Robeck Robeck37 revised a prototype programmed text material entitled "English money" by utilizing responses from tests and verbal responses of a single"bright" student. This re- vised version was again given to a second student for further revision. Both the first and second revised versions and the unrevised prototype were presented to equivalent types of students. Robeck found a significant difference in the per- formance of the two revised materials compared with the un- revised material (P .05). There was not much significant difference when the results of the two revised versions were compared (P .01). Formative Evaluation by Montgomery and VanderMeer VanderMeer and Montgomery38 formatively evaluated film- strip materials. The purpose of their study was to "deter- mine the extent to which systematic study of pictorial and graphic materials in filmstrips and their accompanying ver- bal captions could be translated into revisions of these 36Ibid, p.61 37Robeck, M.D. op.cit. 38A.W. VanderMeer and Robert Montgomery op cit p.1-22 17 filmstrip elements in such a way that the revised film- strips would produce significantly more learning than the original.39 Using a filmstrip material entitled "The Earth's Satellite, The Moon," "four choice multiple choice items" were written to cover the verbal and pictorial contents of the filmstrip. Theée were administered to students from grades 5 to 12. Based on their responses, it was possible to hypothesize possible causes of discrepancy and how the original filmstrip could be revised. This involved incor- porating the views of filmstrip producers. The revised filmstrip was next shown to a randomly selected sample from upper elementary to senior high school levels, under con- trolled experimental conditions. The samples took a common post test similar to those administered to the original con— trol group. A comparison of the effectiveness of both the re- vised and the unrevised filmstrips was.made in terms of mean total test scores. In grade 5, there was no significant dif- ference in the mean score achievement and no reason was sug- gested for this. However, in grade 8, there was a difference in mean score in favor of the revised versions at .025 level of significant. In grades 10, 11 and 12 the difference in favor of the experimental group was significant at .01 level of significance. 39Ibid, p.1 18 A comparison was made of the proportion passing each item in both the revised and unrevised filmstrips. In grade 5, the yield was 8 items with significant difference favoring the revised version, 16 for grade 8; and 11 for grades 10, 11 and 12. Formative Evaluation by Gropper, Lumsdaine and Shipman Gropper et al.40 used students responses to achieve- ment tests to revise televised instructional materials en- titled "The Effects of Heat" and "An introduction to chem- istry". Both lessons were integral parts of a junior high school science series presented throughout the year by Metro— politan Pittsburgh Educational Television stations. 
For each lesson, a "competent junior high school Science teacher" was charged with the task of preparing objectives for the, units after familization with the entire science series. This teacher next prepared a lesson to match his objectives and with the collaboration of TV directors the lesson was re- corded. I Each lesson was given a "preview showing" during "non peak hours" to ensure that future participants did not see the preview. Each lesson was seen by a class of 30-40 students ‘ *\ 40George L. Gropper, Arthur A. Lumsdaine and Virginia Shipman. Studies in Televised Instruction Report Number 1 Im- provement of Televised Instruction Based on Student Response to achievement tests. Pittsburgh, Pennsylvania: Metropolitan Pittsburgh Educational Television Stations WQED, WQEX and American Institute for Research, March, 1961, pp. 2-21 19 "who in terms of ability, were representative of other stu- dents who customarily viewed the science series at its reg- ularly scheduled time." An achievement test covering all aspects of the objec- tives were given to participants. Analysis of these post tests responses revealed points that were not "understood or mis— understood". This analysis involved a review of the test "item by item" and the "filmed lessons part by part" to dis- cover causes of discrepancies. These discrepancies were cor- rected in the revised version. Having done this revision, both the revised and the unrevised versions were telecast simultaneously by the two TV stations participating in the project. Six sample classes from the entire population of the 7th and 8th grade viewers of the Science series were used for the experiment. Three intact classes in three different schools watched the experi- mental lesson (revised version) while three other intact classes from still different schools served as the control. Both the experimental and control classes were matched for grade level and for I.Q. Identical achievement tests were given to both groups. The following table, 1.2, shows the results of the test scores for the experimental and control groups. From these studies, it is apparent that by subjecting instructional materials to a process of formative evaluation it is possible to discover problems in the materials and 20 TABLE 1.2 COMPARISON OF RESULTS OF FORMATIVE EVALUATION OF PROTOTYPE TELEVISED INSTRUCTIONAL MATERIAL BY GROPPER, LUMSDAINE AND SHIPMAN Differ- Mean of ences Class between Means S.D. Means t D.F. P Form A Revised version 16.0 2.1 Preview version 12.6 2.9 3.4 4.5 2 .02 Form B Revised version 14.6 3.4 Preview version 9.4 2.6 5.2 733 2 .01 (After Gropper et al., 1961 p. 12) 21 improve them for a more effective learning. It is hoped this study will yield a formative evaluation program that can be used to continuously improve the quality of instructional materials in the educational system of Imo State of Nigeria. NEED FOR THE STUDY A brief survey of the Need for Formative Evaluation of instructional materials (Appendix A) conducted with Nigerian students at Michigan State University and with some educators in Imo State of Nigeria also provided the impetus for this study. After studying a definition and description of forma- tive evaluation these students and educators were asked whether formative evaluation, as described, exists in their educational system. All those contacted responded in the negative, that such a formative evaluation program is non- ,existent in their educational systems. 
On being asked if they feel such a formative evaluation is necessary for im— proving the quality of instructional materials they all ex- pressed affirmative views. They also saw the need for for— mative evaluation data to be one of the guiding factors for the selection of instructional materials for the State. This means that before any material is selected it should be sub- jected to such questions as: for what grade level of students is it meant?; Has the material ever been revised with any mem- ber of target population?; or What are the results of this revision exercise? 22 It is planned that thisstudy will provide a framework for developing a formative evaluation program that can be taught through the "preeservice" and "in-service" training programs for teachers for quality improvement of instructional materials. Concommitant with this would be the necessary provision of support systems within the educational system where they teach. Towards this end, the study will also at- tempt to determine organizational and individual factors that may facilitate or hinder the adaptation of such a forma- tive evaluation program in the State. Relevance of the Study The Imo State Government of Nigeria allocates a sub— stantial part of her annual budget on education. The greater part of this amount is used to provide "basic infrastructural facilities" which were destroyed during the Nigerian civil war. Hence the Imo State Government, in keeping with the directives of the Federal Government, has established many educational services units like the Book Development Center, the Audio-Visual Center, the Teachers Resource Center whose duties include: the selection, design, production and evaluation of instructional materials for use in her secondary schools. However, from a needs survey, conducted with a sample of teachers and leading educators from Imo State of Nigeria, it was discovered that practial applications of formative 23 evaluation is not a means of currently used to improve the qual- ity of instructional materials and programs. Many of the educa- tors sampled indicated the importance of such formative evaluation and would support the adaptation of a program developed for its full utilization. This need is demonstrated by the works of authorities on Instructional development like Abedor and Komoski who testify to the importance of formative evalua— tion for quality improvement of instructional materials. The government of Nigeria also realizes the importance of evaluation for effective learning. Hence, she is taking steps to ensure that "continuous...assessment" of pupils is carried out by teachers and headmasters. However, it must be pointed out that such "continuous...assessment" refers to teacher—made achievement tests. Formative evaluation tests program adequacy,while achievement tests only test the stu- dent. Formative evaluation refers to the process of trying out components of prototypes of instructional materials with student(s) and based on feedback from them, revising the original program. This process of revision as a result of feedback from student(s) continues until the quality of the instructional material is at the desired level of effective- ness and efficiency. Specifically therefore, this study is important for the following reasons: 1. 
Through responses to a questionnaire it will iden- tify the perceptions of secondary school teachers' and administrators' in Imo State of Nigeria as to 24 the suitability of extant formative evaluation procedures for secondary schools in the State. 2. Through responses to a questionnaire it will de— termine the extent to which secondary school teachers' and administrators' perceive themselves as possessing some selected skills for conducting formative evaluation. 3. Through responses to a questionnaire this study will identify secondary school teachers' and ad- ministrators' perceptions of organizational and individual factors that facilitate or hinder the adaptation of formative evaluation program in secondary schools in the State. 4. It will identify what modifications (if any) of extant models of formative evaluation can be made to suit the educational needs of secondary schools in Imo State of Nigeria. 5. It will identify what modifications (if any) can be made in organizational structure of secondary educational systems to encourage the adaptation of formative evaluation in secondary schools in Imo State of Nigeria. Generalizability of the Study Education in Imo State of Nigeria is centrally con- trolled by the State Government. The Ministry of Education is an arm of State Government charged with the responsibility 25 of enunciating educational policies for the State and im- plementing Federal educational policies as these suit the needs of the State. All State educational institutions are under the con- trol of the Ministry of Education which coordinates such acti- vities as recruitment, promotion and discipline of staff and the provision of "basic infrastructural facilities" for use in the teaching-learning process. It is assumed that a study using a randomly selected sample of secondary school teachers and administrators in Imo State can be generalized to all secondary school teachers and administrators in that State. Limitations of the Study The following limitations influenced the course of this study: 1. Because of time, cost and transportation con— straints, this study was limited to a selected sample of secondary schools in Imo State of Nigeria (42 out of 210). 2. The study did not attempt to develop and forma- tively evaluate instructional materials. Rather it is interested in the suitability of procedures for formative evaluation identified from extant formative evaluation models for quality improve- ment of prototype instructional materials. 26 The study did not attempt to draw respondents from commercial producers and distributors of instructional materials. Rather it was limited to secondary school teachers and administrators. This study is designed as an exploratory study attempting to derive base line data for the dee velopment of a formative evaluation program which can be further tested for greater generalization across the target population. Research Questions: Data collected in this study were used to answer the following research questions: 1. What are the perceptions of secondary school teachers and administrators as to the suitability of formative evaluation procedures for secondary schools in the Imo State of Nigeria? To what extent do secondary school teachers and administrators pergeive that they possess some selected skills for conducting formative evalu- ation? What factors do secondary school teachers and administrators perceive will hinder or facilitate the utilization of a formative evaluation model in secondary schools in Imo State of Nigeria? 27 4. 
Based on secondary school teachers and administra- tors perceptions, what modifications (if any) of existing models of formative evaluation is neces- sary to best serve the needs of secondary educa- ltion in Imo State of Nigeria? 5. Based on secondary school teachers and administra- tors perceptions, what modifications (if any) in organizational structure of secondary educational systems should be made in order to encourage the adaptation of formative evaluation in Imo State of Nigeria? Definition of Terms The definition of some of the terms that are commonly used in this study are presented below: Formative Evaluation: This may be conceptualized as the process wherein developers of prototype instructional systems collect and analyze information for purposes of correcting system deficiencies. To operationally define this concept, techniques must be available which answer three types of questions: (a) how to identify major discrepancies in the prototype via data collection; (b) how to analyze these data and develop revision hypotheses; and (c) how to design, integrate and evaluate the revisions. Abedor?1 41Allan Joseph Abedor, "Second Draft Technology..." op. cit. p.10. 28 Prototype: An experimental or untried model for an instruc- tional system or product to be tested to determine those revisions needed to achieve the terminal objec- tives; it precedes wide-scale use of the system or product for instructional purposes.42 nggram Evaluation: This is defined as data collection and analysis for purposes of making decisions relative to program modification or termination. A program here refers to an infinite number of instructional develop- ment projects aimed at improving the efficiency and effectiveness of learning and teaching. Abedor and Gustafson.43 Instructional Material: A developed unit for instruction whose content is recorded or printed in the form of text- books, slides, audio cassettes, records, 16 mm films, lesson plans, 8 mm films, radio and television pro- grams, games and.simulations, transparencies, etc. for use in a learning-teaching situation --Komoski?4 42Association for Educational Communication and Tech- nology. Educational Technolggy, Definitign and Glossary of Terms Vol. 1. Washington, D.C. AECT, 1977 p. 228 43Allan Joseph Abedor and Kent L. Gustafson. Evalu— ating Instructional Development Programs: Two Sets of Crit- eria. Audio Visual Instruction. December 1971, p. 2. \_ 44F. Kenneth Komoski: "An Imbalance of Product Quan- ity and Instructional Quality: The Imperative of Empiricism" op. cit. p.360 29 Instructional Developer: This refers to a teacher or a member of a staff of an institution charged with the responsibility of designing and producing instructional materials.46 Secondary Education: The form of formal or classroom bound education for pupils within the ages of nine and six- teen years old which comes in between the primary and tertiary levels of education. Secondary educa- tion embraces secondary grammar school, secondary technical school and teacher training colleges. Whenever used in this study, secondary education refers to secondary grammar school. Secondary Grammar School: This refers to an aspect of secondary education that emphasizes acquisition of general knowledge without much focus on the ac- quisition of specific skills.‘ Ministry of Education: An arm of State or Federal govern- ment responsible forenunciating and executing all government policies related to all levels of edu— cation. 
Permanent Secretary: This refers to a career civil servant in charge of administration in the Ministry of Edu- cation. He sees to theicoordinationof policies and the implementation of such policies as they relate to the Ministry of Education. 46 Allan Joseph Abedor "Development and Validation of ' ' ' E t' P f REFEREE? “i3.$352533?X2..Xi‘%3as§§2.m£8§?3312. 30 Commissioner for Education: In Nigeria, the term refers. to an official appointed by the Governor of a State and charged with the responsibility of directing the educational affairs of that State. The Perman- ent Secretary reports to the Commissioner for Edu- cation. School Administrator: This refers to principals of secon- dary schools used for this study. Instructional Innovation: A planned or unplanned change aimed at introducing something new into an educa- tional system. Feedback: Any information,whether verbal or non verbal, communicated to a developer for formative evalu- ation of a prototype. Revision: Altering or modifying the elements or sequence of an instructional material as a result of feed— back from users. Overview The following format is adopted for this study. Chapter I covers the introduction and statement of the problem, the purpose of the study, the need for the study, the Relevance of the study, Generalizability of the Study, Limitations of the Study, Research Questions and Definition of Terms. 31 Chapter 2 reviews literature pertinent to this study. In Chapter 3 the procedures and methodology for the study are presented. This includes a definition of the population, a definition of the sample, the selection of sample, develop- ment and pilot testing of questionnaire and the administra- tion and collection of the questionnaire. Chapter 3 con- cludes with methods used for analyzing the study. Analysis of data is presented in Chapter 4. Chapter 5 discusses the Summary, Conclusions and Recommendations made on the basis of the findings from this study. CHAPTER II REVIEW OF LITERATURE This review is organized into three sections. Section A focuses on types of data essential for formative evaluation. Section B takes a look at various formative evaluation models for commonality of techniques. Section C considers factors that are essential for the adoption of innovations. Under this section will be discussed organizational and individual factors as well as attributes of innovations that may facil- itate or hinder their adoption. A. TYPES OF DATA FOR FORMATIVE EVALUATION Different authors have used different terminologies to describe the types of data considered essential for for- mative evaluation. In this section some of these descriptors will be examined and their implications for formative evalu- ation analyzed. The importance of identifying the various types of data has been stressed by Canningham47 who came up with a ' 47Donald J. Cunningham: "Comments on the Case Studies of Formative Evaluation—-The Sources of Information". View- points Bulletin of the School of Education, Indiana Univer- sity, Bloomington, Indiana, 1972, p. 112-113. 32 33 model suggesting three major sources of information relevant to formative evaluation. The first is what he calls "inter- nal information" or information that can be obtained about an instructional product by "mere inspection". The second is "external information" or "information concerning the effects of the product or its components on the behavior of students, teachers, parents," etc. 
The third source is the "contextual information" or information related toathe con- ditions under which the materials are expected to function. Similar categories of data have been suggested by Ellis48 and the Joint Committee on Programmed Instruction and Teaching Machines.49 Cunningham50 points out that many formative evaluators rely more on "external information" to the neglect of the "internal" or "contextual". This view is supported by Alkin and Eeker when they said: . 48Henry C. Ellis: "Judging the teaching effective- ness of Programs" Trends in Programmed Instruction (Gabriel D. Ofiesh and Wesley C. Meierhenry eds.) Washington, D;C.: National Education Association, 1964, p.207. ~ 49Joint Commission on Programmed Instruction and Teaching Machines: "Recommendations for Reporting the Ef- fectiveness of Programmed Instruction Materials." A.V. Com- munication Review. Vol. 14, No. 1, Spring 1966, p. 118-119. ~ 50Donald J. Cunningham, op.cit. p.113 34 Although there have been many recom- mendations regarding the sort of data to collect in program development, research on data gathering have centered on the consideration of cognitive achieve- ment. 51 This is understandable especially when it is realized that proponents of formative evaluation have been those in the vanguard of programmed instruction. According to Ellis52 "the most fundamental kind of data which reflects the teach- ing effectiveness of programs is some measure of gain in achievement." This is not to suggest that other categories of data are unimportant. However, when one considers the limitations for using "internal information" as put forward by Ellis, one can only hope that "future studies will reveal those internal characteristics of programs that correlate highly with desired objectives."53 A more detailed list of categories of data for for- mative evaluation provided by Paulson (1969) is paraphrased by AbedorS4 as Table 2.1 below: ' 51Eva L. Baker and Marvin C. Alkin. op. cit. pp. 394- 395. 52 . . I Henry C. Ellis, op. Cit. p. 207 S3Ibid, p. 207 o 54Allan Joseph Abedor: Development and Validation of a Model Explicating the Formative Evaluation Process of Multi- Media Self Instructional Learning System , op. cit. p.37 TABLE 2.1 35 CLASSES OF DATA AND SPECIFIC INDICATORS FOR FORMATIVE EVALUATION Classes of Data 1. Antecedent data (assess- ment of student entry behavior) Technical data (assess- ment of instructional stimuli quality) Process data (assessment of student behavior dur— ing learning experience) Learning data (assess- ment of student progress towards learning object- ive) Criterion achievement data Attitudinal data Specific Indicators Pretests, General Abilities (Standardized tests) Student comments, Technical Consultant comments Tryout monitor observations and comments Enroute responses and feed— back during lesson Post test, criterion-referenced tests Rating scale, questionnaire, stUdent comment 36 As is obvious from the above table, Paulson also presented a list of instruments for collecting each type of data. This will be the subject of discussion in Section B of this review. Alkin and Bakersgre of the opinion that the following types of data are essential for formative evaluation. 1. Learner criterion test performance 2. Learners within-program error response 3. Learners attitude towards a learning program and, 4. Implementation data during the utilization of a product. 
It can be seen from these lists that formative eva— luators have a concensus of opinion as to the types of data that are essential for formative evaluation. For in- stance, items in Paulson's and Alkin and Baker's categories are interchangeable and have their counterparts in the lists provided by Ellis, Cunningham and the Joint Commit- tee on Programmed Instruction and Teaching Machines. Identification of these various categories of data has organizational value for this study or for any other work on formative evaluation. First it ensures that im- portant categories of data are not omitted. Second, it . 55Eva L. Baker and Marvin C. Alkin, op. cit. p. 394 37 helps to identify those techniques which can be used to col- lect and analyze such data. B. FORMATIVE EVALUATION TECHNIQUES This section is intended to provide answers on types of techniques that can be used for conducting formative eval- uation. The term "technique" as used in this study refers to "those methods or procedures used to (develop) gather, analyze and report evaluation data" Goolerse. Following a review of some formative evaluation models several techniques considered essential for formative evaluation have been identi- fied. These techniques have been grouped into the following: (1) pre-requisites for formative evaluation, (2) data col- lecting instruments and (3) material revision techniques. Each of these major categories will be split into their com- ponent units for more detailed discussion. Table 2.2 presents a matrix which serves as a basis for identifying techniques that are common to all the models. 1. Pre-requisites for Formative Evaluation: These refer to those conditions a formative evaluator must ensure are satisfied for a successful formative evaluation to occur. Some of these pre-requisites refer to (a) selection of the ‘ 56Dennis D. Gooler: "Formative Evaluation Strategies for Major Instructional Development Projects". Paper de- livered as part of a symposium entitled Formative Evaluation: Issues and Applications . Annual meeting of the Association for Educational Communications and Tédhnology. New Orleans, L.A.: March 4—8, 1979, p.2. 38 TABLE 2.2 TECHNIQUES COMMON TO FORMATIVE EVALUATION MODELS Large Small Tutorial Group Group Approach Approach Approach m 'o H o e >. s. o c o «u m m ea m u E c #3:: a L..g L4 0 :4 u u m L) m n «u m 44 o I! m I: e 3 r4 2 13 n '0 —i.n .4 u c) s c c» m #1 o «a o «a m . m vi a Techniques 0 a: m :r: a n. u: > ..I a: a. Size of Sample (1) Tut- orial Approach X X X X (2) Large Group X X X X X (3) Small Group X b. Specification of Ad- ministrative Rules X X X X c. Specification of Be- havioral objectives X X X X X X X X X X d. Construction of Criterion-Referenced Items X X X X X X X X X X e. Pre Test X X X X X X X X X X f. Post Test X X X X X X X X X X g. interim Test X X X X X X h. StudentaTeachero& Con- sultant Comments X X X X X 1. Try out monitor observation X X X X X j. Tryout monitor interview X X X X X k. Analysis of data X X X X X X X X X X 1. Revision of data X X X X X X x X X X m. Comparison with matched groups X X X X X X X X X x n. Validation of data X X X X X X X X X X 39 appropriate size of sample for formative evaluation (b) spec- ification of administrative rules for formative evaluation (c) specification of behavioral objectives and (d) construc- tion of criterion—referenced test items. (a) Selection of Size of Sample for Formative Evaluation a; A formative evaluator has to decide on the number of subjects to use in his/her revision exercise. 
There are three main sizes of samples for formative evaluation, namely: (1) the use of one student at a time or the Tutorial Approach (2) the use of more than 20 students at a time or the Large Group Approach and (3) the use of 4-8 students at a time or the Small Group Approach. (1) Tutorial Approach: This involves a situation in which a tutor interacts with one subject at a time as that subject uses a prototype instructional material. This in- teraction involves observing the subject for signs of difficulty' and volunteering solutions. It also in— volves interviewing the subject to find out her problems. There are varying procedures for using the tutorial approach. Some of these are summarized by Susan Markle,57 u 57Susan M. Markle: Empirical Testing of Programs. Programmed Instruction: The Sixty—Sixth Yearbook of the National Society for the Study of Education Part II. Ted) Ehil C. Lange. Chicago, Illinois: The University of Chicago.Press, 1964, p. 40 58 59 Gilbert, Robeck Silberman et al.,60 and Horn 61 have all used the tutorial approach for formative evaluation. In applying this approach Gilbert advocates: Get yourself one student. I repeat one student. You are about to perform an experiment in which you are permitted no degrees of freedom--that is, if the word "self" in self instruction can be taken seriously. Once you have dis- covered an efficient program for one student, you will have described the gross anatomy of the most generally useful program. 62 Robeck63 used a single "bright" sixth-grade student to obtain feedback on a prototype programmed text. Revisions based on this feedback gave rise to a second draft which was 58Thomas F. Gilbert: "0n the relevance of laboratory Investigation of learning to Self-Instructional Programming": In Teaching Machines and Programmed Learning (eds) A.A. Lunsdaine and Robert Glaser Washington, Department of Audio- Visual Instruction. National Education Association, 1960, pp. 475-485. 59Robeck, M.D. Op. cit.. 60Silberman, Harry: Usecof exploratory research and W for the d_eV.elopment of pro- ' d F'nal e ort. NDEA Project, 7-14-000-181. Santa Monica, California:_Systems Development Corporation, 1964. 61Robert E. Horn: Developmental Testing Ann Arbor, Mi. Center for Programmed Learning for Business, I964. 62Thomas F. Gilbert, op. cit. p.479 63Robeck, M.D.: op. cit. among others,- 41 again given to another student and led to the production of a third draft. Both the unrevised version of the proto— type and the second and third drafts were tested on three matched groups of students. The results of the two revised versions were significantly better than the result obtained from the unrevised version. Even though Robeck made mention of the use of ex- perimental and control groups, he did not specify the sampling procedure adopted nor the processes used during the tutorial interaction to identify discrepancies. De- spite these minor shortcomings, Robeck's study does show that data from a single student can be used for prototype revision. Silberman et a1.64 developed a technique which they called "tutorial engineering," an acronym for tutorial ap— proach for formative evaluation. This technique did not differ significantly from the work of Robeck except that the method for analyzing the feedback for consistency was presented. This analysis yielded "gap", "irrelevancy" and "mastery" hypotheses about major instructional problems common to the programs. 
The "gap" hypothesis refers to a major element of the program that was ommitted; the "ir- relevancy" hypothesis points out what was included that should not have been in the program while the "mastery" 64Silberman, Harry Op. cit. 42 hypothesis stipulates the accepted version of a part of a program which a student must master before proceeding further in the program. This analysis of students' feedback for identifi- cation of discrepancies is similar to the technique used by Light and Reynolds65 to revise an elementary mathematics curriculum. According to them: Each day all tests completed during the class period are examined...for each fail— ure the question was asked, why did this student fail the test associated with these materials?. .. To locate a probable cause of failure, answers to these five questions were always sought by the evaluator: 1. What was similar about the items missed on the test? 2. How did the items missed differ from those...passed? 3. Where in the instructional materials was the content presented? 4. What in the instructional materials could have caused the test failure? 5. How can the hypothesized cause of failure be experimentally tested? It must be pointed out that Light and Reynolds used only analysis of post test data from a large group of stu- dents. However, the formulation of these "hypotheses"by Silberman et al. as an aid for identifying possible causes of discrepancies, was an advancement over the work of Robeck. 66 Horn is of the opinion that enough data can be obtained from three students to make significant revisions 65Judy A. Light and Larry J. Reynolds op cit. p.55 66 . Robert E. Horn, op. c1t. p.5. 43 in any instructional material. This is especially true if these students are carefully selected to represent the "most capable," the ”lower ability" and the "average abil- ity" students. After revision, three equivalent types of students can also be used to test out the revised materials. This is an advancement over Robeck's single "bright" stu- dent or Silberman's unclassified subjects. Horn also pro- vided "administrative procedures" for using the tutorial approach for formative evaluation (Appendix B). For those who advocate the use of the tutorial ap- proach its advantages are based on the premise that use of more than one student is considered cumbersome for a tutor. Again, the large number of students may fail to expose "in- dividual candid reactions" or the "stupid" questions which underliesa major program deficiency-Mark1e67. Some of the major disadvantages of the tutorial ap- proach are: - 1. It is expensive in terms of cost and time (Abedor)68 2. It is susceptible to bias on the part of the evaluator. 3. It may not be representative of the target pop- ulation. 67 Susan M. Markle: op.cit. p.122 68 .Allan Joseph Abedor op. cit. p.31 44 69 feels Despite these disadvantages, Susan Markle~ that the tutorial approach should form an integral part of the "developmental stages" or the "laboratory phase" of instructional development. 70 (2) The Large Group Approach: A study by Dick showed that non professional inexperienced program writers "preferred to base their revision on data from a large sample (N=4O to 50) rather than from an individual student". 
Given seven types of data that included analysis of post test, error counts, student comments, teacher comments, list of correct and incorrect answers for all test items and page number where a specific item was taught in the text, Dick showed that these non professionals preferred error rate and teacher comments for their revision of a given instructional program. This present study is dif— ferent from the work of Dick in that the respondents will not be presented with any instructional prototype to revise but rather will be requested to select an approach and tech— niques for formative evaluation as they consider suitable for their school system. 69 Susan M. Markle: op. cit. p.123 7OWalter Dick: A Methodology for the Formative Evalu- ation of Instructional Materials. Journal of Educational Measurement, Summer, 1968, Vol. 5, No. 2, pp. 99-102. 45 Paulson,71 VanderMeer et al.,?2 Light and Reynolds Schwen and Keller?4 etc. have revised instructional materials using the large group approach. Paulson defines a large group approach as using feedback from twenty or more stu- dents for the revision of prototype instructional materials?5 Light and Reynolds used the large group approach to "refine and improve on individualized mathematics curriculum em- 76 ployed in an elementary school classroom". VanderMeer et al. used responses from intact classes to revise two lPaulson Casper F. :"Evaluation of Instructional Systems" (ed) Jack Crawford: National Research Training Manual Teaching Research Division of the Oregon System of Higher Education, Monmouth, Oregon, 1969. VanderMeer, A. W., Jack Morrison; Philip Smith: An Investig_tion of the Improvement of Educational Motion Pictures and a Derivation of Principles Relatin ng to the Effectiveness of these Media. Pennsylvania: College of Education, The Pennsylvania State University, University Park, 1965, pp. 10- 17. 3 Judy A. Light and Larry J. Reynolds op cit p.45-77 7€Thomas M. Schwen and John M. Keller:'% Case Study Developing Convergent Formative Evaluation Methodology. Journal of Instructional Development. Vol. 1. No. 1. Fall 1977, pp. 31- 35. 5 Paulson Casper F. op cit p. iv-ZO 76Judy A. Light and Larry J. Reynolds op cit p.45 46 extant teaching films,77 and Gropper et a178 all used the large group approach to revise different types of in— structional materials. Some of the advantages of the large group approach as provided by Paulsofi79are: a. it is easy to obtain intact classes b. the instructional material prototype can be introduced in the class without sensitizing the students. c. using an intact class provides more data base across the class and this increases the pos— sibility of making correct decisions about in- structional deficiencies. Some of the disadvantages of the large group approach have been mentioned in the discussion on the Tutorial Approach. (3) Small Group Approach: Despite the above ad- vantages of the Large Group Approach and the Tutorial Ap- 80 proach, Abebor developed his "MK I model" for formative evaluation and submitted itth seven university and commun— ity college faculty for use in formatively evaluating their 77 ”A.W. VanderMeer and RobertMontgomery op cit, 78 ’George L. Gropper, Arthur A. Lumsdaine and Virginia Shipman, op cit ~ ,, 1 79 1 . , Paulson Casper F. op C1t p. 1v-21 30 Allan Joseph Abedor:"Second Draft Technology op.cit 47 prototype instructional materials.81 The MK I model con- sists of "technical review" by experts, "tutorial tryout" and "group tryout". 
Abedor found out that "the developers sampled were unwilling to apply the MK I procedures."82 In particular, the concept of "iterative revisions" based on data from "experts, individual students and then large groups appeared totally out of the question because of the time and resources involved." Developers were unwilling to make multi— ple revisions of the whole set of interrelated instructional materials on the basis of a single student. On the other hand, the prospect of revising using the large group approach "seemed more acceptable but posed logistical and sequencing problems..." From a review of literature on small group as "prob- lem solving agencies" Abedor felt that "a more appropriate model for formative evaluation of multi-media lessons was one in which the necessary data were collected by means of face-to-face interaction or debriefing between the lesson developer and a small group of students. The task of prob— lem identification and design of revisions could thus become a lesson developer/student group responsibility."83 8lIbid, p.13 82 p. 15. Allan Joseph Abedor: Second Draft Technology op cit. 83Ibid, p.18 48 Hence, the small group approach or the "MK II model" was developed to be used with 6-10 students during formative evaluation. (b) Specification of Administrative Rules: This involves stating clearly all the activities an evaluator has to perform prior to and during formative evalu— ation. Making these "ground rules" specific ensures that the same activities can be performed during a replicate per- formance. Not all the authors considered provided administra- tive rules for formative evaluation (Table 2.2). However, those who advocate for it, Abedor?4 Horn,85 Dick86 do so on the understanding that its absence can expose the process of formative evaluation, especially the tutorial approach to the whim and caprice of individual evaluators during their interaction with subject(s). Specimens of administrative rules by Abedor and Horn.arepresented as Appendix B. (c) Specification of Behavioral Objectives: Robert F. Mager defines an objective as: a 84Allan Joseph Abedor, "Development and Validation of a Model" op. cit. pp. 191-4. 85 Robert E. Horn; op. cit. p. 6 and p. 12. ' 86Walter Dick, op. cit. pp. 101—102. 49 A description of a performance you want learners to be able to exhibit before you consider them competent. An objective describes an intended result of instruction rather than the process of instruction itself. 87 All the formative evaluators considered in this review (Table 2.2) have testified to the importance of well specified be- havioral objectives for formative evaluation. According to Sullivan, "assessment based upon instructional objectives is a crucial part of well designed formative evaluation."88 An instructional product is developed to enable learners to acquire specific capabilities. The only way to ascertain if these learners have learned is by testing them on the in- structional objectives of the product. Herein lies the importance of clearly specified objectives. Many authors 9 90 91 such as Popham,8 Merrill and Goodman, Tyler _ have ex- plicated procedures for specifying behavioral objectives. . 87Robert F. Mager: Preparing Instructional Objectives 2nd Edition. Belmont” California: Fearon Publisher, Inc., 1962, p.25. 88Howard J. Sullivan: "Objectives, Evaluation and Im- proved Learner Achievement, " In AERA Monograph Series on Curriculum Evaluation Instructional Objectives, by W. James Popham, Elliot W. EEsner, Howard J. Smith and Louise L. Tyler. 
Chicago; Rand McNally and Company, 1969, p.82. ’ 89W. James Popham: Objectives and Instruction. In W. James Popham, Elliot W. Eisner, Howard J. Sullivan, Louise L. Tyler Instructional Objectives. Op. cit., p.32-52. ' 90David Merrill and R. Irwin Goodman. Selecting In- structional Strategies and Media: A Place to Begi . National Special Media Institutes. 1972, p. l- 196. ‘ 91Louise L. Tyler. "A Case History: Formulation of Ob- jectives from a Psychoanalytic Framework" In W. James Popham, Elliot W. Eisner, Howard J. Sullivan, Louise L. Tyler, In- structional Objectives, op. cit., p. 100-119. 50 Despite the abundance of evidence on the importance of well specified objectives for formative evaluation, the work of Margaret Ammons92 has shown that in practice some school systems rarely relied on well specified objectives to guide their educational programs. Reasons for this re- luctance to use behavioral objectives have been explicated by Eisner93. The most important of these is the philosophical disposition of schools, teachers and administrators with re- gard to their "views about the nature of education". To those who believe in the application of scientific methods of man- agement proposed by Francis Taylor at the beginning of the century, this means breaking down tasks into manageable units that could be taught and evaluated at every step of the pro- duction line. The same can also be said of the proponents of the behavioral school of psychology such as Thorndike, Watson, etc. The science of education and psychology was then evolving. Emphasis here was on empiricism. Thus, "if what education is after is a change in behavior--something you can bring about and then observe" then these behaviors should be stated in terms that could be measured. Such as- sessment of behavior gained impetus from the work of Skinner94 2Margaret Ammons:"An Empirical Study of Progress and Product in Curriculum Development.” Journal of Educational Research. Vol. 27, No. 9 pp 451-457. 1964. 93Elliot W. Eisner: "Instructional and Expressive Edu- cational Objectives: Their formulation and use in curriculum" In W. James Popham, Elliot W. Eisner, Howard J. Sullivan, Louise L. Tyler Instructional Objectives, op. cit., pp. 1-31. 94B.F. Skinner Operant Behavior. American Psychologist Vol. 18 No. 8 August 1963 pp. 503-515. 51 who showed that complex behavior can be taught through his principle of "successive approximation". This involves breaking such complex behaviors into small attainable units and providing immediate feedback all along until the complex behavior is achieved. However, the work of John Dewey95 provided some en- couragement to those who are opposed to this "mechanistic" .view of education. According to John Dewey, man's relation- ship with his environment is transactional. Man is an or- ganism who interacts with his environment. Man is not a mat— ter to be molded but an individual who brings with him needs, potentialities and experiences with which to interact with his environment. What was important educationally for Dewey was for the child to gain increasing intelligent control in planning his own education. To do this, to be a master of his own educational journey required a teacher sympathetic with the child's background and talents. Such education is one concerned neither with molding behavior through extrin- sic rewards, nor with formulating uniform quantifiable ob- jective standardsfor appraising achievement. 
This view of Dewey is echoed by many proponents of Cognitive Theory of Learning such as Brunner.96 In an effort 95John Dewey as In Elliot W. Eisner, op. cit. 96Jerome S. Bruner : Toward A Theory of Instruction Cambridge Massachusetts, Harvard University Press, 1966, p. 39-72 0 52 to accommodate those who may be reluctant to specify beu havioral objectives Eisner has proposed the term “expressive objectives" to distinguish this from "instructional (behavior) objectives". According to him: An expressive objective does not specify the behavior the student is to acquire after having engaged in one or more learning activities. An expressive objective de- scribes an educational encounter. It iden- tifies a situation in which children are to work, a problem with which they are to cope, a task in which they are to engage, but it does not specify what from that encounter situation, problem or task they are to learn. An expressive objective provides both the teacher and the student with an invitation to explore, defer or focus on issues that are of peculiar interest or import to the inquirer. An expressive objective is evocative rather than prescriptive. 97 Is formative evaluation still possible with such in- structional "situations" where objectives have not been spec- ified behaviorally? This is possible since the "educational encounters”, "situations", "problems or tasks" must be"mean- ingful" to the child to be comprehended. Through formative evaluation, it will be possible to ensure that such materials presented to students are not meaningless. (d) Construction of Criterion-Referenced Test Items This is a form of achievement measure that tends to ascertain an individual's status with respect to some criterion or performance standard. It is a test based on course objec- tives that attempts to assess how far a student has shown 97Elliot W. Eisner, op. cit. 53 maStery over these objectives--Glaser?8 Popham,99 Mehren 100 101 . . . . and Lehmann, Ebel. There is thus an 1nt1mate relation- ship between clearly specified behavioral objectives and criterion-referenced measures for these are cued to ascer- taining if these objectives are being attained. This concept of criterion-referenced measure is dis- tinguished from a second form of achievement measure known as norm-referenced measure. This aims at ascertaining an individual's performance in relationship to the performance of other individuals on the same measuring device. Glaser102 Pophaml.O3 98Robert Glaser: "Instructional Technology and the Measurement of Learning Outcomes: Some Questions." American Psychologist. 1963, Vol. 18, No. 8, p.519. 99W. James Popham: Evaluating Instruction, Englewood Cliffs, Neleersey, 1973, p.25. 100William A. Mehrens and Irvin J. Lehmann. Measure- ment and Evaluation in Education and Psyghology. 2nd ed. New York: Holt, Rinéhart and Winston, 1978, p. 48-60. 101Robert L. Ebel: Essentials of Educational Measure- ment. 3rd Ed. Englewood Cliffs, New Jersey, Prentice-Hall, Inc., p. 10. 102Robert Glaser, op. cit. p. 520. 103w, James Popham: EvaluatingInstruction. op. cit. p.25. 54 The distinction between these two forms of achieve- ment measures may not be very glaring especially when one realizes that both can be based on a given content area and well specified objectives. However, when one compares the various uses to which they are put and how their test items are constructed this confusion tends to disappear. According to Popham, both forms of tests can be used to "make decisions about individuals". 
However, there is usually a difference in the context in which each decision is made. Generally, norm-referenced measure is used when a degree of selectivity is required; for example, when there is a competition to fill a position and the best candidate is needed. It is critical in such situations therefore that the test measure permit relative comparison among individuals. On the other hand, when we are only interested in whether an individual possesses a particular competence, and there is no constraints regarding how many individuals can possess that skill, criterion-referenced measures are suitable. 104 It is this ability of criterion-referenced measure to ascertain if an individual "possesses" a particular com- petence" that renders it most suitable for its second func- tion--that of helping to determine the effectiveness of an instructional program. Thus according to Popham: 104Ibid, p.26 55 In decisions regarding treatments (pro- grams) we might design a criterion- referenced measure which reflected a set of instructional objectives sup- posedly achieved by a replicable instruc- tional sequence. By administering the' criterion-referenced measure to appropriate learners who had completed the instructional sequences, we could decide the effectiveness of the sequence. 105 Many other authors are in support of this view about the per- tinence of criterion-referenced measures to formative evalu- ation. According to Mehrens and Lehmanns: Employing the individually prescribed instruction or mastery model of learning is not the only use of criterion-referenced measures. One may also use such data to help evaluate (make decisions about) in- structional programs. In order to determine whether a specific instructional treatment or procedures have been successful, it is necessary to have data about outcomes on the specific objectives the program was designed to teach. A measure comparing students to each other (norm-referencing) may not give so effective data as a measure comparing each student's performance to the objectives. 106 The suitability of criterion-referenced measure for formative evaluation is further magnified when one considers the pro- cedures for "item construction" and "item improvement". Ac- cording to Popham: p.52. losIbid, p.26 106William A. Mehrens and Irvin J. Lehmann op cit. 56 When an individual constructs items for norm-referenced tests he tries to produce variant scores so that individual perfor- mance can be contrasted. As a consequence, he makes all sorts of concessions, sometimes subtle, sometimes obvious to promote variant scores. He disdains items which are "too easy" or "too hard". He avoids multiple choice items with few alternative responses. He tries to increase the allure of wrong answer options. He does all these to develop a test which will produce different scores for different people. Sometimes this overriding criterion may reduce the adequacy of the measurement instrument for even irrelevant factors may be incorporated on items just to produce variance. 107 Cn the other hand, the designer of criterion-referenced items is guided by a different principle. His chief purpose is to make sure the item accurately reflects the criterion behavior. Difficult or easy, discriminating or in- discriminate, the item has to represent the class of behaviors delimited by the criterion.108 Can formative evaluation be possible if teachers exhibit tremendous opposition to constructing criterion referenced tests? 
The construction of criterion—referenced measures may represent an ideal situation. Every teacher has a means of assessing if his or her class is learning what he or she intends them to know. Such-questions, whether criterion- referenced or norm-referenced, can serve a useful purpose for formative evaluation. This can serve as a starting point while teachers can gradually be led through in-service train- ing on how to construct“Criterion-referenced tests. \ 107W. James Popham: Evaluating Instruction. op. cit. p.30 loalbid, p.30 57 2. Data Collecting Instruments The following instruments are commonly used for col- lecting data for formative evaluation (e) Pretest (f) post test (9) Interim tests during a program (h) Student and consultant comments (i) Tryout monitor observation (j) Try- out monitor interview. (e) Pretest All the authors of formative evaluation models re- viewed regard the pre-test as very essential for the process (Table 2.2). According to Light and Reynolds: Valid test results are required for the operation of the curriculum. Through testing, the student is placed at an appropriate level of the curriculum his strengths and weaknesses are deter- mined for his level of the curriculum. 109 The Joint Committee on Criteria for Assessing Instructional Programs has strongly insisted on the necessity for pre- testing before formative evaluation.110 According to Susan Markle: A pretest gives a far more precise meas- ure of the students starting point than do all the achievement and aptitude scores that can be obtained. lll 109Judy A. Light and Larry J. Reynolds, op. cit. p. 48 110Joint Committee on Programmed Instruction and Teaching Machines, op. cit. p.119 111Susan M. Markle op. cit. p.128 58 (f) Post Test Equally important in formative evaluation is the use of post tests. Whether it is during the actual revision exercise or during the validation process of formative eval- uation, post tests are the most important instruments for determining the effectiveness of an instructional material for achieving stated objectives. (9) Interim Tests In addition to the above two types of test instruments many authors are of the opinion that "within program re- sponses" while using an instructional material can equally provide useful information for program revision, Alkin and Baker.112 There should be no difference between such in- terim test items and items used for post tests. In fact, ef- forts should be made to see that all test items are drawn from the same test population. The matrix sampling techni- que introduced by Popham,113 Shoemaker,114 Husek and Sirotnik115 make this possible. "Matrix sampling" or "item sampling" makes it possible for different subjects to complete different test items on a 112Eva L. Baker and Marvic C. Alkin op cit. P- 394‘396° 113Popham, op. cit. \ 114David M. Shoemaker: "Evaluating the Effectiveness of Competing Instructional Programs." Educational Research Vol. 5, No. 5, May 1972, p. 5-8. 115Husek, T.R. and Sirotnik, K. "Matrix Sampling" Eval- uation Comment. Vol. 1, No. 3, pp. 1-4, 1968. 59 given objective rather than completing identical test items. This permits the sampling of more behavior with "shorter tests" and is regarded as more "appropriate for evaluating instruc- tional sequences" than the technique that is based on the principle of “everybody gets the same items" used to make decisions about individuals.116 Popham presents an excellent illustration of how this is possible. 
Suppose an instructor has an instructional unit with 10 objectives and a pool of 10 test items for each objective. Rather than giving each student in the class this 100-item test, ten different tests could be prepared each with 10 different items. Suppose there are 20 students in the class. It is possible to randomly as- sign 2 students to a test. In the end, the tutor will ob- tain 20 different responses for each objective and thus can count on more information for the revision exercise.117 It can be seen from this exposition that what guides the selection of test items for the formative evaluation of instructional materials are the objectives which the material is supposed to help-in achieving. This also has something to do with the validity of test items used for formative eval- uation. According to Popham: 116James Popham, op. cit. p.39 117Ibid, p. 40—41 60 Criterion-referenced measures are validated primarily in terms of the adequacy.w1th which they represent the criterion. A carefully made judgement based on the test's apparent relevance to the behavior delimited by the criterion is the best procedure for validating criterion-referenced measures. Measurement experts refer to this judgement-based oper- ation as content validity. The more precisely instructional objectives can be explicated, therefore, the more accurately we can reach judgements regarding a test's content validity.118 Content validity is not the only type of validity essential for item construction but it is the most important for forma- tive evaluation. Others are "predictive validity" in which predictions made by a test are confirmed by the later be- havior of the subjects,or "construct validity" which is the extent "to which a particular test can be shown to measure a hypothethical construct" like "intelligence, anxiety, 119 These are regarded as "hypothetical con- creativity." structs" because they are not directly observable but rather are inferred on the basis of their observable effects on be- havior. According to Mehrens and Lehmann: Construct validity is the degree to which the test scores can be accounted for by certain explanatory constructs in a psycho- logical theory. 120 Herein lies the importance of obtaining data from other sources outside students used for formative evaluation. Such other sources of data have been provided in Table 2.1 of this study and techniques for collecting them are discussed below. 118Ibid,p.36 119Walter R. Borg and Meredith Damien Gall Educational ,Rgseargn, New York, Longman, Inc., 1979, p.216 120WilliamA. Mehrens and Irvin J. Lehmann op cit. p.114 61 (h) Student, Teacher and Consultant Comments In Table 2.1 (Abedor), several types of indicators were identified as being useful for the revision of prototype in- structional material. This view is supported by Cunningham,121 Alkin and Baker,122 Ellis,123 Vanderschmidt.124 Abedor125 usedihLikert-type scale to obtain additional data about at- titudes of students who participated in formative evalua- tion; Horn obtained additional data through a "dialogue" with his students. It is apparent that these types of instruments can provide additional data for formative evaluation. (i) Tryout Monitor Observation Observation is another means that can be used for for- mative evaluation. It entails observing a subject as he uses an instructional material and providing assistance whenever he or she shows any sign of confusion or difficulty. 
Most authors that use the tutorial or the Small Group Approach see this as a very valuable means for data collection. 121Donald J. Cunningham, op.cit. p.112 122Eva L. Baker and Marvin C. Alkin, op. cit. p.404 123Henry C. Ellis, op. cit. p.209 124Hannelore Vanderschmidt: "Validation Data for Programmed Tests: A Checklist for Evaluation of Testing" In Trends in Programmed Instruction, op. cit. p. 211 125Allan Joseph Abedor, "Second Draft Technology..." op. cit. p.27 62 (j) Tryout Monitor Interview Subjects used for formative evaluation can be inter- viewed to find out their attitudes towards the instructional material and to find out the appropriateness of the sequence of the content of instructional materials. Alkin and Baker,126 Susan Markle,127 Mager.128 3. Material Revision Techniques Included in this category are such subunits as (k) Analysis of Data (1) Revision of Data (m) comparison with matched groups (n) Validation of Instructional Materials. (k) Analysis of Data The various criterion-referenced test items are given to the selected sample of students after they had been exposed to the instructional material. The results are analy- zed so as to discover causes of discrepancies and to look for ways of remedying such discrepancies. Silberman et al.129; Light and Reynolds130 have provided excellent procedures for this analysis of post test results. The result of other in- struments are also analyzed and their findings incorporated for the revision exercise. 126Eva L. Baker and Marvin C. Alkin op. cit. p.404-405 127Susan Markle, op. cit. p.122-123 128Robert F. Mager: "On the Sequencing of Instructional Content". Psychological Reports. 1961, Vol. 9, pp.405-413 129 130 Harry Silberman, op. cit. p. Judy A. Light and Larry J. Reynolds op cit. p.55 63 (1) Revision of Data All the authors also agree that a revision exercise is essential in order for the formative evaluation process to be complete. However, not all of the models explicate the manner in which the results of these analyses can be inte- grated with the original material, Abedor.131 Neither is there a consensus of opinion as to the number of revisions that may take place before a material is considered effective. -This is left to the whim and caprice of an individual eval- uator. However, herein lies the importance of well spec- ified objectives and performance standards for evaluating such objectives. Such performance standards can serve as a good yard stick for knowing when to stop the revision ex- ercise. Cost is another factor that may determine the num- ber of revisions that may take place during formative evalu- ation. (m) Comparison With Matched GroupS‘ All authors agree that both the revised and unrevised instructional materials should be tested with matched groups of students using the same test instruments for purposes of finding out if there is any significant gain in performance when the two are compared. (n) Validation Validation testing in the strict sense intended by the recommendations of the Joint Committee on Criteria for 131Allan Joseph Abedor, op. cit. p.26 64 Assessing Instructional Programs should be followed by pub- lication of the results of the revised materials and not by any further revision of the program so tested. Its purpose therefore is to precisely describe to the prospective user the performance characteristics of the instructional material. Such performance characteristics should be obtained under clearly specified conditions. 
Validation data is meant to provide an answer to the question: "Who learns what under what conditions in how much time?" As such both producers and users are expected to provide a validation report about an instructional materials they produced or have used--Joint Committee on Programmed Instruction and Teaching Machines, Horn,132 C. Factors Essential for Adoption of Innovations Since the need for, and the absence of formative evalua- tion was determined in the preliminary survey, the implementation of such a formative evaluation model in the educational system of Imo State of Nigeria is viewed as an instructional innova- tion. Havelock defines innovation as. Any change which presents some- thing new to the people being changed. 133 132Robert E. Horn, op. cit.“p.2 133Ronald G. Havelock: "The Change Agent's Guide to Innovation in Education" 4th ed. Englewood Cliffs, New Jersey Educational Technology Publications. 1978, p.4. 65 An instructional innovation therefore is any novel idea in- troduced to an educational system to enhance teaching and learning. Formative evaluation will be a novel idea in the secondary school system in Imo State of Nigeria. As some- thing new, one cannot be sure it will receive general ap- proval by teachers and administrators. This is why this study includes a strategy to find out factors that may hin- der or facilitate adoption. Rogers and Shoemaker (1971) identified one such factor that could affect the rate of acceptance of innovation as the "attributes" of the innovation itself. According to them, there are five such "attributes" namely: 1. The relative advantage of the innovation compared to what it intends to replace 2. The compatibility of the innovation with the ex- isting practice 3. The complexity of the innovation 4. The trialability of the innovation in the system prior to full scale adoption . 5. The observability of the results of the innovation which shows it to be an improvement over that which it intends to replace.134 Rogers and Shoemaker feel that "individual perceptions" of these attributes can be used in predicting the rate of adop- tion of an innovation. This is why prospective users of 134Everett M. Rogers with F. Floyd Shoemaker. Com- municationpgf Innovation: A Cross-Cultural Approach. 2nd ed New York: The Free Press, 1971, pp. 138-156 66 formative evaluation have been asked to specify their per- ceptions using these "attributes" as guidelines in developing the questionnaire for this study. Authorities in innovation have also lauded the usefulness of using "the survey feedback method" to bring about speedy acceptance of an innovation. According to Huse: The survey feedback method is a standard- ized questionnaire instrument used to identify data within organizations and to have teams within the organization work on their own data to bring about planned change and development. 135 Even though such teams could not be assembled to discuss perceptions of "attributes" of formative evaluation, it is hoped that responses to the questionnaire will provide a useful data base for future studies. It is not only the characteristics of an innovation that can influence its rate of adoption. According to Evans 136 137 and Leppman and Abedor and Sachs individual attitudes, 13SEdgar F. Huse, Organizatigp Development and Changg. Los Angeles:West Publishing Co., 1975, p.164-167. 136Richard I. Evans and Peter K. Leppmann: Resistance to Innovation in Higher Education. San Francisco, Jossey- Bass, Inc., Publishers, 1968, p.16. 
137Allan Joseph Abedor and Steven G. Sachs: "The Relationship between faculty development (FD) Organizational Development (OD) and Instructional Development (ID): Readi- ness for Instructional Innovation in Higher Education" In Bass Ronald L.; Lunsden, Barry D.: and Dills Charles (eds) Instructional Development: State of the Art. Columbus Ohio: Collegiate Publishing Inc., 1978, p 7 67 values, beliefs, skills and knowledge can go a long way in determining if an instructional innovation will be accepted or not. Attitudes refer to how positive an individual feels towards self, teaching and the proposed change. Values re- fer to the amount of importance an adopter attaches to teaching and student learning while the amount of knowledge of subject matter, of innovations and teaching methods pos- sessed by an individual can help that individual in his bid to adopt an innovation. One,however, feels it is possible to find out about "personality variables" associated with innovativeness by finding out individual perceptions of the attributes of an innovation. Rogers and Shoemaker cite the work of Harp (1960) who "feels that the inclusion of per- sonality variables in analyses of innovativeness will con- tribute 1ittle." Harp is of the opinion "that if other socio- logical variables are included in investigations of innova- tiveness, the effect of personality" may disappear.138 That is, however, an empirical question that needs further analy- sis (Rogers and Shoemaker). The "attributes" of an innovation may be perceived as favorable by prospective adopters who may possess the pre-requisite skills and knowledge but if the organization is not "ready" for the innovation, it has limited chances of acceptance. Abedor and Sachs define "readiness" as "that critical combination of characteristics pre-requisite to the 138Everett M. Rogers and F. Floyd Shoemaker op.cit p. 187 139 adoption of an innovation." Organizational readiness is a variable defined as a combination of characteristics which influence the acceptance or tolerance of an innovation in the organization. ‘The following are organizational char- acteristics which appear to favor easy acceptance of in- structional innovation. 1. Structure, which allows open and free communica- tion and group problem solving 2. Rewards for teaching or related activities 3. Norms that support innovation 4. Resources to support innovation 5. Policies that permit trial of innovation.140 According to Abedor and Sachs: Unless the structure permits open and free communication, there will be resistance to the innovation because faculty are not aware of the potential benefits and have inaccurate information about it. Or, if the norms do not support innovation in general, the intro- duction of innovation will be controlled by a few senior faculty acting as gatekeepers. The existence of restrictive policies and/or lack of resources are likely to constrain acceptance of instructional innovations. Lack of rewards for teaching-related activities will probably have a negative influence on faculty who other- wise might explore instructional innovations. 141 139Allan Joseph Abedor and Steven G. Sachs, op. cit. 14°Ibid, p.8 141 Ibid, p. 8-9 69 These three elements of attributes of instructional innovation, individual factors and organizational factors are very central to the concept of "faculty renewal" as pro- posed by Jerry Gaff (1979). 
Faculty renewal is an effort to improve the quality of instruction through the introduction of innovative activities and raising the "level of readiness" of both the individual and the organization in order to en- able that innovation to flourish. These three elements have been discussed as "Organizational Development," "Instructional Development," and “Faculty Development" (Gaff,142 Bergquist and Phillips,143 Huse144 Abedor and Sachs145 Bass, et a1.146 From this review, it can be seen that these three ele- ments are closely related. .Introducing an innovation without ensuring that prospective users have favorable attitudes 142Jerry G. Gaff: Toward Faculty Renewal San Fran- cisco: Jossey-Bass Publishers, 1978. 143Bergquist, W.H. Phillips, S.R., and Quehl, G.: A Handbook for Faculty Development. Washington, D.C.: Council for the Advancement of Small Colleges, 1975. 144Edgar F. Huse, op. cit. pp. 61-82. 145Allan Joseph Abedor and Steven G. Sachs, op. cit. p. 2-5 146Bass, Ronald K.; Charles R. Dills and D. Barry Lunsden: "Instructional Development: The State of the Art" In Bass et al. (eds) Instructional Development: The State of the Art. Columbus, Ohio: Collegiate Publishing Inc., 1978 70 towards it or that the organizational climate will be suitable for its adoption will not augur well for that innovation. This is why this study has gone a step further to determine what factors will facilitate or hinder the adoption of for- mative evaluation in the secondary school system of Imo State of Nigeria. Implications of the Review on the Present Study This study attempts to determine the perceptions of secondary school teachers and administrators about the suit- ability of extant formative evaluation models to the secon- dary schools in Imo State of Nigeria. Towards this end a review of works on formative evaluation by different authors has been done. This has led to the identification of types of data considered essential for formative evaluation and the procedures for collecting and analyzing these data. It has also led to the identification of three types of approaches for formative evaluation. These elements formedthe basis for developing the questionnaire for this study. Basically-each respondent was requested to select a formative evaluation approach con- sidered most suitable for his/her school. This was followed by questions aimed at finding out their perceptions of the various procedures for their suitability for formative evalu- ation in their school as well as questions to determine the 71 extent to which they perceive themselves as possessing some skills pre-requisite for formative evaluation. Since the ultimate goal of this study is the imple- mentation of a continuous formative evaluation program in the secondary school system, respondents were requested to identify factors they perceive will facilitate or hinder the adaptation of such a formative evaluation program in their schools. It is hoped that such responses will pro- vide baseline data that can further be tested before being introduced into the secondary school system. CHAPTER III DESIGN OF THE STUDY Introduction Many educators in Imo State of Nigeria have commented on the importance of high quality materials for effective instruction. They share the view articulated by instructional 147 Brethower et a1.148 and Abedor”9 developers like Horn, that high quality materials can be attained if the prototypes are revised based on formative evaluation. 
The results of a needs survey conducted with educators in Imo State of Nigeria shows that formative evaluation is seldom conducted during instructional material development (See Appendix B). While the concept of formative evaluation is known, there is no formal operationalization of it in Imo State of Nigeria. ‘The aim of this study was to identify procedures for formative evaluation that can be appropriately adapted for use by educators of Imo State. Towards this end, a list of procedures was extracted from extant formative evaluation models. These procedures formed the basis for developing the 147Horn, op.cit. 148Dale M. Brethower: David G. Markle; Geary R. Rummler: Albert W. Schrader; Donald E.P. Smith. Programmed Learning: A Practicum. Ann Arbor, Publishers 1967 149Abedor op.cit. 72 73 final questionnaire (Appendix F) used to determine the per- ceptions of secondary school teachers and administrators of the suitability of the procedures. The study is also aimed at identifying factors that will hinder or facilitate the adaptation of formative evaluation in the secondary educa- tional system of Imo State. Research Questions Data collected in this study were used to answer the following research questions: 1. What are the perceptions of secondary school teachers and administrators as to the suitability of formative evaluation procedures for secondary schools in Imo State of Nigeria? To what extent do secondary school teachers and administrators perceive that they possess some selected skills for conducting formative evaluation? What factors do secondary school teachers and ad- ministrators perceive will hinder or facilitate the utilization of a formative evaluation model in secondary schools in Imo State of Nigeria? Based on secondary school teachers'and administra- tors'perceptions, what modification (if any) of existing models of formative evaluation is neces- sary to best serve the needs of secondary educa- tion in Imo State of Nigeria? 74 5. Based on secondary school teachers'and administra- tors'perceptions, what modifications (if any) in organizational structure of secondary educational system should be made in order to encourage the adaptation of formative evaluation in Imo State of Nigeria? The Population The population for this study comprised approximately 8,000 secondary school teachers and administrators in ap- proximately 210 secondary schools in Imo State. All these teachers and administrators are employed by the Ministry of Education which is also responsible for their promotion and discipline. All the administrators are college graduates with long years of teaching experience. Some have acquired ad- ditional professional qualifications. The teachers represent a more heterogenous population. Most of them are college graduates while others hold the National Certificate of Education--a three year program at the Advanced Teachers Colleges of Education for training professional teachers. BecausecfiFa shortageof teachers, a few of the teachers fall in the category of "auxillary teachers". These represent unqualified teachers who lack the pre-requisite qualifica- tions and experience to teach in secondary schools. ‘ 75 The Sample The sample that was studied was drawn from 42 out of the 210 secondary schools in the State. Out of the 42 schools 285 teachers and 42 school administrators were randomly sel- ected for the study. The Selection of the Sample A list of all secondary schools in Imo State was ob- tained from the Ministry of Education. 
A table of random numbers was used to select the 42 schools and the 285 teachers used for the study. Not all schools had the same number of teachers: there was a tendency for older institutions to have more and better qualified (graduate) teachers, while the reverse was the case for younger institutions. In order to have a more representative sample of all categories of teachers from all schools selected for the study, the researcher decided to randomly select a specific number of teachers from each school, based on an arbitrarily chosen ratio tied to the staff strength of the school (Table 3.1). Thus, for schools with a staff strength of 1-12, four teachers were randomly selected from each; there were sixteen such schools in the study, giving a total of 64 teachers from this category. Many of the newer schools belong to this category. For schools whose staff strength fell within the range of 13-24, seven teachers were randomly selected from each school, while for schools whose staff strength was 25 and over, 10 teachers were randomly selected from each. All the principals of the randomly selected schools were used for the study.

A letter from the Ministry (Appendix C) was used to gain access to each school. On arrival, the letter was presented to the Principal, who then directed the researcher to either the Vice Principal or the Dean of Studies. The staff registry containing the names of all teaching staff was used in the random selection of teachers. No attempt was made to select teachers on the basis of teaching experience or other qualifications.

TABLE 3.1

NUMBER OF ACADEMIC STAFF SELECTED BASED ON STAFF STRENGTH OF SCHOOLS

Academic Staff   Teachers Selected   Schools   Approx. Total    Teachers   Principals   Grand
Strength         per School          Used      # of Teachers    Used       Used         Total
1-12                    4               16           192            64          16          80
13-24                   7               13           312            91          13         104
25 and over            10               13           325           130          13         143
Total                                   42           829           285          42         327

The grand total number of respondents used for the study was 327, teachers and administrators inclusive (see Table 3.1). This represents 36.4 percent of the approximate total number of teachers and administrators making up the population from which the sample was drawn.

Source of Data

The data used in this study were collected through a questionnaire (Appendix F) responded to by the randomly selected sample of classroom teachers and administrators.

Development of the Questionnaire

In order to develop the final questionnaire for this study (Appendix F), the researcher identified procedures used by 10 authors to conduct formative evaluation (see Table 2.2). Most of the instruments used by these authors to collect data were in the form of criterion-referenced test items. Only Abedor150 developed a Likert-type scale for "debriefing," or finding out the attitudes of students during the "post-tryout interview." Horn's151 "post-tryout interview" was in the form of a "dialogue" between the "programmer" and the student to determine "special difficulties encountered in the program."

150Joseph Allan Abedor, "Second Draft Technology..." op. cit.

151Robert E. Horn, op. cit. pp. 18-19.

Light and Reynolds152 provided a checklist of questions for use in analyzing post test results for discrepancies. Dick153 provided a list of 7 items to instructional developers for them to select the ones they most preferred for formative evaluation.
It is apparent that none of these instruments wholly met the aim of this study which is to determine the percep- tion of teachers and administrators of the suitability of formative evaluation procedures for secondary schools in Imo State. But ideas were abstracted from each for the de- velopment of the questionnaire for this study. Borg and Gail point out that: The student who is planning to collect information about attitudes should first search the literature to determine whether a scale suitable for his purpose had al- ready been constructed. If a suitable scale is not available, it will be neces- sary to develop one. 154 From the analysis of these 10 models (Table 2.2) 3 major "approaches" for conducting formative evaluation were identified, namely, the Tutorial Approach, the Large Group Approach and the Small Group Approach. The characteristics, advantages and disadvantages of each approach was specified as well as data collecting instruments common to all of them. 152Judy A. Light and Larry J- Reyfi°1ds °P Cit' 956 153Walter Dick op cit. p.100 154Walter R.Borg and Meredith D. Gall op cit. p.299 \ 79 These formed the basis for developing this questionnaire. The aim is to determine the extent to which teachers and administrators perceive these approaches and the various data collecting instruments as suitable for their school systems. The questionnaire (Appendix F) is organized into five sections. In Section 1, the three approaches were presented to respondents for them to select opp considered suitable for their schools. Section 2 contains statements aimed atfjnding (nu: ghy a respondent selected a preferred approach. This means that some of the characteristics, advantages and disadvantages of the approaches formed the basis for developing this sec- tion. For instance, item 9 in section 2 is to find out'if a respondent thinks the possibility that face-to-face inter- action will yield more data about program deficiency influ- enced his/her choice of approach." Some items in this section were repeated in a different form in Section 5. For instance, item 11 in Section 2 is related to items 41 and 43 in Section 5. Analysis of these items will show if there is any con- sistency in responses. Section 3 contains statements about procedures for formative evaluation. These procedures are related to course objectives, selection of samples, the place of observation, and interview during formative evaluation and the place of test instruments. The aim is to determine the extent to which 80 respondents considered these procedures as suitable for for- mative evaluation. Section 4 contains statements aimed at determining the extent to which respondents perceived themselves as possessing some selected skills for formative evaluation. Section 5 contains statements about factors respond- ents perceive may facilitate or hinder the adaptation of formative evaluation in their school system. To generate a statement in Section 2 through 5 in the questionnaire, the researcher first of all wrote down the characteristics, advantages, disadvantages and the various procedures for formative evaluation. Statements were next generated and cued to these characteristics and procedures. Originally, personal pronouns were used to start each state- ment (See Appendix D). This personalization of the state- ments was dropped because respondents during the pilot study felt "threatened" by it. 
Borg and Gall point out that when a respondent "received a questionnaire containing threatening items," they seldom return it and when they do, little con- fidence can be placed in the accuracy of his reply because of his ego involvement in the situation.155 Each respondent was requested to rate each statement in Section 2 to 5 based on a Likert-type scale of Strongly Disagree, Disagree, Agree and Strongly Agree. Several lssIbid, p.312. 81 different procedures have been used to develop measures of attitude. Title and Hill "compared the effectiveness of these attitude scales (Likert, Guttman, Semantic Dif- ferential, Thurstone, Self-Rating) in predicting objective indices of voting behavior and found the Likert-type scale superior to all others."156 The decision to use 4 response alternatives of "Strongly Disagree, Disagree, Agree and Strongly Agree" instead of 5 such as Strongly Disagree, Disagree, Undecided, Agree, Strongly Agree as proposed in the Likert-type scale arose from analy- sis of results of the pilot instrument. None of the respon- dents checked the Undecided category. Responses to statements in Section 1-3 in the question- naire are aimed at providing answers to research questions 1 and 4 in this study. Responses to statements in Section 4 are aimed at providing answers to research question 2 while responses to statements in Section 5 provide answers to re- search questions 3 and 5 in the study. Pilot Study, Two pilot tests were conducted for this study. The first was with 10 Nigerian students doing their post gradu- ate studies at Michigan State University. These 10 Nigerians were teaching in various secondary schools in Nigeria prior 156Charles R. Title and Richard J. Hill, "Attitude Measurement and Predictions of Behavior: An Evaluation of Conditions and Measurement Techniques", Sociometry. Vol. 30 (1967): pp. 199-213. 82 to their coming to Michigan State University. While this group may not be representative of the actual population in Imo State (especially since their training at Michigan State University may have influenced their responses to the pilot questionnaire), their comments were still useful in modifying ambiguous terminology and instructions in the questionnaire. The first questionnaire used with these 10 Nigerians is pro- vided as Appendix D. Operating on the assumption that the concept of for— mative evaluation used in the questionnaire may be unfamiliar to respondents in Imo State, a letter explaining the three ap- proaches for formative evaluation, the advantages and dis- advantages of each accompanied the second questionnaire (Pages 1, 2, 3 of Appendices E and F). Since the responses of the original pilot group might possibly have been influenced by their education at Michigan State University and therefore may not provideam essential level of unbiased information for revising the questionnaire, the doctoral committee recommended that a further pilot study using teachers and administrators in Imo State of Nigeria was necessary. The second questionnaire (Appendix E) was thus further pilot tested using 3 classroom teachers and 3 admin- istrators in Imo State. The second pilot study was in the form of ah oral interview and the revised questionnaire. The researcher had prepared a question format from which questions were posed to respondents (See Appendix G). During 83 the interview, the aim of the study and the characteristics of the 3 formative evaluation approaches were explained to respondents. The questions in Appendix G were next posed to respondents. 
These questions were related to specific re- search questions in the study and were aimed at obtaining additional data to be included or used to modify the ques— tionnaire. The 6 respondents used for this second pilot study were next given the second questoinnaire (Appendix E) to com- plete. Two additional sections 6 and 7 were provided to them. In Section 6, respondents were requested to indicate any ad- ditional factor(s) that may hinder or facilitate the adapt- ation of formative evaluation by teachers and administrators in their schools while Section 7 requested them to review the questionnaire to see how it could be improved. All the respondents used in this second pilot study commented on the comprehensiveness of the questionnaire. However, this pilot test did not generate significant new facts that would warrant revising the original questionnaire. However, some of the respondents in response to Section 6 suggested that such terms as behavioral objectives and criter- ion-referenced measured be further explained in the question- naire. The revised or final questionnaire (See Appendix F) was typed into stencil and duplicate copies were produced. Some photocopies of this revised questionnaire were also produced for distribution. 84 Administration and Collection of Questionniare Prior to visiting Nigeria, the researcher posted a letter to the Permanent Secretary of the Ministry of Educa- tion in Imo State. A similar letter was also sent to the Commissioner for Education (Appendices I and J). In this letter, the researcher requested permission to use selected secondary schools in the State to conduct this research. To ensure that these letters got to their destinations, sim- ilar copies were also sent by hand through a Nigerian travel- ling_to Imo State. On getting to the Ministry of Education, the researcher presented a photocopy of this letter to the Permanent Secre- tary who gave his approval and directed one of his Chief Edu- cation officers in charge of Academic matters to be of assis- tance. After explaining the research to this officer, a letter (Appendix C) was drafted and typed into stencil. Copies of this letter were duplicated and addressed to Principals of the randomly selected secondary schools. Copies of these letters were also forwarded to Area Inspectors of Education for their attention and cooperation. _ The researcher personally distributed the questionnaires to each school. Formative evaluation was explained to members of the staff prior to the distribution of the questionnaire. Two assistants also helped the researcher to collect the com- pleted questionnaires from some selected schools. These 85 assistants received no special training for this collection exercise. In any case, the principals had been informed of this arrangement in which the assistants were to collect the questionnaires from their schools. The researcher also re- ceived co-operation from his colleagues in his undergraduate University in Nigeria, most of whom were teachers in these secondary schools. Of the 327 questionnaires distributed, 206 or about 63% were completed and returned to the researcher or to his two assistants. All of the questionnaires collected were fully completed. Data Analysis The data collected from Nigeria was hand coded by the researcher and sent to the Scoring Center, Michigan State University for key punching. The punched cards were later sent to the Computer Center at Michigan State University for analysis using the Statistical Package for the Social Sciences (SPSS). 
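Purely as an illustration of the kind of frequency-and-percentage tabulation this analysis involved (the original card layout and SPSS runs are not reproduced here, and the records and field names below are invented for the sketch), the logic can be expressed as follows:

```python
from collections import Counter

# Invented, hand-coded records: one per returned questionnaire.
# "group" distinguishes Teachers from Administrators, "approach" is the
# Section 1 choice, and "ratings" holds the 4-point response to each
# numbered statement (4 through 51) that the respondent answered.
records = [
    {"group": "Teacher", "approach": "Small Group",
     "ratings": {4: "Agree", 5: "Strongly Agree"}},
    {"group": "Administrator", "approach": "Large Group",
     "ratings": {4: "Disagree", 5: "Agree"}},
]

def tabulate(records, group, statement):
    """Frequency and percentage of each rating for one statement within one group."""
    answers = [r["ratings"][statement] for r in records
               if r["group"] == group and statement in r["ratings"]]
    counts = Counter(answers)
    total = len(answers)
    return {rating: (n, round(100.0 * n / total, 1)) for rating, n in counts.items()}

# Example: distribution of teachers' responses to statement 4.
print(tabulate(records, "Teacher", 4))
```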
This analysis was in the nature of descrip- tive statistics which was used to describe the frequency and percentage of the responses to the various statements (num- bered 4 through 51 in Sections 2 through 5 including responses to the 3 approaches in Section 1) covered in the questionnaiare (Appendix F). Data analysis was organized to provide answers to the research questions used for the study. Part of research question 1 was to find out which formative evaluation approach teachers and Administrators preferred using in their 86 schools. The frequencies and percentages of Teachers and Administrators preferring each approach were calculated. A bar graph was used to present these preferences (See Figures 4-1). Based on the type of formative eval- uation approach selected, the frequencies and percentages of teachers and administrators Strongly Disagreeing, Dis- agreeing, Agreeing or Strongly Agreeing with each statement 4 through 51 in Sections 2 through 5 of the questionnaire were also calculated. Realizing that the type of formative evaluation program selected by teachers and administrators would not be affected by the degree of disagreement or agreement to the statements in the questionnaire, the research col- lapsed the rating scales from 4 to 2 as shown in Table 3.2 TABLE 3.2 ORIGINAL AND FINAL SCALES USED FOR ANALYSIS AND RESPONSES Original Scale Final Scale Strongly Disagree DISAGREE Disagree Agree AGREE Strongly Agree 87 This means that for each approach chosen by teachers or administrators, the frequency and percentage of those who disagreed or strongly disagreed were combined into the new category of Disagree. The frequency and percentage of those who disagreed with each statement was calculated. The fre- quency and percentage of respondents (teachers and adminis- trators) opting for each approach who agreed or strongly agreed with each statement were also combined into the new category of Agree. These findings are presented as Tables 4.3 through 4.8 in this study. To interpret the results of responses, a decision rule was chosen such that any statement with which 70-100 percent of respondents agreed was viewed as highly suitable for conducting formative evaluation in their schools; any statement which 50-69% respondents agreed with was regarded as being moderately suitable for formative evaluation; any statement which only 0-49 percent of the respondents agreed with was perceived as not suitable for formative evaluation in their schools. The perception of teachers were compared with those of administrators to determine any consistency in re- sponses with regard to each statement. Responses to state- ments that are related were also compared for consistency. Finally, these responses were compared with what obtains in the literature for consistency. This formed the basis for making inferences and recommendations in Chapters 4 and 5. CHAPTER IV ANALYSIS OF RESULTS Introduction: The purpose of this study is to determine the percep- tions of secondary school teachers and administrators of the suitability of extant formative evaluation procedures for secondary schools in Imo State of Nigeria. The study also attempts to find out what factors can facilitate or hinder the adaptation of such formative evaluation procedures in the secondary school system of the State. 
In order to determine the perceptions of teachers and administrators, a review of the works of ten authors on formative evaluation was con- I ducted and procedures were identified which formed the basis for developing the questionnaire for this study. This chapter contains the analysis of responses to this questionnaire. Percentage of Responses Of the 327 questionnaires distributed for this study, 206 or 63.1 percent were completed and returned to either the researcher or his representatives. All the returned question- naires were fully completed. 88 89 The following is a distribution of the responses from the two groups -- administrators and teachers used in the study. TABLE 4.1: PERCENTAGE OF RETURNED QUESTIONNAIRES TO THE NUMBER DISTRIBUTED Number of Type of Questionnaires Number Respondents Distributed Returned Percentage Principals 42 25 59.5 Teachers 285 181 62.9 TOTALS 327 206 63.1 Out of 42 Administrators used for the study only 25 or 59.5 percent completed and returned their questionnaires. Of the 285 secondary school teachers used, only 181 or 62.9 per- cent completed and returned their questionnaires. The questionnaire was divided into five sections. Sec- tion 1 of the questionnaire contains the three approaches for conducting formative evaluation. Respondents were requested to select an approach they considered suitable for conducting formative evaluation in their schools. Based on their choice of formative evaluation approach, respondents were to rate each statement in Sections 2, 3, 4 and 5 of the questionnaire (Appendix F) in accordance with the rating scale provided. Sections 1, 2 and 3 of the questionnaire provided data for 90 research questions 1 and 4; section 4 of the questionnaire provided data for research question 2 while section 5 pro- vided data for research questions 3 and S. The following revised rating scale was used in this analysis. Rating scales of strongly disagree and disagree were combined into a new category of Disagree while rating scales of agree and strongly agree were combined into the new category of Agree (See Table 4.2). The decision to col- lapse the rating scales from 4 as in the questionnaire to 2 was based on the realization that the type of program for formative evaluation to be developed for use in Imo State of Nigeria, would not be affected by the degree of agreement or disagreement to statements in the questionnaire. For example, if a respondent agrees that she cannot construct valid test items and another strongly agrees with this statement, this will not lead to the production of 2 differ- ent programs to raise their competence. TABLE 4.2 SCALES USED FOR ANALYSIS OF RESULTS Original RatingiScale Revised Rating Scale Strongly Disagree Disagree Disagree Agree Agree Strongly Agree 91 Analysis of these responses is in the nature of de- scriptive statistics which is used to describe the frequency and percentage of the responses to the various statements covered in the questionnaire. The following decision rules were used for interpreting the percentage of the various responses. This is presented as Column 4 in Tables 4.3 through 4.8. l. Statements in the questionnaire in which 70-100 percent of respondents agreed were regarded as of high priority in their perceptions. 2. Statements in the questionnaire in which 50-69 percent of respondents agreed were regarded as of moderate priority in their perceptions. 3. Statements in the questionnaire in which 0-49 percent of respondents agreed were regarded as of low priority in their perceptions. 
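Taken together, the collapsed rating scale and these decision rules amount to a simple two-step classification. The following sketch (Python, with invented figures rather than data from the study) is only an illustration of that logic:

```python
# Collapse the four response alternatives into the two analysis categories.
COLLAPSE = {
    "Strongly Disagree": "Disagree",
    "Disagree": "Disagree",
    "Agree": "Agree",
    "Strongly Agree": "Agree",
}

def priority(responses):
    """Apply the 70/50 percent decision rules to one statement's responses."""
    collapsed = [COLLAPSE[r] for r in responses]
    pct_agree = 100.0 * collapsed.count("Agree") / len(collapsed)
    if pct_agree >= 70:
        label = "high priority"
    elif pct_agree >= 50:
        label = "moderate priority"
    else:
        label = "low priority"
    return pct_agree, label

# Invented example: 19 of 25 respondents agree or strongly agree -> 76.0 percent, high priority.
sample = ["Agree"] * 12 + ["Strongly Agree"] * 7 + ["Disagree"] * 4 + ["Strongly Disagree"] * 2
print(priority(sample))
```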
This means the percentages of respondents from the Tutorial, Large Group and Small Group Approaches who agreed or disagreed with statements 4 through 51 in the questionnaire will be summed up and provided as Column 4. For example, in Column 4 of Table 4.3, 76 percent of administrators agreed that the ease of obtaining subjects influenced their choice of formative evaluation approach. Using the decision rule, this means that a high percent of administrators agreed that it is easy to obtain subjects for formative evaluation and that this influenced their choice of formative evaluation approach. The percentage calculation for each approach is based on the number of respondents opting for that approach.

Research Question 1:

What are the perceptions of secondary school teachers and administrators of the suitability of formative evaluation procedures for secondary schools in Imo State of Nigeria?

Sub-Research Questions:

Data for research question 1 will be arranged to respond to the following sub-research questions:

1.1 What are the perceptions of secondary school teachers and administrators of the suitability of formative evaluation approaches for conducting formative evaluation in their secondary school systems?

1.2 What formative evaluation procedures do secondary school teachers and administrators consider suitable for conducting formative evaluation in their school systems?

Research Question 1.1

What are the perceptions of teachers and administrators of the suitability of formative evaluation approaches for conducting formative evaluation in their secondary school systems?

The bar graph (Figure 4.1) depicts the percentage of teachers and administrators preferring each of three formative evaluation approaches as perceived suitable for their school systems.

FIGURE 4.1 Percentage of Teachers and Administrators Preferring Each of 3 Approaches for Formative Evaluation. [Bar graph comparing administrators and teachers across the Tutorial, Large Group and Small Group Approaches; the plotted bars are not reproducible here.]

Using the decision rules for interpreting the results of this study, it can be said that a moderate percent of teachers (63 percent) preferred the Small Group Approach, while a low percent preferred the Large Group Approach (28.2 percent) or the Tutorial Approach (8.8 percent). On the other hand, a low percent of administrators preferred the Small Group Approach (48 percent) and the Large Group Approach (48 percent), while only 4 percent preferred the Tutorial Approach.

Research Question 1.2

What formative evaluation procedures do secondary school teachers and administrators consider suitable for conducting formative evaluation in their school systems?

Tables 4.3 and 4.4 present the responses of secondary school teachers and administrators on statements regarding the suitability of the various procedures for formative evaluation. These secondary school teachers and administrators have been grouped in accordance with the type of formative evaluation approach preferred. Column 4 of Tables 4.3 and 4.4 presents the priority rating of each statement based on the decision rules for interpreting a response.
This priority rating scale depicts the summation of the percentages of administrators from each approach agreeing or disagreeing with the statements used for answering research question 1.2. For example, in statement 4 of Table 4.3, 76 percent of administrators agreed that the ease of obtaining subjects influenced their choice of formative evaluation approach. Using the decision rule, this means that a high percent of administrators agreed that it is easy to obtain subjects for formative evaluation and that this influenced their choice of formative evaluation approach.

TABLE 4.3 PERCEPTIONS OF ADMINISTRATORS OF THE SUITABILITY OF FORMATIVE EVALUATION PROCEDURES (N=25). [The table lists statements 4 through 30 on procedures (influences on choice of approach, behavioral objectives, selection of students, observation and interviewing, pretesting, interim quizzes and analysis of post-test results), with the frequencies and percentages of administrators in each preferred approach who agreed or disagreed with each statement and the final percentage rating in Column 4; the detailed figures are not legible in the source.]

TABLE 4.4 PERCEPTIONS OF TEACHERS OF THE SUITABILITY OF FORMATIVE EVALUATION PROCEDURES (N=181). [The table presents the same statements 4 through 30 for teachers, grouped by preferred approach, with the final percentage rating in Column 4; the detailed figures are not legible in the source.]
Below is a summary of perceptions of teachers and administrators as to the suitability of formative evaluation procedures.

I. Procedures Which a High Percentage of Teachers and Administrators Perceived as Suitable for Formative Evaluation

(a) Specification of Behavioral Objectives: 80 percent of administrators and 96 percent of teachers perceived well specified behavioral objectives as very essential for formative evaluation.

(b) Selection of Students of Varying Abilities: 96 percent of administrators and 90 percent of teachers favored selecting students of varying abilities for formative evaluation.

(c) Observation and Interviewing of Students: 100 percent of administrators and 93 percent of teachers agreed with the need to observe and interview students during formative evaluation.

(d) Comment on Clarity of Written Instruction by Students During Interviews: 96 percent of administrators and 85 percent of teachers favored the use of student comments on the clarity of written instructions during formative evaluation.

(e) Comment on Clarity of Instructional Illustrations by Students: 92 percent of administrators and 82 percent of teachers agreed that student comments during interview on the clarity of instructional illustrations can yield useful data for formative evaluation.

(f) Identification of Difficult Terms by Students: 84 percent of administrators and 88 percent of teachers agreed on the importance of students encircling difficult terms they do not understand during formative evaluation.

(g) Observation of Student Facility in Using Instructional Equipment: 88 percent of administrators and 96 percent of teachers agreed that students should be observed for problems while using instructional equipment.

(h) Collection of Entry Behavior on Students: 76 percent of administrators and 75 percent of teachers favored the pretesting of students prior to formative evaluation.

(i) Collection of Interim Test Data on Student Learning: 100 percent of administrators and 93 percent of teachers favored using interim tests during formative evaluation.

(j) Use of Post Test Scores: 96 percent of administrators and 88 percent of teachers favored the use of post test scores for formative evaluation. In addition, 96 percent of administrators and 85 percent of teachers favored analysis of post test scores for identifying what was similar about items missed; 100 percent of administrators and 88 percent of teachers favored identifying how items missed differed from those passed; and 100 percent of administrators and 82 percent of teachers favored analyzing instructional materials for what could have caused the failure.

II. Procedures Which a Moderate Percent of Administrators and Teachers Perceived as Suitable for Formative Evaluation

(a) Collection of Student Comments on How Boring/Involving an Instructional Material Is: 64 percent of administrators and 60 percent of teachers agreed with this procedure.

III. Procedures Which a High Percent of Administrators but a Moderate Percent of Teachers Agreed With

(a) Collection of Student Comments on the Appropriateness of the Sequence of Instructional Materials: 80 percent of administrators and 57 percent of teachers agreed on the importance of using such comments for formative evaluation.
(b) Observing Students for Frowns on Their Faces While Using a Material: 76 percent of administrators and 67 percent of teachers agreed on observing students for frowns on their faces as a source of data for formative evaluation.

Research Question 2

To what extent do secondary school administrators perceive themselves as possessing some selected skills for conducting formative evaluation?

Tables 4.5 and 4.6 present the frequency distribution and the percentages of teachers and administrators agreeing or disagreeing as to whether they perceived themselves as possessing some selected skills for formative evaluation. These results are summarized under the following headings:

1. Skills a high percentage of teachers and administrators perceive that they possess for formative evaluation.

2. Skills a high percentage of administrators but a moderate percentage of teachers perceive they possess.

1. Skills A High Percentage of Administrators and Teachers Perceive That They Possess

(a) 80 percent of administrators and 73 percent of teachers perceived themselves as possessing the skills for specification of behavioral objectives.

2. Skills A High Percentage of Administrators But A Moderate Percentage of Teachers Perceive That They Possess

(a) 100 percent of administrators but 68 percent of teachers perceived themselves as possessing the skills for constructing valid criterion test instruments.

[Tables 4.5 and the tables that follow, reporting these frequencies and percentages and the factors respondents perceived would facilitate or hinder the adaptation of formative evaluation, together with the intervening pages up to and including Table 5.1, TABLE FOR SPECIFICATION OF INSTRUCTIONAL OBJECTIVES AND CONTENT AREAS, are not legible in the source; the recoverable text resumes mid-sentence below.]

... the container. Another technique that can be used is the stratified sampling technique. This technique is useful when it is essential to select a certain proportion of sub-groups in the population in relation to their number in that population.

(4) Interviews and Observation

The essence of this training is to enable the interviewer to avoid "inadvertent teaching" during the interview. Horn172 provides an illustration of "inadvertent teaching" during formative evaluation. The following heuristics are also pertinent as a guide during an interview:

1. Do not provide answers that will discourage students from making future comments about a program.

2. Establish an informal atmosphere that will create a cooperative attitude on the part of students.

3. Orientate students on the importance of their information and solicit that they be as frank as possible, without any inhibition.
4. Ensure that a conducive environment is available for the interview; especially ensure that a replica of the environment in which the instructional material will be used is available.

As regards observation of subjects, an evaluator ought to position herself where a subject will not be aware he is being observed. The evaluator should note down what was observed and use this information for discussion during the interviews.

172 Robert E. Horn, op. cit., p. 2.

The first section of this training program had been concerned with the organizational aspects. This second section is concerned with the actual carrying out of formative evaluation by selected faculty. Figure 5.2 is a schematic representation, or flow chart model, of a program for this actual practice of formative evaluation. It consists of processes and decision points. Each process is an activity, while emanating from each decision point are alternative activities that will lead to effective formative evaluation. The discussion of Figure 5.2 that follows is based on the numbering in the rectangular blocks used in the flow chart.

1. Enter. This signals the commencement of the formative evaluation process.

2. Faculty are selected for the formative evaluation training program.

3. This decision point is aimed at finding out those faculty members who view as essential the specification of course objectives in behavioral terms.

4. Those faculty members who are opposed to the specification of behavioral objectives are encouraged to state goals/aims of their instruction instead.

5. This decision point is to find out if those faculty members who support the specification of behavioral objectives have the necessary writing skills.

Figure 5.2 Schematic Representation of a Training Program for Formative Evaluation. [The flow chart spans several pages and its box-by-box layout is not reproducible here. Its blocks cover selection of faculty for training; decision points on specifying behavioral objectives or goals/aims and on writing and test-construction skills, each followed by a workshop where skills are lacking; generation of test items; training in the formative evaluation approaches; selection of an approach and of subjects; tryout and observation of the prototype; presentation of test items; provision of assistance; revision; and tryout of the revised material with matched groups.]

6. For those who lack the skills, a workshop is organized to provide them with this training.

7. This decision point is to find out if faculty have the necessary skills for specifying goals/aims.

8. For those who lack the skills, a workshop is organized to provide them with this training.

9. This decision point is to determine if faculty members have the skills for constructing criterion and norm referenced test items.

10. For those who cannot, a workshop is also organized to provide them with this training.

11. Faculty members now generate Pre, Post and interim test items for their prototype.
12. This decision point is to determine if test items are congruent with the aims/objectives of the instruction.

13. If there is lack of congruence, faculty members are provided with assistance.

14. This decision point is to find out if faculty members possess the skills for random selection of subjects and for observing and interviewing subjects. If faculty members can perform these tasks, they are ready for the next phase of the training, which is learning about formative evaluation approaches.

15. If faculty members lack these skills, a workshop is provided to raise their competence.

16. Faculty members are exposed to the 3 approaches for formative evaluation. This consists of presenting them with works of authors who had used the various approaches for formative evaluation. The characteristics, advantages, and disadvantages of each approach will be elaborated.

The remaining blocks (17 through 36) take faculty members through the tryout itself. Faculty members are now ready to try out what they have learned in a real situation; an off-page connector symbol links this point to the next page of the chart. Faculty members now select a formative evaluation approach, and subjects to be used for formative evaluation are also selected. Faculty members match course objectives and test items with specific conditions for a formative evaluation approach. A decision point determines whether the conditions for use of the prototype are congruent with the conditions for the selected formative evaluation approach; if there is no congruence, this is checked and rectified, and if there is congruence, the prototype is presented. Subjects are observed as they use the prototype, and decision points determine whether subjects exhibit any sign of difficulty while using the prototype and whether they request assistance as a result of difficulty encountered. If subjects request assistance, it is provided without "inadvertently teaching" them; if a subject does not signal for assistance, the faculty member takes note of the subject and notes down what was observed. After a further off-page connector, test items are presented to the subjects, who are also observed as they use them. Decision points then establish whether subjects are encountering any difficulty and whether they are in need of assistance; assistance requested is again provided without "inadvertently teaching" them, and where no assistance is signalled, note is taken of what was observed. The prototype material and responses to the test items are then collected and analyzed, the faculty member interviews the subjects, and additional information is gathered from subject matter experts. Instructional materials are revised using feedback from the interviews, post test scores and the experts. Faculty members then randomly select matched groups (blocks 37-38) for trying the unrevised prototype and the revised instructional material, and the faculty member writes a validation report (block 39) specifying the nature of the formative evaluation that was conducted and the results.

Implications for Further Research

This study has concentrated on determining the perceptions of secondary school teachers and administrators of the suitability of formative evaluation procedures for adaptation into their schools in Imo State of Nigeria. It must be stressed that one's perceptions of matters that are attitudinal may vary with circumstances.
Thus, even though administrators and teachers used 'for this study perceived the various procedures for con- ducting formative evaluation as suitable for their school systems, the following additional research are considered essential in order to find out if there is any congruence between what is perceived and what obtains in practice. 1. A study to determine the extent to which teachers and administrators possess the necessary skills for specification of objectives and construction of valid test instruments and the extent to which these are made manifest in their teaching. A network analysis to identify channels of com- munication, methods of dissemination of informa- tion and causes of problems associated with flow of information related to formative evaluation in the educational system of Imo State. A study to identify successful and unsuccessful application of formative evaluation in_the secondary educational system of Imo State of Nigeria, their originatbrs, characteristics and consequences. 10. 145 A study to determine evaluation strategies and reward systems operating in secondary school systems in Imo State. A comparative study of three prototypes to deter- mine which of the 3 formative evaluation approach- es is most suitable for secondary schools in Imo State. A study to compare the effectiveness of instruc- tional materials formatively evaluated using feedback from "experts" and target users. A study to determine the minimum level of forma- tive evaluation sufficient to improve instruc- tional materials. A study to validate a formative evaluation training program developed for use by adminis- trators and teachers in secondary schools in Imo State of Nigeria. A study to determine the extent to which secon- dary schools in Imo State of Nigeria use forma- tive evaluation information in their selection of commercially produced instructional materials. A study to determine the extent to which com- mercial producers conduct formative evaluation in the process of producing instructional materials. APPENDICES APPENDIX A 146 Division of Educational Systems Design College of Education Michigan State University East Lansing, Michigan 48824 May 27, 1980 Dear Educator: LETTER TO EDUCATORS ON THE NEED FOR FORMATIVE EVALUATION I am a Nigerian studying for my Ph.D. in Educational Systems Development (Educational Technology) at Michigan State University. After graduating from the University of Nigeria, Nsukka in 1972, I taught Biology at Federal Government College in Maiduguri; Kano Teachers' College, Kano and worked as an education officer in the UNESCO division of the Federal Min- istry of Education, Lagos. I am conducting a survey research for my Ph.D. disser- tation. My topic of interest is: Perceptions of secondary school teachers and school administrators of the suitability of formative evaluation procedures for adoption in Imo State of Nigeria. By formative evaluation I mean the procedure of giving a prototype instructional material to a student or a class of students, testing them to find out about their performance, interviewing them to find out about their difficulties while using the prototype instructional materials and based on feed- back from this student or group of students, correcting and revising the prototype material until its quality is improved to the desired level of effectiveness. The purpose of this letter is two-fold and are embodied in the three statements below. 
You can help me to fulfill this purpose by rating the statements according to the following scale: 1 stands for Strongly Disagree; 2 stands for Disagre ; 3 stands for Agree; 4 stands for Strongly Agree. 1. Formative evaluation as defined above exists in my educa- tional system 147 Page 2 2. Formative evaluation is necessary for improving the qual- ity of instructional materials. 4 SA 1 SD Ute new 3. Information on formative evaluation of an instructional material should guide the selection of such materials 3 4 SA 1 SD UN 3’ This brief survey will help me ascertain if the need exists for this dissertation. Your prompt response to the above statements will therefore be highly appreciated. A self-addressed and stamped envelope has been enclosed to expedite your return of your completed response. Thanks for your.co-operation. Your Sinc Hya int I. APPENDIX B 148 - CHECKLIST FOR THL FIRST TRYOUT SESSION READ THE CHECKLIST NOW, BUT DO NOT ATTEMPT To MEMORIZE ANY or- IT. YOU ARE EXPECTED To BE FAMILIAR WITH US CONTENTS IN THE MATERIAL WHICH rouows. The programmer should first explain to the tryout student that the material: he is. to be given are intended to help him learn subiect matter designated in the title. The programmer should emphasize that the role of the student is to help the programmer evaluate some new educational materials. Comments and suggestions that the student makes will help the. programmer make revisions. he programmer should then explain that he has to know how much the student already knows about the subject matter and whether or not the student has all of the prerequisites to learn from the materials. He should then give the student the pre-test (always) and the prerequisites test (if required)* timing the. student on both . Both of these 'may be done when the test subiects are being selected. When the tests have been completed, the programmer should show the student the program and explain again that it is the material, not the student, that is '0 be tested from now on. This is an especially important point about which tin.- student should have no question. (a The student should be given a ball point pen with which to write his answers. (This will prevent him from erasing potentially valuable information for revising the program.) He should be provided with answer sheets, if any. Tell the student to put an "X" next to the items he thinks he got wrong after he has checked his answer. It the program contains Open-ended questions, tell the student about this. Explain to the student that if he doesn't know an answer, he should take a guess and write "guess" on the answer sheet. If he simply can't think of an answer, he should leave the answer blank and place an "X_" next to the item on the answer sheet. Till the student the time limits placed on the tryout session and that he can tqu- a breul: whenever he feels like stopping. ""-'.°mpltosl7f: that any comments he wants to write or express to the programmer ..ili he UStl-Iul and wercomed . 149 IO. Then ask the student to commence with the materials. (If the student asks what he should do or asks if he's doing it right, the programmer should gently insist that all the directions necessary are given in the materials. It is important to try out the directions, too.) II. The programmer should note carefully the time at the beginning and end of each tryout session and keep track of "break time" . 
...—.- "‘You give a prerequisites test if the program assumes skills such as mathematics or vocabulary knowledge that the students might not have. For example, a statistics program should have an arithmetic and algebra prerequisites test. You do not need a prerequisites test if the population can be assumed to have the required background. For example, management programs will not generally 'require a prerequisites test on company organization because a knowledge of this is assumed. II. III. 150 "AGENDA" FOR MK II TRYOUT/DEBRIEFLNG Instructional Develgpment Tryout Session Preflight Facility: Check software installation and operation in each carrel. Check for required number of workbooks, pre- and post-tests, answer sheets, keys, data matrices, reactionnaires, audio recording equipment and problem posting flip chart, and refreshments. Student Arrival: l. 2. Pass out name tags Create atmosphere of informality and low threat Students have volunteered for this session and are unsure as to whether this will adversely affect their grade in the course, future employment, or other more horrible reprisals. They must be put at ease or very little constructive criticism will be forthcoming. Therefore, wear informal clothes (the student will) and make small talk as students arrive. Intrbductgry Remarks: 1. Welcome: Thank students for their willingness to help you revise your "first draft" materials. Assure them that their frank and honest opinions are of crucial importance and that nothing they say will in any way affect their grade, job, or pose other threats. It is the author and the program which is un- der the gun--not them. Role 9j_Students: To help you identify weaknesses in the materials, pro- cedures, or exams. and to make comments and/or suggestions for improvement. You are looking for comments pro and can on "rele- vance," "redundancy," "boredom," "obscurity," "clarity of visuals," "needless make work," poor exam questions, etc. Role gj_Author: Your role is to gather data and suggestions for revising the materials and to provide tutorial assistance to the stu- dents on any aSpect of the lesson. IV. 151 Overview of the Procedure: The tryout will be in with a pre-test (to assess how much they know to start with?; then use the lesson materials: then a post—test (to determine how much they have learned from the materials); followed by an Opinionnaire and then a break, with refreshments. After the break will be a group debriefing. General Instructions: l. Test Scoring: Both pre-test and post-tests are self-scoring; students score their own. Please mark incorrect answers on the answer key--not in the test booklet. Scores do not count towards a grade; they are for your in- formation and to show us weaknesses in the lesson. g3 Honest: Don't look at the answer key before or during the exams. If you artificially inflate your score, we don't really know how good (or bad) the lesson is. Guessing: Guess at the answers you don't know, and place a question mark after your answer on the test booklet. If you don't understand the question, place a question mark in front of the question in the test booklet and the answer key. Ask for Help: If you have problems during the lesson, raise yourThand and I will come over. 00 not talk to your neighbor. Write Down Your Problems: When you have a problem, write it down in the workbook} Reactionnaire: We need your opinion on several critical aspects of the lesson design. Be frank and honest as you fill this out. Break: Have a coke and donut and don't go away. 
We need you for the debriefing. Debriefing: We will reconvene to discuss the lesson, using exam scores, reactionnaire data, and your notes and comments to organize the discussion. Remember, any comments you make will be useful. AH lll AGENDA I. 152 "CHECKLIST" FOR MK II TRYOUT AND DEBRIEFINB Instructional Development SLATE Tryout Procedure 2l October l97O INTRODUCTCRY REMARKS l. Welcome: thank students for their willingness to participate in the tryout. Introduction: doctoral research experimenter and AH grad assis- tants. gsms_lsas? Pass out name tags and explain they will help identi- 1cation throughout the session. Role of Student: to help designer identify weaknesses in the set of new materials. Comments and suggestions WILL be utilized for revisions. ' Overview of Procedure: a. Pre-test: We must find how much you already know about the subject matter to determine how much you have learned tonight and see how good or bad the materials are--hence the pre-test. b. Sure or Unsure Measure: we need to know if you "really know" something or if you were a good guesser. Circle S or U on tests. it. Take the Program: again reiterate it is the materials not the students being evaluated. . d. During the Program: designer will circulate to answer questions. ‘ I. Do not talk to each other--ask the designer. 2. Write your comments/questions in the margin of the workbook "not clear," "too fast," "irrelevant," "busywork," etc. 3. Raise your hand and designer will come to you. THESE COMMENTS AND QUESTIONS ARE CRITICAL--SO DON'T BE SHY 4. You may smoke, or take a short (l-2 min.) break when you want to. e. f. 153 Post-test: same as the pre-test, and will give us a measure of the teaching effectiveness of the materials--weaknesses. Reactionnaire: immediately after post-test, while your memory is fresh, answer several questions about how you felt about important design aspects of the materials. Break: l5 minute, coffee and coke, donuts supplied by the house. Debriefing: very critical discussion following the break to eXplore your questions and comments, and obtain your recom- mendations on what and how to revise the materials. APPENDIX C 154 GOVERNI’ZBITT OF IMO STATE OF NIGERIA MINISTRY OF EDUCATION OITERRI. MOEZINZSECIhZVol.II[h31 2nd October, 1980 The Principal, eb...'o.‘-.ec.eeeeeee.. eeeeeeeeeeeeeeeeeeeoe Regnest for Permission to Conduct Research in Secondary Schools in Imo Stat: I am directed to inform you that permission has been granted to Mr. Hyacinth 1. Dike, a Research Student on a.Doctoral Programme at Michigan State University U.S.A. to give out his questionaire to be completed by teachers and the school Principal. 2. You are therefore requested to give him maximum co-operation. ,,¢1:._ ‘ fins,» / (- I " NJ ’ I - 3W", "i“vE. C. Nwokoma for Permanent Secretary .MOELIr/SEC,QZVo;,II[531A Ministry of Education 2nd October, 1980 Copy to: Area Inspectbriof'Education Ares.Inspectorate.0ffice.. eeeeeeiede..eeeeeeeeeeeeebe Abdve fer information please. ’ .(--/"/‘J ‘A. .‘L -\" (I s—~»"15CC”N§oEoma for Permanent Secretary Ministry of Education. APPENDIX D 1‘55 "PERCEPTION CF SECONDARY SCHOOL TEACHERS ON THE SUITABILITY OF FORMATIVE EVALUATION PROCEDURES FOR ADOPTION IN NIGERIA" 156 Michigan State University College of Education Dear Educator, The purpose of the enclosed questionnaire is to get your response on the relevance/suitability of procedures for conducting formative evaluation for secondary education in Nigeria. These procedures are derived from a review of existing formative evaluation models. 
Your responses will be used to develOp a program to be used in conducting formative evaluation of instructional programs and for future adOption of such a program in our secondary educational system. What is formative evaluation?: This is a process of subjecting a freshly prepared lesson plan or any instructional product to a student or a group of students and based on feedback from them, revising this original /fresh material. This revision process can continue until the instructional material is of high quality. Procedures: The following techniques stated below are widely used for formative evaluation. Below each technique is a brief description of what the technique implies. IhiS‘Will be followed by some questions. The spaces provided are for you to rate your reSponses according to the importance you attach to the questions using tne numerical scales provided. It would be appreciated if you could return the questionnaire using the self addressed and stamped envelope. Procedures for formative valuation ;157 Tutorial Approach— Definition: This is the process of using a single student for conducting formative evaluation. Instruction: Equate each activity according to the following scale and check. your reSponse oy the apprOpriate number. Scale: One stands for Strongly disagree 2 for Disagree; 3 for Do not understand; h for Agree 3 S for Strongly Agree. 1. In collecting data for revising my lesson plan or for any instructional material I consider the use of a single student appropriate l :2. 3 L. ' 5 SD D DK A SA 2. I consider the use of a single student appropriate because of the ease in obtaining one 3. I consider the use of a single student apprOpriate because the face-to-face interaction leads to finding out the exact problem in a lesson plan l l 3 - ‘3 Sb LD DK ‘2" SA h. I consider the use of a single student inapprOpriate because a single student is not representative of the class l a. s q. 5 Sb D bk A SA 5. I consider the use of a single student inapprOpriate because it is time consuming \ a. 3 u. s D IDK A SA 158 6. I consider the use of a single student inappropriate because it is subject to the idiosyncratic responses of individual learners and tutors \ I 3 L‘. 5 Sb b Dig A SA I_._ar_gevGroup Approach: This entails obtaining feedback from uO—SO students for revision of a lesson plan or instructional material 7. in collecting data for revising a lesson plan I will prefer the use of a large group. I a. . 3 k- s so 3 3K A. SA 8. I will prefer the use of a large group because this can be obtained from an intact class 4‘- \ I i 3 5 513 b DK 2: sq 9. I will prefer the use of a large group because this provides more data about program deficiency l a. 3 u. 5 Sb 3) DK A SA 10 The use of large group will not be preffered because the absence of face- to-fsce interaction‘makes the identification of exact problems difficult \ . E .. 3 I; b b . DR A 5A 1.1. The use of large poup approach is prefesred because it is economical for an instructional material made up of munits -1—- T ' .... Combination of Tutorial and Large_GrogLapproachz This is 1the use 051' both ak 1 single student and an intact class 12. In collecting data for revising a lesson va will prefer the use of s Wed approach of single and large -students I a. 3 w- s 3 DK A SA 13. 
A combined approach is preferred because it takes care of the deficiencies of using a single student and a large group.

Selection Procedure: Rate the following selection procedures as you would prefer to use them in conducting formative evaluation.
14. In selecting my sample for revising my lesson plan or instructional material, I would prefer using a random selection technique.
15. In selecting my sample for revising my lesson plan or instructional material, I would prefer picking any student(s) that I see.
16. In selecting my sample for revising my lesson plan or instructional material, I would prefer using students of:
High subject matter competence
Average subject matter competence
Low subject matter competence
17. In selecting student(s) for revising my lesson plan or instructional material, I would prefer using a student of high ability, another of average ability, and another of low ability, in that order.
18. Why have you responded the way you did in the above questions on Selection?

Specification of course objectives:
19. Well specified course objectives help in the selection of teaching aids for assisting instruction.
20. Well specified course objectives help in the selection of a teaching method for instruction.
21. Well specified course objectives help in stating learning activities for achieving learning objectives.
22. Well specified course objectives can help in evaluating learning outcomes.
Comments
23. Why did you respond the way you did in the questions above?

Use of Error Counts:
24. Error counts or mistakes in a student's performance can be used to ascertain if course objectives are being achieved.
25. Students' mistakes or error counts can be used to revise a lesson plan or instructional material.
26. Students can be asked how they like a subject matter or why they do not like a subject matter, and this information can be used to revise a lesson plan.
27. Student(s) can be asked to comment on the clarity of statements and illustrations in a lesson plan or instructional material, and this information can be used to revise a lesson plan.
28. Student(s) can be asked to comment on the appropriateness of the sequence of the content of a lesson plan, and this information can be used for revision.
29. Student(s) can be asked to encircle vocabularies they do not understand, and this information can be used to revise a lesson plan or instructional material.
Comments
30. Why did you respond the way you did in the above questions?

Observation and Interviewing of Students
31. In collecting data for revising my lesson plan or instructional materials, I would observe student(s) and use their feedback for revision.
32. Interviewing student(s) during their use of instructional material can provide feedback for revision.

Certain skills are essential for conducting formative evaluation. The questions below are to find out your competence with these skills.

Specification of course objectives
33. In specifying a course objective, I always state in writing the audience for whom the objective is meant.
34.
In specifying a course objective, I always state the conditions under which learning is to occur.
35. I always specify my course objectives in measurable behavioral terms.
36. In specifying my course objectives, I always state the criterion for assessing student performance.

Criterion-referenced test: This is a test based on course objectives that attempts to assess how far a student has shown mastery of these objectives. It is different from norm-referenced tests, which attempt to assess a student's performance relative to other students in the class.
37. I always assess my students based on stated course objectives, the conditions for attaining the objectives and the stated criterion for assessment.
38. I can use error counts or mistakes in a student's test performance to revise a lesson plan.
39. I can use students' comments on how they like a subject matter to revise my lesson plan.
40. I can use students' comments on the clarity of statements or illustrations in a lesson plan for revision.
41. I can use students' comments on the appropriateness of the sequence of instructional content to revise my lesson plan.
42. I can use students' comments on the difficulty of vocabularies in a lesson plan for revision.
Comments
43. Why did you respond the way you did in the above questions?

Skills for interviewing and observing students
44. In observing student(s), I would look for frowns on their faces.
45. Briefly describe what you would do if you observe frowns.
46. During an observation, I would watch out for difficulty in operating equipment.
47. How would you use such information for revision?
48. During an observation, I would watch for distractions in using an instructional material.
49. How would you use such information for revision?
50. During an interview, I would ask student(s) to comment on the appropriateness of cues in a lesson plan or instructional material.
51. During an interview, I would ask students to comment on the clarity of statements and illustrations in a lesson plan.

FOUR
52. During a revision exercise of my lesson plan or instructional material:
One revision exercise is enough
Two revision exercises are enough
More than two revision exercises are enough
53. Why have you responded the way you did on the above question?
54. During a revision exercise, I would prefer: (Check the one(s) you prefer)
Use of Pretest ( )  Post test ( )  Interview ( )  Observation ( )  Students' comments ( )  Teachers' comments ( )  Experts' comments ( )
55. Revision can be performed by the:
Instructor alone
Instructor and subject matter expert
Subject matter expert alone
56. After revision, the revised material should be tried with:
The same group of students that provided the original feedback
A different group of student(s) altogether
A different but equivalent group of student(s)
57. Which of the approaches would you prefer:
A tutorial approach
Large Group approach
A combination of tutorial and Large Group
58. Why did you respond the way you did above?
3" 9 Is A M 166 THREb The following characteristics (1-3) are peculiar to organizations. Each characteristic will be defined and this deiinition will be followed by a set of questions for finding out characteristics in your Ministry. Structure: Every organization has a hierarchy of status which indicates how interactions occur among members of the organization: 58. «ell specified official ranks and duties associated with each rank is characteristic of my organization : '1 2 ' q. 5 Sb .5 ‘33; ‘A 3P\ 59. It is not possible to obtain -official information from another officer in my unit without getting clearance from an immediate bOSS' ‘—T—" —T‘— ”'5‘— ‘T 6: Sb 6 15K A SA 00. Only Heads of divisions can discuss official information at meetings : “‘2'"— “'57—" H- 5- 9D A 3K A SR 61. There are too many channels of communication for information to be used in my organization 1 a. 3 q. :5 so 5 3K A 3A 02. I am always promptly aware of major developments in my Ministry : ‘ i. 3 — 3b b by; k 3R 03. I always recieve information on policies right on time ‘ x ——'s "T ‘5‘ ‘9 b an a an 6h. There exists a group of officers whose Opinions are highly respected in my Ministry v N r +‘ W, | Sb 3 bk; 167 05. For a proJect to succeed, it must be supported by these highly respected opinion leaders £5 3 ‘DVK 1L 5”! 00. For a project to succeed, it must be originated by the highest placed officers \ 5- § q- 5' so ‘3 bu. Ar 53* 07. There exists a task force in the Hinistry responsible for ensuring that problems are solved expeditiously : a. ‘3 it :5 an a but A. .85\ 68. Officers in the ainistry are encouraged to volunteer policy suggestions for delibration : a. 5 3; '5 Sb ‘3 JHK ‘sa o9.Policy suggestions are freely discussed in open sessions before decision is taken in the Hinistny \ a. u- 5' Sb :3 aka «A 3A. 70. I am not aware of the origin of policies/projects in this Ministry in: :3 abs. 7‘ :5: 71. I am motivated to work hard in this ministry because most of the time, I feel happy as a member "v S ‘A 19} ‘ a. 3§ ) 33K 72. because I am happy in this Ministry, I always put in my best effort in NY"Iork. : a. '3 E? ‘0 Lb 5K. iii 5“ 73. I am.notivated to work hard in this Ministry because most of the time, I derive satisfaction from my work \ . a. 3 ‘e 57 $3 5 (K 7h. 75. 76. 77. 78. 168 I am motivated to work hard in this ministry because I find the reward system encouraging fl "r ‘3 k F: 34:. | 1 SD .5 bun I am motivated to work hard in this Ministry because my Opinions/suggestions are given fair consideration \ 1- 3 7R 5' SD 1> :b“\ SR Lack of facilities/resources is a hinderance to effective implementation of projects in this Ministry ; i, '3 9' g; :3 D bwt ‘\ 5F\ Lack of opportunity for in—service training is a hinderance for implementation of projects in this Ministry t 1 3 1+ 5. 35 3 DR A SQ Lack of qualified staff is a hinderance to ef”ective implementation of projects in this Ministry ...,- 1. ID ED 169 FIVE 79. 80. 81. 82. 83. 8h. I would want the present organizational structure in my Ministry to be modified ‘ i 3 SD .3 3‘. pr? Why have you reaponded the way you did above? I would want a more open system that allows a free flow of information to be created in this Ministry l t 3 ' u. 6' Sb 3 AK A SA I would prefer an organization in.which my opinions are respected and encouraged . T— ‘3— ‘“:- s so 3 bk 1: m I would prefer an organization in which officers are rewarded for being innovative. 
Please supply the following information:
Name of place in which my school is located
Qualification(s)
Years of teaching experience
Sex of school (Boys or Girls?)

Dear colleague,

I have to thank you immensely for completing this pilot study questionnaire. I recognize that no human being is ever perfect. For this reason, please feel very free to criticize this questionnaire as much as you can. Specifically, please state:
1. Which questions you consider ambiguous, irrelevant, repetitive, or do not understand.
2. Do you consider it proper to define some of the procedures/techniques before presenting the questions? In other words, do you think such definition "sensitizes" you and thus biases your responses?
3. Do you think the questions are too long?
4. Do you think the questions are time consuming? Please specify the amount of time it took you to complete the questions.
5. What other improvements would you recommend?
Please use the attached plain sheets for your reactions. Once again, many thanks for allotting some of your time to me.

H. I. DIKE

APPENDIX E

Division of Educational Systems Development
College of Education
Michigan State University
East Lansing, Michigan 48824
September 10, 1980

Dear Colleague,

DOCTORAL DISSERTATION QUESTIONNAIRE

Each year the government of Nigeria invests a substantial amount of her annual budget on education. This is based on the belief that education can help her citizens to acquire the knowledge and skills which they can use for improving their environments.

Part of this government expenditure is used to purchase instructional materials and to develop new ones for use in our schools. There is research evidence to show that these materials are seldom tried out and revised before utilization. If instructional materials are seldom tried out and revised before utilization, one can hardly avouch for their quality and effectiveness. In other large industrial establishments, products are first tried out and revised with feedback from users before they are mass produced for consumption.

This try out and revision process is known as FORMATIVE EVALUATION. There are three different types of formative evaluation procedures, namely: (1) The Tutorial Approach, or the use of one student at a time, (2) The Large Group Approach, or the use of 40 or more students, and (3) The Small Group Approach, or the use of 4-8 students at a time. Each of the three types is described below:

1. In using the Tutorial Approach, the tutor selects his student, gives him a pre-test to determine his entry level, lets him go through the material (notes of lesson, films, cassettes, transparencies, etc.) and gives him a post test. The tutor then revises the original material using the post test scores. While the student is using the material the tutor gives him short written quizzes to find out his difficulties. The tutor can also interview and observe this student to discover problems this student is encountering. Using these feedbacks, the tutor revises the original material. The revised material is again tried to see if it is effective. If it is not, the process is repeated.
An advantage of the tutorial approach is that the face-to-face interaction between a tutor and a student helps in identifying more detailed deficiencies about a material. Its disadvantage is the use of one student and the likelihood of introducing bias during the interaction.

2. It is for this reason that the Large Group Approach is used by some evaluators.
In this approach only the pre-test and the post test are used. There are no face-to-face interactions as we have during interviews and observations of subjects. An advantage of the Large Group Approach is that data is collected from many students, while its big disadvantage is the absence of face-to-face interaction.

3. To overcome these disadvantages of both the Tutorial and the Large Group Approaches, some evaluators use the Small Group Approach. This involves using the face-to-face interaction as in the tutorial approach as well as obtaining observational and written feedback from 4-8 students.

I am conducting a survey research for my doctoral dissertation entitled: "Perceptions of Secondary School Teachers and Administrators of the Suitability of Formative Evaluation Procedures for Adaptation in Secondary Schools in Nigeria." My aim is to find out how suitable the various Formative Evaluation Procedures used in other countries can be for our educational system and to identify factors that may facilitate or hinder its adoption.

Using the elements for conducting formative evaluation identified from past research, I have developed a questionnaire aimed at identifying teacher and administrator perceptions of the suitability of these various elements for secondary education IN NIGERIA.

You have been randomly selected for this study. Since the ultimate goal of this research is to develop a formative evaluation program for the State, your honest and sincere responses to the questionnaire will be highly appreciated.

Complete anonymity will be maintained in this research. Towards this end, no name is in any way required on the questionnaire. Thanks for your co-operation.

Yours faithfully,
Hyacinth Ibe Dike

[The rating pages of the pilot questionnaire that follow are illegible in the source copy.]
SECTION 6

In the space below, please indicate any additional factors or conditions that will hinder or facilitate the adaptation or use of formative evaluation by teachers in your school. These factors can be those particular to you, to the norms of your school and/or the norms of the culture.

SECTION 7

Please look through this questionnaire once more and see how you can help me to improve it. What would you do to make it more understandable? Please refer to specific questions or sections. Thank you.

APPENDIX F

Division of Educational Systems Development
College of Education
Michigan State University
East Lansing, Michigan 48824
September 10, 1980

Dear Colleague,

DOCTORAL DISSERTATION QUESTIONNAIRE

Each year the government of Nigeria invests a substantial amount of her annual budget on education. This is based on the belief that education can help her citizens to acquire the knowledge and skills which they can use for improving their environments.

Part of this government expenditure is used to purchase instructional materials and to develop new ones for use in our schools. There is research evidence to show that these materials are seldom tried out and revised before utilization.
If instructional materials are seldom tried out and revised before utilization, one can hardly avouch for their quality and effectiveness. In other large industrial establishments, products are first tried out and revised with feedback from users before they are mass produced for consumption.

This try out and revision process is known as FORMATIVE EVALUATION. There are three different types of formative evaluation procedures, namely: (1) The Tutorial Approach, or the use of one student at a time, (2) The Large Group Approach, or the use of 40 or more students, and (3) The Small Group Approach, or the use of 4-8 students at a time. Each of the three types is described below:

1. In using the Tutorial Approach, the tutor selects his student, gives him a pre-test to determine his entry level, lets him go through the material (notes of lesson, films, cassettes, transparencies, etc.) and gives him a post test. The tutor then revises the original material using the post test scores. While the student is using the material the tutor gives him short written quizzes to find out his difficulties. The tutor can also interview and observe this student to discover problems this student is encountering. Using these feedbacks, the tutor revises the original material. The revised material is again tried to see if it is effective. If it is not, the process is repeated.
An advantage of the tutorial approach is that the face-to-face interaction between a tutor and a student helps in identifying more detailed deficiencies about a material. Its disadvantage is the use of one student and the likelihood of introducing bias during the interaction.

2. It is for this reason that the Large Group Approach is used by some evaluators. In this approach only the pre-test and the post test are used. There are no face-to-face interactions as we have during interviews and observations of subjects. An advantage of the Large Group Approach is that data is collected from many students, while its big disadvantage is the absence of face-to-face interaction.

3. To overcome these disadvantages of both the Tutorial and the Large Group Approaches, some evaluators use the Small Group Approach. This involves using the face-to-face interaction as in the tutorial approach as well as obtaining observational and written feedback from 4-8 students.

I am conducting a survey research for my doctoral dissertation entitled: "Perceptions of Secondary School Teachers and Administrators of the Suitability of Formative Evaluation Procedures for Adaptation in Secondary Schools in Imo State of Nigeria." My aim is to find out how suitable the various Formative Evaluation Procedures used in other countries can be for our educational system and to identify factors that may facilitate or hinder its adoption.

Using the elements for conducting formative evaluation identified from past research, I have developed a questionnaire aimed at identifying teacher and administrator perceptions of the suitability of these various elements for secondary education in Imo State.

You have been randomly selected for this study. Since the ultimate goal of this research is to develop a formative evaluation program for the State, your honest and sincere responses to the questionnaire will be highly appreciated.

Complete anonymity will be maintained in this research. Towards this end, no name is in any way required on the questionnaire. Thanks for your co-operation.
Yours faithfully,
Hyacinth Ibe Dike

PROCEDURES FOR FORMATIVE EVALUATION

Section 1
Directions: Based on the letter accompanying this questionnaire describing and explaining the three models for formative evaluation, and your understanding of your school system, please check by marking (X) in one of the boxes below to indicate which of the following formative evaluation approaches you would consider selecting for conducting formative evaluation in your school.
1. Tutorial Approach ( )
2. Large Group Approach ( )
3. Small Group Approach ( )

Section 2
Directions: Indicate by marking (X) the degree to which you agree or disagree as to which of the following characteristics of formative evaluation models influenced your choice of approach.
(Response options for each item: Strongly Disagree, Disagree, Agree, Strongly Agree)
4. The ease of obtaining subjects influenced my choice of formative evaluation model.
5. The ability of the approach selected to avoid introducing biases during a revision exercise influenced my choice.
6. The approach selected is similar to the type of formative evaluation conducted in my school.
7. The approach selected is less complex than other approaches.
8. The approach selected can lead to the collection of more detailed attitudinal data.
9. The possibility that a face-to-face interaction will yield more data about program deficiency while using this approach influenced my choice.
10. The possibility of administrative support for using this approach influenced my choice.
11. The availability of resources for using this approach influenced my choice.

Section 3
Indicate by marking (X) the extent to which you agree or disagree with each of the following statements about formative evaluation.
(Response options for each item: Strongly Disagree, Disagree, Agree, Strongly Agree)
12. Well specified course objectives are very essential for conducting formative evaluation.
13. Formative evaluation is possible even if a tutor cannot specify course objectives.
14. In selecting a sample for revising an instructional material (notes of lesson, films, cassettes, etc.), one should select students of varying abilities (i.e. high ability, average ability and low ability students).
15. Students used in formative evaluation should be selected randomly.
16. Students can be observed and interviewed while using an instructional material and this information can be used for revision.
During an interview, students can be asked:
17. To comment on the clarity of statements.
18. To comment on the clarity of illustrations.
19. To comment on the appropriateness of the sequence of contents of instructional materials.
20. To comment on how boring the material is.
21. To encircle difficult terms which they do not understand.
During their use of an instructional material, students can be observed for:
22. Difficulty in operating equipment used to present the material.
23. Frowns on their faces as a sign of difficulty with the material.
24. Students should be pre-tested to find out if they possess the entry skills necessary for instruction.
25. Students' scores on a post test can be used to find out if they understand the main points in a lesson.
26. During a lesson, students should be given short written quizzes to find out how they are doing.
Results of post tests should be analyzed to find out:
27.
What was similar about the items missed.
28. How items missed differ from those passed.
29. What in the instructional material could have caused the failure.
30. How the cause of this failure can be rectified.

SECTION 4
Certain skills are essential for conducting formative evaluation. One such skill is the ability to specify course objectives in behavioral terms, and the second is the ability to construct criterion-referenced tests. These terms are defined below:

Behavioral objective: This is a description of a performance you want learners to be able to perform before you consider them competent. To be well stated, a behavioral objective must specify: (1) the intended audience to use the instruction, (2) the behavior in measurable (action or doing) terms, for example "to write down the names of an object" can be measured whereas "to understand or to know something" cannot, (3) the conditions under which learning is to occur, and (4) the criterion for assessment.

Criterion-referenced measure: This is a test item that measures specifically a stated course objective. It is different from a norm-referenced measure, which helps to discriminate or select among individuals in a group; simple or difficult items are included to produce varied scores. A criterion-referenced test measures the course objectives. It is aimed at finding out how far an individual has mastered a given task. Test items can be difficult or easy, discriminating or non-discriminating, provided they test stated objectives.

Indicate by marking (X) the extent to which you are capable of doing the following:
(Response options for each item: Strongly Disagree, Disagree, Agree, Strongly Agree)
31. I can specify course objectives in behavioral terms.
32. I can construct valid test instruments aimed at finding out student achievement of stated objectives.
33. I have significant skills to objectively observe and interview a subject.

SECTION 5
Please indicate your position on the following statements about factors that facilitate or hinder the use of formative evaluation in your organization.
(Response options for each item: Strongly Disagree, Disagree, Agree, Strongly Agree)
34. Besides the topmost officers in this organization there exists another group of officers whose opinions are highly respected.
35. For formative evaluation to succeed in this organization it must be originated by this group of officers whose opinions are highly respected.
36. There is a possibility of support by the highest ranked officers in the organization for formative evaluation.
37. Because information passes through many hands before reaching me, I will not be aware of how to use formative evaluation.
38. Teachers will be promptly informed about formative evaluation in this school.
39. There exists a task force in this organization that will ensure that formative evaluation is executed expeditiously.
The implementation of formative evaluation will be hindered by the following:
40. Lack of time.
41. Lack of opportunity for in-service training.
42. Lack of qualified staff.
43. Lack of opportunity for workshops/seminars.
Promotion in this organization is based on:
44. An officer's year of graduation.
45. An officer's performance on tasks.
46. The advantages of conducting formative evaluation for instructional materials outweigh the disadvantages of not doing so.
47.
Conducting formative evaluation will not run counter to the norms of teachers, the school and the society.
48. It will not be easy to try out formative evaluation procedures in my school system.
49. I consider formative evaluation procedures simple to understand.
50. I consider formative evaluation procedures easy to use in my school system.
51. It will be easy to observe the effects/results of formative evaluation in my educational establishment.

APPENDIX G

ORAL INTERVIEW INSTRUMENT

Introduction: The researcher explains to his respondent the importance of the study as embodied in the letter to accompany each questionnaire. The three models are also explained, including their advantages and disadvantages. The following questions will be asked to collect data relating to each research question in this study.

Research Question 1:
1. Given my explanation of the three types of formative evaluation, do you think that teachers personally use formative evaluation in developing their instructional materials (notes of lesson, films, slides, transparencies, etc.)?
2. Respondent replies. If yes is the answer, then the following additional questions will follow:
a. From whom did you obtain your feedback--from individual students, groups of students, "experts", etc.?
b. What were your selection criteria for selecting your subjects?
c. What (if any) were the critical attributes of the people you selected for your try out exercise?
d. What kinds of feedback data did you try to obtain--achievement data, attitudinal data, background data?
e. How did you gather the various kinds of data--through tests, interviews, etc.?
f. How did you determine that revision was really necessary?
If no is the answer, then the following additional questions will follow: If you were to conduct formative evaluation,
a. From whom would you obtain feedback?
b. What would be your selection criteria?
c. What types of data would you try to collect?
d. What types of instruments would you use to gather your data?
e. How would you determine if revision was necessary?

Research Question 2:
3. Given what you know about the three models of formative evaluation, what skills do you think you might need in order to conduct formative evaluation using:
a. Tutorial approach
b. Large group approach
c. Small group approach
Researcher: So you think that to be able to use the tutorial approach a tutor ought to be able to (paraphrases one of the skills); can you explain to me further what you mean by the possession of (mention the skill stated by respondent)?
4. Respondent replies.

Research Question 3:
Given what you know about the three models of formative evaluation, would you think it would be feasible to use:
a. Tutorial approach
b. Large group approach
c. Small group approach
What do you see as the major problems that will prevent effective use of formative evaluation in your school system? What factors in your present school system do you think will encourage the use of the Tutorial approach, Large group approach, and Small group approach? Do you think there is any attribute of formative evaluation as you presently understand it that will turn people away from using it? To what extent do you think formative evaluation is compatible with what exists in your school system now? To what extent is formative evaluation different from what obtains in your school now? Do you see any cultural values that will encourage or hinder the use of formative evaluation?
Research Question 4:
(Researcher at this juncture recapitulates the three models.) With my explanation of the three models, do you see the need to modify these models in any way to make them acceptable and useful in your school? (To facilitate comprehension, a diagrammatic representation of the models will be shown to respondents.)

Research Question 5:
In what ways do you think your school organization should change in order for formative evaluation to be used in it?

APPENDIX H

MICHIGAN STATE UNIVERSITY
COLLEGE OF EDUCATION
EAST LANSING, MICHIGAN 48824
DEPARTMENT OF SECONDARY EDUCATION AND CURRICULUM, ERICKSON HALL

September 17, 1980

TO WHOM IT MAY CONCERN:

This is to certify that Mr. Hyacinth Ibe Dike is currently enrolled in the doctoral program in Educational Systems Development (Educational Technology), College of Education, Michigan State University. He came to us in September, 1978 and completed his M.A. degree in this department. He then applied and was immediately accepted in the doctoral program. I served as his M.A. adviser and continue as Chairman of his Doctoral Committee.

As part of Mr. Dike's requirement for completing his Ph.D., he must conduct an original field study, report his findings to M.S.U. in the form of a dissertation and defend his research in an oral examination. In view of his experience and status as an educator in Nigeria, it was considered desirable for him to conduct a study in and for the ultimate benefit of his country.

Consequently, Mr. Dike will soon leave for Nigeria where during the month of October he will collect data to be used in his approved field study: "Perceptions of Secondary School Teachers and Administrators of the Suitability of Formative Evaluation Procedures for Adaptation in Secondary Schools in the State of Nigeria". Following completion of his study, Mr. Dike will return to Nigeria.

Representing his committee, I am asked to say that we would very much appreciate any assistance provided Mr. Dike toward this end.

Sincerely,
Castelle G. Gentry, Director
Professional Programs in Educational Systems Development
CGG/kc

APPENDIX I

Division of Educational System Design
College of Education
Michigan State University
East Lansing, Michigan 48824
August 30, 1980

The Commissioner for Education
Ministry of Education
Owerri
Imo State of Nigeria

Dear Sir:

REQUEST FOR PERMISSION TO CONDUCT RESEARCH IN SECONDARY SCHOOLS IN IMO STATE

I am a Nigerian from Imo State currently enrolled in a Doctoral program in Educational Systems Design (Educational Technology) at Michigan State University, East Lansing. I hope to come home this September 1980 to collect data for my Ph.D. dissertation. My topic of interest is: "Perceptions of Secondary School Teachers and Administrators on the Suitability of Formative Evaluation Procedures for Adoption in Secondary Schools in Imo State of Nigeria."

The meaning of formative evaluation used in this research is "the process of trying out components of prototypes of instructional materials with students and, based on feedback from them, revising the original program". This process of revision continues until the quality of the instructional material is at the desired level of effectiveness.

My research depends on determining teacher and administrator perceptions of formative evaluation procedures. To this end I have developed, with my Doctoral committee's approval, a questionnaire to be completed by teachers and administrators in a selected sample of secondary schools in Imo State.
The purpose of this letter is to ask for your permission that I may submit my questionnaire to selected teachers and administrators. It would be appreciated if you could give me a letter to the Principals of the selected secondary schools for this research.

Thanks for your co-operation.

Yours sincerely,
Hyacinth I. Dike