THE EFFECT OF SYSTEMATIC COURSE EVALUATION BY STUDENTS AND SYSTEMATIC TWO WAY TEACHER-STUDENT FEEDBACK ON ATTITUDES TOWARD INSTRUCTIONAL TECHNOLOGY AND ON COGNITIVE PERFORMANCE IN INSTRUCTIONAL TECHNOLOGY INSTRUCTION

Thesis for the Degree of Ph.D.

MICHIGAN STATE UNIVERSITY

JAY CLEVELAND SMITH

1971

This is to certify that the thesis entitled THE EFFECT OF SYSTEMATIC COURSE EVALUATION BY STUDENTS AND SYSTEMATIC TWO WAY TEACHER-STUDENT FEEDBACK ON ATTITUDES TOWARD INSTRUCTIONAL TECHNOLOGY AND ON COGNITIVE PERFORMANCE IN INSTRUCTIONAL TECHNOLOGY INSTRUCTION presented by Jay Cleveland Smith has been accepted towards fulfillment of the requirements for the Ph.D.
degree.

Major professor

ABSTRACT

THE EFFECT OF SYSTEMATIC COURSE EVALUATION BY STUDENTS AND SYSTEMATIC TWO WAY TEACHER-STUDENT FEEDBACK ON ATTITUDES TOWARD INSTRUCTIONAL TECHNOLOGY AND ON COGNITIVE PERFORMANCE IN INSTRUCTIONAL TECHNOLOGY INSTRUCTION

BY Jay Cleveland Smith

Purposes

The study had two purposes: One purpose was to determine the effect of student course evaluation on attitudes toward instructional technology and its effect on cognitive performance in a graduate course in instructional technology. A second purpose was to determine the effect of systematic two-way feedback on attitudes toward instructional technology and its effect on cognitive performance in a graduate course in instructional technology.

Summary

Three sections of Teacher Education 548: Audiovisual Media were offered during the summer of 1970 at Western Michigan University. Each of the sections was randomly assigned to one of three research treatments. Treatment A was a replication of procedures for course evaluation utilizing student opinions and value judgments developed by Smith1 and detailed in Chapter II of the study. Treatment B was application of the same procedures with modifications involving systematic two-way (student-teacher) feedback. Treatment C was a control. Subjects in each treatment group were given, pre and post treatment, the New Educational Media Attitude (NEMA) inventory and an instructor-written cognitive test, A Test for Audiovisual Media. The experimental design used in the study was a Pretest-Posttest Control Group Design.

Four statistical hypotheses were generated and tested:

1a: When given the opportunity to evaluate systematically a course of instruction, students' level of cognitive performance in that course will not be greater than without that opportunity.
1b: When given the opportunity to evaluate systematically a course of instruction, students' level of attitude toward the content of the course will not be more positive than without that opportunity.

2a: When given the opportunity for systematic two-way feedback on a course of instruction, students' level of cognitive performance in that course will not be greater than without that opportunity.

2b: When given the opportunity for systematic two-way feedback on a course of instruction, students' level of attitude toward the content of the course will not be more positive than without that opportunity.

A one-way multivariate analysis of covariance procedure was used to test Hypotheses 1a and 1b for significance at an alpha level of .05 with appropriate degrees of freedom. A Post-hoc Comparison in Data technique was used to test Hypotheses 2a and 2b.

Conclusions

The analysis of the data supports the following conclusions:

1. When given an opportunity for systematic evaluation of a course of instruction, students' cognitive performance in that course is better and their attitude toward the content of the course is more positive than when such evaluation opportunity is not afforded.

2. Although when given the opportunity for systematic two-way feedback on a course of instruction students' level of cognitive performance in that course is not materially affected, their attitudes toward the content of that course are significantly more positive than when such feedback opportunity is not afforded.

3. Students' attitude toward the content of a course is more positive when given the opportunity for systematic two-way feedback on that course of instruction than when only given the opportunity for systematic evaluation of the course of instruction without the opportunity for systematic two-way feedback.

1Jay C. Smith, "The Design and Trial of a Course Evaluation System Utilizing Student Opinions and Value Judgments" (unpublished M.Ed.
dissertation, University of Hawaii, 1969).

THE EFFECT OF SYSTEMATIC COURSE EVALUATION BY STUDENTS AND SYSTEMATIC TWO WAY TEACHER-STUDENT FEEDBACK ON ATTITUDES TOWARD INSTRUCTIONAL TECHNOLOGY AND ON COGNITIVE PERFORMANCE IN INSTRUCTIONAL TECHNOLOGY INSTRUCTION

BY Jay Cleveland Smith

A THESIS Submitted to Michigan State University in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY

College of Education

1971

ACKNOWLEDGEMENTS

The research has been done. The paper has been written and the orals have been passed. The University regulations state that when this dissertation has been typed, duplicated and bound, and my fees have been paid, I will be a "Ph.D." I'm not sure I know what that means. I think it means that, in the judgment of my Committee and the University, I should be able to demonstrate the attributes of a Ph.D.

For over ten years I have associated with and observed Ph.D.s at work. From my observations and associations, I've formulated a concept of what I believe to be the desirable attributes of a Ph.D. These attributes include a quiet but apparent competence in an area, an ability to identify and solve a problem, incisiveness, a respect for colleagues, "professionalism," and a personal confidence that permeates all activities.

This dissertation has been the catalyst which has given me the opportunity to learn from those individuals with whom I have worked. From each of them I have been able to observe combinations of attributes I hope to emulate. Each of them, in my experience, has impressed me with special qualities. As my acknowledgements I herein thank each of them and will, very frankly, copy from each of them.

My committee has been invaluable to me: From Dr. Hideya Kumata I have observed (and hope to copy) a probing and gnawing incisiveness that characterizes a quick mind.--From Dr. Lawrence W.
Lezotte I have observed (and hope to copy) an ability to systematically identify and solve a problem using planned research as a tool. In a very meaningful way, Dr. Lezotte has taught me, by example and on a one-to-one basis, the value of disciplined research techniques.--From Dr. Paul W. F. Witt I have observed (and hope to copy) "professionalism" of the highest order. From working with Dr. Witt I have come to know, I think, what it means to be part of a "community of scholars." From him I have also learned the importance of professional and personal integrity.--From my Committee Chairman, Dr. Charles F. Schuller, I have observed (and hope to copy) an ability to get a job done: No foolishness. No alibis. No prostitution of standards. Identify a task and accomplish it!

It has not been difficult for me to develop a respect for my colleagues. Without the help of one of them, David W. Hessler, the dissertation could not have been completed. It takes a rare individual to submit his work to microscopic examination. When that same person not only submits his work for examination but also alters his procedures, spends an inordinate amount of time working with you, and demonstrates a high and sincere ebullience for what you are trying to do, you know you have found not only a professional colleague but also a good friend. Dave Hessler is that rare individual and I am indeed fortunate he is my friend.

My fellow graduate students in Instructional Development and Technology have endured my moods and have constructively criticized my ideas and my work. The associations I have had with them have been a valuable co-curricular component of my program.

I think I know the source of my desired Ph.D. attribute of personal confidence. It comes from the faith and support of loved ones. From the very beginning, my mother and father have encouraged and supported me. My wife, Peggy, gave up a comfortable and secure life when she insisted that I return to school.
In innumerable ways she has demonstrated her faith in our goal. My children, Gregory and Cherylle, have not always understood what I was doing. They did know it was right because I was doing it. It is difficult not to be personally confident with the faith and support of a family as loyal as mine.

TABLE OF CONTENTS

                                                          Page
ACKNOWLEDGMENTS . . . ii
LIST OF TABLES . . . vii
LIST OF FIGURES . . . viii
LIST OF APPENDICES . . . ix

Chapter
I. RATIONALE FOR THE INVESTIGATION . . . 1
   Purpose . . . 1
   Need for the Study . . . 1
   Hypotheses . . . 4
   Theory . . . 5
   Definition of Terms . . . 13
   Overview . . . 16
II. DEVELOPMENT AND DESIGN OF THE STUDENT COURSE EVALUATION SYSTEM . . . 17
   Review of Evaluation and Systems Analysis . . . 17
   Design of the Model for Student Course Evaluation . . . 27
   Summary . . . 37
III. PROCEDURES AND METHODOLOGY . . . 38
   Purpose . . . 38
   Population . . . 38
   Research Design . . . 39
   Procedures . . . 40
   Differences in Treatments . . . 42
   Instrumentation . . . 46
   Statistical Analysis . . . 49
   Statistical Hypotheses . . . 51
   Summary . . . 53
IV. ANALYSIS OF DATA . . . 55
   Analysis of Data . . . 55
   Hypotheses . . . 57
   Summary . . . 60
V. SUMMARY AND CONCLUSIONS . . . 63
   Summary . . . 63
   Conclusions . . . 65
   Discussion . . . 65
   Recommendations . . . 67
BIBLIOGRAPHY . . . 74
APPENDICES . . . 86

LIST OF TABLES

                                                          Page
Baseline Data for NEMA . . . 49
Group Means . . . 56
Multivariate Analysis of Covariance . . . 57
Summary of Results . . . 61
Summary of Results: Post-hoc Comparison in Data . . . 62

LIST OF FIGURES

                                                          Page
Generalized Schema for Research in Teacher Effectiveness (Mitzel) . . . 6
Type III Variables . . . 9
Systems Approach to Course Design (John Barson, et al., Michigan State University) . . . 30
Smith Model for Course Evaluation Utilizing Student Opinion and Value Judgments . . . 32
Research Design . . . 41
Comparison of Treatment A with Treatment B . . . 45

LIST OF APPENDICES

Appendix                                                  Page
A. Newer Educational Media Attitude Inventory . . . 87
B. A Test for Audiovisual Media . . . 91
C. Sample Course Hand-Outs . . . 98
D. Sample Course Evaluation Questionnaires . . . 104
E. Statistical Data . . . 116
F. Major Generalizations: Student Ratings of Teachers . . . 123
G. Datrix Reference Listing . . . 127

CHAPTER I

RATIONALE FOR THE INVESTIGATION

Purpose

One purpose of this study was to determine the effect of student course evaluation on attitudes toward instructional technology and its effect on cognitive performance in a graduate course in instructional technology. A second purpose of the study was to determine the effect of systematic two-way (student-to-teacher and teacher-to-student) feedback on attitudes toward instructional technology and its effect on cognitive performance in a graduate course in Instructional Technology.

Need for the Study

It has become increasingly apparent over the last few years that technology can improve instruction and should be an integral part of classroom instruction. The evidence has become so strong, in fact, that educators can no longer afford to overlook it. The Presidentially appointed Commission on Instructional Technology reported " . . .
that technology properly employed could make education more productive, individual, and powerful, learning more immediate, instruction more scientifically based, and access to education more equal."1 The conclusion of the Commission was that this nation " . . . should make a far greater investment in instructional technology. We (the Commission) believe that such an investment will contribute to extending the scope and upgrading the quality of education, and that the results will benefit individuals and society."2

Even though there is both need and support for the use of technology in education, its actual use is minimal. In the Commission's report it is estimated that there are fifty million pupils attending class an average of five hours a day, five days a week, or 1,250,000,000 pupil class hours a week, yet:

All the films, filmstrips, records, programmed texts, television and computer programs do not fill more than 5 per cent of these class hours. Some experts put the figure at 1 per cent or less. . . . To generalize and oversimplify: the present status of instructional technology in American education is low in both quantity and quality.3

While it may be true that many teachers are still unaware of the potentials of technology for education, Lois V. Edinger, Professor of Education at the University of North Carolina and a recent past president of the National Education Association, reports that the "vast majority" of the teaching profession are aware of and have accepted the value of technology in education, "albeit with varying degrees of pleasure and readiness."4

If many teachers are aware of the value of technology, why do so few teachers use technology? One of the main barriers to use of instructional technology appears to be attitudinal.

1Commission on Instructional Technology, To Improve Learning (Washington, D.C.: Government Printing Office, March, 1971), p. 34.
2Ibid., p. 34.
3Ibid., p. 21 (Underlining is by the Commission).
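The Commission's weekly total quoted above follows directly from its three figures. As an illustrative recomputation (the pupil count, hours, and days are the report's numbers; the script itself is only a check, not part of the original study):

```python
# Recompute the Commission's estimate of weekly pupil class hours.
pupils = 50_000_000        # fifty million pupils
hours_per_day = 5          # average class hours per day
days_per_week = 5          # school days per week

pupil_class_hours = pupils * hours_per_day * days_per_week
print(pupil_class_hours)   # 1250000000, i.e. 1,250,000,000 pupil class hours a week

# Class hours filled by instructional media at the Commission's
# 5 per cent and 1 per cent estimates.
print(int(pupil_class_hours * 0.05))   # 62500000
print(int(pupil_class_hours * 0.01))   # 12500000
```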
In a 1963 study, Tobias found that teachers reacted more negatively to terms which connote automation, even if the terms all refer to the same thing.5 Teachers tend to feel threatened by technology, one indication of which is the expressed concern that technology may "dehumanize" education.6 Other factors preventing the widespread use of technology include lack of adequate or accessible equipment, lack of skill with equipment, and lack of administrative commitment.7 A final barrier, identified by the Commission, to the widespread use of technology in education is that "teachers (are) not trained in Instructional Technology."8

Where there are good (media) programs, and access to them is well-organized, the use of materials is often minimal because teachers are inadequately trained to exploit what is available.9

Accordingly, in light of the evidence that teachers are often not adequately trained in instructional technology, on the one hand, and the evident importance of positive teacher attitudes toward instructional technology, on the other, it was determined to investigate the relationship of student course evaluation and cognitive performance in instructional technology education and attitudes toward instructional technology.

4Ibid., p. 55.
5Sigmund Tobias, "Lack of Knowledge and Fear of Automation as Factors in Teachers' Attitude Toward Programmed Instruction and Other Media," Audiovisual Communication Review, V14:99-109 (1966), p. 99.
6Perhaps one of the most noteworthy discussions of the "dehumanizing" argument relative to technology is the 1959 volume, Human Nature and the Human Condition by Joseph Wood Krutch (Random House), and the counter-argument by the late James D. Finn, "The Tradition in the Iron Mask," Audio-Visual Instruction, June, 1961.
7Commission on Instructional Technology, op. cit., Appendix B: passim.
8Ibid., p. 83.
9Ibid., p. 83.

Hypotheses

The study was designed to test the following hypotheses:

Effect of Student Course Evaluation

1. When given the opportunity to evaluate systematically a course of instruction, students' level of cognitive performance in that course will be greater than without that opportunity.

2. When given the opportunity to evaluate systematically a course of instruction, students' level of attitude
When given the opportunity to evaluate system- atically a course of instruction, students' level of cognitive performance in that course will be greater than without that opportunity. 2. When given the opportunity to evaluate system- atically a course of instruction, students level of attitude 81bid., p. 83. 9Ibid., p. 83. toward the content of the course will be more positive than without that opportunity. Effect of Systematic Two-Way Feedback 3. When given the opportunity for systematic two- way feedback on a course of instruction, students' level of cognitive performance in that course will be greater than without that opportunity. 4. When given the opportunity for systematic two- way feedback on a course of instruction, students' level of attitude toward the content of the course will be more positive than without that opportunity. Theory In a 1957 analysis of how to determine the effective- ness of teachers and to predict the degree of success a potential teacher will achieve in a classroom, Harold E. Mitzel, Assistant Dean for Research, Pennsylvania State University and an experienced researcher of teacher effective- ness, decided that four major variables were involved in teacher effectiveness. To bring the four variables and their interrelationships into clear focus, he constructed a paradigm10 (see Figure l). The following is his eXplana- tion of the paradigm. 10Harold E. Mitzel, "A Behavioral Approach to the Assessment of Teacher Effectiveness" (New York: Office of Research and Evaluation, Division of Teacher Education), p. 5. (Mimeographed.) Type I Variables Type II Variables Type III Variables Type IV Variables Prediction Sources Contingency Factors Classroom Behaviors Criteria of Effec- tiveness (Inter- mediate Educational Goals) Teacher Variables Teacher Personality Attitudes Interests Abilities Teacher Behaviors etc. 
[Figure 1: Generalized Schema for Research in Teacher Effectiveness (Mitzel). A four-column diagram relating Type I Variables (Prediction Sources: teacher personality and teacher training factors), Type II Variables (Contingency Factors: environmental variables and pupil variables), Type III Variables (Classroom Behaviors: teacher behaviors, pupil-teacher interaction, and pupil behaviors, in and out of the classroom), and Type IV Variables (Criteria of Effectiveness, or intermediate educational goals: changes in pupil behavior, i.e., pupil growth), with a feedback loop indicated. The diagram itself is not reproducible from the scan.]

Type I variables are composed of an almost inexhaustible number of human characteristics (personality and training factors) on which teachers differ and which can be hypothesized to account, in part, for differences in teacher effectiveness. Ideally some Type I variables ought to be estimated before young people begin training as teachers, others by their very nature must be deferred until training is underway or completed.

Type II variables are contingency factors (school environment and pupil variables) which modify and influence the whole complex of behaviors that enter into the educational process. If Type II variables play a commanding role in the achievement of educational objectives, then we will be required to replicate studies of teacher effectiveness in a great many different situations, and predictions of teacher success from Type I variables will have to be contingent upon Type II variables.

Type III variables, or behaviors (teacher-pupil behavior) are of crucial significance in the process of assessing effective teaching.
The classroom provides the focal point wherein the personality and training of the teacher are translated into actions. Likewise school and background influences on pupils determine in part pupils' classroom behavior. It is primarily out of the interaction of these elements that we expect educational goals to be attained. Considering that classroom behaviors bear such heavy responsibilities in determining educational outcomes, remarkably little is known about them or their effects.11

Type IV variables (pupil growth) are the criteria or standards against which the whole of educational effort must be judged. We have subtitled them intermediate educational goals, meaning measurable outcomes at the end of a period of instruction to distinguish them from the ultimate criterion which might be phrased as 'a better world in which to live.'12

The interrelationships among the four types of variables are indicated by connecting lines on Figure 1.

11Underlining added.
12Mitzel, op. cit., p. 6.
Mitzel further indicated that his conceptual asses- ment scheme of teacher effectiveness rested upon at least two fundamental assumptions: First, there must be some stability in human personality which exerts a consistent governing or modifying effect on a teacher's behavior in the classroom . . . The second assumption is that the teacher (or more precisely, the teacher's behavior) as contrasted with the home, the school equipment, the principal, or other factors, is the primary causative factor in accounting for pupil growth toward the goals of the school.14 In part to define personality "stability" with reference to teacher behavior, Ryans attempted to build a "theory of teacher behavior. "15 His basic contention is l3Ibid., p. 6. l4Ibid., p. 7. 15David G. Ryans, Characteristics of Teachers (Washington, D.C.: American Council on Education, 1960), PP- I‘ 13-26. Teacher behaviors In the community In extracurricular school T activities I In promoting mental \ health, etc. I_-+—-_—____— _______ +— I In classroom I 1‘”—‘“" W“ “m" "““’ I / //I I Pupil-teacher interaction I v I I-- ~ ~ a I . In classroom I L eg- ——————————————— A I I I ‘“”"1 Out of classroom I I. Pupil behaviors Figure 2.—-Type III Variables. 10 "the behavior of the teacher that ought to be studied is social behavior." the 16 He further states that his prOposals: . . . do not constitute a complete inventory of all assumptions required for a theory of teacher behavior. Nor is any particular claim made at this point for theoretical rigor. But if in the area of teacher behavior there are advantages in resolving and systematizing our thinking, a starting point is.l7 necessary regardless of how tentative it may be. To develop his "systematic theory,‘ Ryans defined term teacher behavior as: . . . 
the behavior, or activities, of persons as they go about doing whatever is required of teachers, particularly those activities which are concerned with the guidance or direction of the learning of others.18

Ryans stated two major assumptions necessary for a theory of teacher behavior, and listed a number of implications (postulates) relating to each of them. One of his basic assumptions was that "teacher behavior is observable."19 A postulate formulated by Ryans was that teacher behaviors "are revealed through overt behavior and also by symptoms or correlates of behavior."20

16Ibid., p. 13.
17Ibid., p. 13.
18Ibid., p. 15.
19Ibid., p. 19.
20Ibid., p. 21.

Taking a cue from both Mitzel and Ryans, Smith in a 1969 study21 conceptualized a "system for course evaluation utilizing student opinions and value judgments." Smith viewed evaluation of a course in toto, and of teacher behavior in particular, as being made up of at least three integrant parts: (1) those measurements assessing the pupil's behavioral change attributed to the content and/or instructional processes of the course; (2) the assessment made by the instructor of his own instructional performance--be that assessment based on objective or subjective (often intuitive) criteria; and, (3) the assessment made by the student participants of the course.

Measurement of behavioral change has been the focus of numerous studies as indicated by C. Robert Pace in a speech before the American Educational Research Association in 1968:

The years following World War I have been the years of tests and measurement, of individual differences, and selection and classification--the development of standardized achievement tests, group tests of intelligence, the measurement of interests, ability grouping in the schools, and psychometrics as a special field of knowledge and theory.22

21Jay C. Smith, "The Design and Trial of A Course Evaluation System Utilizing Student Opinions and Value Judgments" (unpublished M.Ed.
dissertation, University of Hawaii, 1969).

22C. Robert Pace, "Evaluation Perspectives: '68," transcript of speech delivered to the American Educational Research Association (AERA) presession (Chicago: February, 1968), p. 4. (Mimeographed.)

The instructor's self-evaluation of his performance, on the other hand, has not been the subject of extensive investigation. The definition of such self-evaluation and the identification of variables relative to that assessment is an enormously complex and difficult task. The work of Ryans has identified this evaluation activity as a component of teacher behavior. However, actual research on it is currently rare.

The third integrant, student evaluation, has often been a catalyst for controversy. Many educators have advocated that a vital and integral part of both course design and the instructional process should be the inputs provided by student evaluation.23 The basic idea is not new. As indicated in a recent Phi Delta Kappan editorial, the popularity of student evaluation and opinion as a part of educational procedures can be considered contemporary.

23An examination of contemporary literature adds credence to the above statement. For further examination of this thinking, see, among others: Agony and Promise, ed. by G. Kerry Smith (Washington, D.C.: American Association for Higher Education, 1969); Educational Evaluation: New Roles, New Means, ed. by Herman G. Richey (Chicago: The National Society for the Study of Education, 1969); and, Campus Tensions: Analysis and Recommendations (Report of the Special Committee on Campus Tensions), Sol M. Linowitz, American Council on Education, 1970.

It has taken some of us a long time to absorb the shock of facing a determined, aggressive, and articulate generation of students. Now we have begun to recognize the dimensions of the problem and especially the need for continuous candid conversations with the students about problems that matter to them. . . . Obviously
the real problem is not one of making concessions or of removing threats, but of establishing positive and wholesome patterns of relationships among all the persons involved in the education effort--including students.24

The intent of this study is to investigate one possible source of a contribution toward those "positive and wholesome patterns of relationships." Generally, professional educators desire to make changes and revisions necessary to make their own course meaningful and useful, on the one hand; and, on the other hand, they embrace the philosophy that change for the sake of change or change based on passing pressures is not sound pedagogy. Whatever the underlying causes may be for a lack of adequate positive relationships, it appears worthwhile to investigate the contribution(s) which certain forms of student evaluation can make either to student-instructor relationships or to student attitudes and cognitive achievement, or both.

Definition of Terms

Several terms used throughout the study are defined as follows:

24"Editorial: Will Campus Restlessness Lead to Improved Education?" Phi Delta Kappan, Acting Editor Donald W. Robinson, V. LII: #2 (October, 1970), p. 557.
Attitude

Opinion, attitude, belief: These terms do not have fixed meanings in the literature, but in general they refer to a person's preference for one or another side of a controversial matter in the public domain--a political issue, a religious idea, a moral position, an aesthetic taste, a certain practice (such as how to rear children).25

This study is concerned with a person's anticipated action; accordingly attitude as used here is specifically defined either as:

An attitude is a mental and neural state of readiness, organized through experience, exerting a directive or dynamic influence upon the individual's response to all objects and situations with which it is related.26

or,

An attitude is a mental disposition of the human individual to act for or against a definite object.27

Evaluation

To operationalize the term and make it more meaningful for the study, the definition by C. Robert Pace of the purpose and process of evaluation has been used:

Evaluation is seen as an instrument of reform . . . both an act and a result. The reason for evaluating any present activity or program is to improve it.28

25Bernard Berelson and Gary A. Steiner, Human Behavior: An Inventory of Scientific Findings (New York: Harcourt, Brace and World, Inc., 1964), p. 557.
26G. W. Allport, "Attitudes," Handbook of Social Psychology, ed. by C. M. Murchison (Worcester, Mass.: Clark University Press, 1935), pp. 798-844.
27D. D. Droba, "The Nature of Attitude," Journal of Social Psychology, V4 (1933), pp. 444-463.
28C. Robert Pace, op. cit., p. 3.

and, further,
Thus evaluation is a technique that can and should lead to the con- tinuous improvement of education.29 ‘ Feedback Feedback in this study is defined as "the reaction of some results of a process serving to alter or reinforce the character of that process."30 Two-Way Feedback is defined as feedback from student to teacher and teacher to student. Instructional Technology The terms, instructional technology and educational media are used interchangeably in the study being reported. The definition used is by the Commission on Instructional Technology: Instructional technology goes beyond any particular medium or device. In this sense, instructional tech- nology is more than the sum of its parts. It is a systematic way of designing, carrying out, and evaluating the total process of learning and teaching in terms of specific objectives, based on research in human learning and communication, and employing a combination of human 291bid., p. 9. 30The Random House Dictionary of the English Lan uage (unabridged edition) (New York: Random House, 9 . l6 and nonhuman resources to bring about more effective instruction.3l System The definition of system used in this study is the one by Schuller: A 'system' may be defined as any group of dynamically related components which operates in concert or in related fashion for the purpose of achieving a specified goal or set of goals.3 Overview The development and design of the evaluation system central to the study is outlined in detail in Chapter II. Chapter III contains the research design of the study includ- ing a discussion of the procedures, instruments and statis- tical analysis. The results of the experiment and an analysis of the data are reported in Chapter IV. The summary, conclusion, and implications for further research are presented in Chapter V. 31Commission on Instructional Technology, op. cit., p. 5. 32Charles F. 
Schuller, "Systems Approaches in Media and Their Application to Individualized Instruction at the University Level," Michigan State University, 1967, presented in part at Bucknell University Symposium, February, 1968. (Mimeographed.)

CHAPTER II

DEVELOPMENT AND DESIGN OF THE STUDENT COURSE EVALUATION SYSTEM1

The present chapter first reviews the two basic components included in the development of a student course evaluation system--namely, the evaluation process as a whole and systems analysis. This is followed by an explanation of the Student Course Evaluation Model designed for this study.

Review of Evaluation and Systems Analysis

Student course evaluation is a part of the general process of evaluation.

Most of us believe that all American boys and girls should have experiences that are maximally meaningful to them at the time, and that their judgments are necessary if we are to know what is meaningful.2

In the Theory section of Chapter I, it has been indicated that this study is generally concerned with teacher classroom behaviors and specifically concerned with pupil-teacher interactions (see Chapter I, Figure 1 and Figure 2). Three integrant parts of a course evaluation process have been identified: (1) those measurements assessing the pupil's behavioral change attributed to the content and/or instructional processes of the course; (2) the assessment made by the instructor of his own instructional performance; and (3) the assessment made by the student participants of the course.

1This Chapter is a more extensive and updated version of a Chapter of the same title that originally appeared in Jay C. Smith, "The Development and Trial of A Course Evaluation System Utilizing Student Opinion and Value Judgments" (unpublished M.Ed. dissertation, University of Hawaii, 1969).

2Stephen M. Corey, "A Perspective on Education Research," The Education Digest, 19:3 (November, 1963).
In order to establish a foundation for the development of a course evaluation system designed for integrant number three (student course evaluation), a review of the general evaluation process is necessary.

Evaluation as a Factor in Program Development

Phil C. Lange defines evaluation as follows:

Evaluation is the process of valuing something. It could be directed to the purpose of securing value judgments about the feasibility of a plan, or to the purpose of establishing the value of a plan according to some criteria, or it might appraise the observable outcomes of a program in accordance with the established purpose of that program, or it might be any of a large variety of other ways of valuing good intentions or actual procedures, or evident outcomes.3

The salience of evaluation in education today is evident in the spirit of reform and innovation which one feels in many segments of education--new curricula, new technologies, new administrative patterns, and new clientele to be served.

A search of the literature yielded much evidence and research pertaining to evaluation en masse. Information relative to the specific topic of student evaluation of courses was limited.

Prior to the 1930's, professional educators were primarily concerned with tests and measurements. The emphasis was on intelligence testing, standardized achievement tests, ability groupings, and psychometrics. As the measurement/evaluation field began to grow in maturity, a shifting concern from the confining limits of measurement to broader aspects of evaluation became evident.

3Phil C. Lange, "Evaluation: On the Process of Evaluation," paper prepared for the Experienced Teacher Fellowship Program, University of Hawaii, May, 1968, p. 1. (Mimeographed.)
As Pace points out, both the scope and the function of evaluation were expanded:

Evaluation accepted and welcomed the use of observations, interviews, check lists, questionnaires, testimony, the minutes of meetings, time logs, and many other relevant means of assembling information. It included psychometrics but held that psychometric theory was irrelevant in many evaluation activities. Evaluation thus freed itself from the arbitrary restrictions of the experimentalist's preoccupation with research design and hypothesis testing. Evaluation became related to group dynamics, action research, self-improvement, and to other 'movements' concerned with the processes of change and betterment. The reason for evaluating any present activity or program was to improve it. . . . Thus the process of carrying out an evaluation was directly related to achieving the purpose of evaluation--namely, the purpose of change and improvement.4

4C. Robert Pace, "Evaluation Perspectives: '68," transcript of speech delivered to the AERA Presession, Chicago, February, 1968, pp. 2-3. (Mimeographed.)

Objectives and Evaluation

Analogous with the development of evaluation as a process of reform was a developing concern with the specification and use of objectives. Ralph Tyler outlined the process of evaluation as (a) identifying general objectives; (b) defining objectives in behavioral terms; (c) identifying situations in which the behavior could be observed; (d) devising and applying instruments for making the observation; and (e) relating the obtained evidence to the professed objectives.5 As this process was applied, it was evident that the clarity of objectives and the relevance of measures had a direct impact on the clarity and relevance of instruction. Evaluation was thus a way to improve instruction.

David R.
Krathwohl, when speaking of objectives, states:

A major contribution of this [objectives] approach to curriculum building is that it forces the instructor to spell out his instructional goals in terms of overt behavior. This gives new detail; indeed it yields an operational definition of many previously general and often fuzzy and ill-defined objectives.6

The concept of evaluation as intimately related to the objectives of instruction led Benjamin Bloom and others to construct the Taxonomy of Educational Objectives,7 a taxonomy which is equally relevant for the classification of objectives and the construction of test items.

Viewed both in retrospect and contemporaneously, then, the emphasis is quite clear:

Why do we evaluate? One very clear reason is in order to judge the effectiveness of an educational program. We undertake to evaluate a program because we hope thereby to improve it. By knowing its strengths and weaknesses we are enabled to plan more intelligently for its improvement. Thus evaluation is a technique that can and should lead to the continuous improvement of education.8

Finally, it can be said that evaluation is a cycle which involves clarifying objectives, measuring the attainment of objectives, and adapting teaching methods and materials to facilitate the better attainment of objectives. This cycle of continuous evaluation should be a powerful method for the improvement of curricula, the improvement of instruction, and the improvement of testing.

Evaluation, as Tyler has indicated, is a process. Pace has identified it as a process leading to improved performance.

5Ralph W. Tyler, "Translating Youth Needs Into Teaching Goals," Fifty-Second Yearbook of the National Society for the Study of Education, Part I (Chicago: University of Chicago Press, 1953), passim.

6David R. Krathwohl, "Stating Objectives Appropriately for Program, for Curriculum, and for Instructional Materials Development," Journal of Teacher Education, March, 1965, p. 20.
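The cycle just described (clarify objectives, measure attainment, adapt methods and materials) can be sketched as a short loop. The objective names, passing thresholds, and scores below are hypothetical illustrations only, not data or procedures from the study:

```python
# A minimal sketch of the evaluation cycle:
# clarify objectives -> measure attainment -> adapt instruction.
# All names and numbers here are invented for illustration.

def measure_attainment(objectives, scores):
    """Return the fraction of students meeting each objective's threshold."""
    return {
        obj: sum(1 for s in scores[obj] if s >= threshold) / len(scores[obj])
        for obj, threshold in objectives.items()
    }

def adapt_instruction(attainment, criterion=0.8):
    """Flag objectives whose attainment falls below the chosen criterion."""
    return [obj for obj, rate in attainment.items() if rate < criterion]

# One pass through the cycle with invented data:
objectives = {"operate projector": 70, "select media": 60}   # passing scores
scores = {"operate projector": [65, 80, 90], "select media": [75, 85, 95]}

attainment = measure_attainment(objectives, scores)
needs_revision = adapt_instruction(attainment)
print(needs_revision)   # objectives to revise before the next cycle
```

Each pass through the loop feeds the next: the flagged objectives become the focus of revised methods and materials, after which attainment is measured again.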
By implication, then, the process of course evaluation should lead to an improved course and, logically, to improved instruction of a course. Richard L. Turner, Associate Dean for Research at Indiana University, has recently commented on what "good" teaching means:

Good teaching is judged by its outcomes. There are many possible kinds of outcomes to college teaching: student reports or evaluations, factual knowledge, ability to think in a content area, ability to integrate content areas, one's orientation toward the utility of knowledge, his selection of an occupation, his arousal to militant action, his interpersonal sensitivity, and so on. Such outcomes may be differentially valued. Different criterial weights may be assigned to them. The act of assigning value-weights in a situation is the procedure by which what is meant by 'good' in that situation is determined.9

In the same article, Turner emphasizes the need for systematic evaluation of instruction:

Indeed, if there has been a failure in our efforts to improve teaching, and one suspects there has been, it is because we have neglected to systematically evaluate and painstakingly isolate those variables . . . which hold the key to our successes.10

One method of systematizing and studying a process (such as evaluation) is by the application of systems analysis.

SYSTEMS ANALYSIS

System, as defined by Webster's Unabridged Dictionary, is "a regularly interacting or interdependent group of items forming a unified whole."11 An analysis of this definition reveals that the key words in the definition are: regularly; interacting or interdependent; and, unified whole.

A review of the extensive literature relating to systems analysis shows that there is no single definition

7B. S. Bloom, M. D. Englehart, E. J. Furst, W. H. Hill, and D. R. Krathwohl, eds., Taxonomy of Educational Objectives, Handbook I: Cognitive Domain (New York: McKay, 1956).

8Pace, op. cit., p. 28.

9Richard L.
Turner, "Good Teaching and Its Contexts," Phi Delta Kappan, LII, No. 3 (November, 1970), p. 155.

10Ibid., p. 158.

11Webster's Unabridged Dictionary, 1968.

of it. Harry J. Hartley says of systems analysis: "It is fairly apparent that the term, systems analysis, possesses nearly as many definitions as there are persons who advocate its use. It is a prestigious term used by many in a casual fashion."12 He then goes on to define systems analysis:

The concept of systems analysis may be defined as an orderly way of identifying and ordering the differentiated components, relationships, processes, and other properties of anything that may be conceived as an integrative whole.13

Hall and Fagen define systems simply: "A system is a set of objects together with relationships between the objects and between their attributes."14 John Pfeiffer defines systems analysis as " . . . a disciplined way . . . to analyze as precisely as possible sets of activities whose interrelationships are very complicated, and of formulating comprehensive and flexible plans, on the basis of the analysis."15 A further definition for consideration may be the one from Kaufman, "The sum total of separate parts working independently, and in interaction to achieve a previously specified objective."16

12Harry J. Hartley, Educational Planning, Programming, Budgeting: A Systems Approach (Englewood Cliffs: Prentice-Hall, Inc., 1968), p. 24.

13Ibid., p. 23.

14A. D. Hall and R. E. Fagen, "Definition of System," General Systems, Yearbook of the Society for General Systems Research, Vol. 1 (1956), p. 23.

15John Pfeiffer, New Look at Education: Systems Analysis in Our Schools and Colleges (New York City: Odyssey Press, 1968), p. 2.
Barson and Heinich define systems in the context of instruction as:

An Instructional System is a complex consisting of the following components: Learner(s) and a combination of instructor(s), materials, machine(s) and technician(s), given certain inputs and designed to carry out a prescribed set of operations. This set of operations is devised and ordered according to the most recent and pertinent evidence from research and expert opinion such that the probability of attaining the output, specified behavioral changes in the components, is maximal.17

A consensus of definition is that a system is composed of the parts of a whole, working in relationship to accomplish the purposes (or tasks) of the whole.

A further consensus among practitioners of systems analysis is that there are definite limitations to the systems analysis approach. Chief among these limitations, as stated by Pfeiffer, is that "the systems approach is not a set, established thing with clear-cut rules to follow in dealing with all problems."18 Or by Hartley, " . . . systems analysis should be viewed, not in a narrow

16Roger A. Kaufman, "A Systems Approach to Education: Derivation and Definition," Audio Visual Communication Review, 16:4 (Washington, D.C.: Department of Audiovisual Instruction, 1968), p. 419.

17John Barson and R. Heinich, "The Systems Approach to Instruction," Department of Audiovisual Instruction 1966 Convention, San Diego (audiotape) (Boulder: National Tape Repository, University of Colorado). Also reported in part in "The Systems Approach," Audiovisual Instruction, 11 (1966), pp. 431-433.

18Pfeiffer, op. cit., p. 3.
context, but in a broad sense as a planning procedure for relating curricular objectives to human and material resources," and, further, "The value of a systems approach is that it formalizes what takes place in the framework of the management decision process at any jurisdictional level."19

Insofar as the purpose of this study is concerned, systems analysis can be defined and limited to a framework for an analysis of the working relationships inherent to the task, which are regularly performed to accomplish the purpose--stated or implied--of the task.20 The systems analysis approach then, at the risk of oversimplification, is a model for the way things are done.

Accepting the definition that systems analysis is a model for the way things are done, a logical question that might be asked is, "Why use a systems analysis approach at all?" According to one practitioner of the art,21 "Our trade is to help people make decisions." The essential power of the approach is that it offers a solid objective foundation for decisions. Referring once again to Pfeiffer:

Indeed, the systems approach concerns itself above all with the nature of decision making. Intangibles have always played a leading role in the process, and there is no substitute for judgment, the unique contribution of the man shaping major policies. He is always on his own when the chips are down. No one can help him at the moment of decision, when he selects one course of action over another.

19Hartley, op. cit., p. 51.

20In the study being reported the "task" is the development of a student course evaluation system.

21Special note should be made that systems analysis is generally regarded as an art and not a science. Not once in all the literature reviewed for this study has a formula been found that has labeled itself "this is the way it is to be done."
Before he reaches this state, the systems approach comes in to provide guidelines and evaluations, on the theory that a combination of his judgment and an analysis drawing on the advanced technology of assessment may be more effective than either alone.22

Most professional educators concur that learning is a system of interacting variables requiring decision making at several levels (objectives, procedures, materials, evaluation) by several component members of the system (learner, teacher, curriculum designer). Yet too often:

Too many professional educators view the notion of a systems approach, which has been borrowed from engineering and industry, as harsh and ominous in its implications for the management of instructional processes. But instructional planning in modern educational institutions cannot be conducted on a piecemeal basis without some effort toward a rational and efficient deployment of human and technical resources. Consequently, the use of the systems concept is intellectually and practically inescapable.23

In summary, systems analysis provides an intellectual technique for unifying the diverse activities of instruction (and evaluation of instruction) in a logically consistent fashion; and then, using that technique, coupled with other educational techniques of problem solving, to answer questions relative to those diverse activities.

22Pfeiffer, op. cit., p. 3.

23Donald K. Stewart, "A Learning Systems Concept as Applied to Courses in Education and Training," in Educational Media: Theory Into Practice, ed. by R. V. Wiman and W. C. Meierhenry (Columbus, Ohio: Charles E. Merrill Publishing Co., 1969), p. 137.
Design of the Model for Student Course Evaluation

The Process of Systems

Based on the premise that the systems analysis approach is a logical method of problem solving and that evaluation is a process of valuing leading to reform, the next step is to analyze the relationships of the two (systems analysis and evaluation) and then design a model utilizing pertinent features of both processes.

Systematic thinking is logical thinking. By expanding the options and reducing uncertainties, the systems analyst increases the probability in his favor. The range of potential application of this concept is nearly unlimited. . . . Its major virtue is the enhancement of human judgment.24

The literature relevant to instructional design is filled with models of one sort or another. Without exception, this investigator has been unable to find a single model of an instructional system that did not have as one of its subsystems--direct or implied--the need for evaluation. By the same token, little information has been found pertaining to a systematic way of evaluating, i.e., a systems model for the evaluation subsystem of course design.

John Pfeiffer, in his book New Look at Education: Systems Analysis in Our Schools and Colleges, a report of a survey sponsored by Educational Testing Service of Princeton, New Jersey, identifies three basic features or elements in a systems approach: (a) Design for Action; (b) Seeking Alternatives; and (c) Evaluation.25 He defines element one, Design for Action, as being able to "ask the right questions." He goes on to state that the first task in dealing with problems is to:

Identify exactly what has to be done, which means defining objectives and--more than that--defining objectives in operational terms, in ways that demand concrete action.26

Criteria are then selected which measure how well the objectives are being met and determine when those objectives have been reached.

24Hartley, op. cit., p. 43.
The second element, Seeking Alternatives, calls for the identification and spelling out of different methods of meeting each objective. "This is an active not a passive step. There must be an organized effort to search out alternatives, perhaps the most important and creative phase of systems analysis."27 The final element, Evaluation, involves the measurement of the alternatives selected in element two (Seeking Alternatives) and the comparative benefits of each in light of the operational objectives designated in element one (Design for Action):

Alternatives are generally evaluated in numerical terms, . . . but qualitative factors are always to be considered along with quantitative factors; there are always political implications, questions of morale, and other effects, which may not be measurable in satisfactory terms.28

To facilitate the understanding of a systems process and the three elements of the process outlined by Pfeiffer, an analysis of a representative system which appears relevant to the purpose of the study might prove useful. Figure 3 is a Model of a Systems Approach to Course Design formulated by John Barson and others at Michigan State University.

The first level of the system (Innovation, Analysis, and Objectives) is representative of Pfeiffer's Element One, Design for Action. This level requires the asking of questions (Innovation and Analysis) relative to the task at hand. The formulation of objectives and setting of criteria are implied in both steps, Analysis and Objectives.

Pfeiffer's Element Two, Seeking Alternatives, is illustrated by the steps: Strategy, Content, Examples, Media Forms, Search, Produce and Implement. This is the action phase of the systems approach. It is also a critical phase of the process which:

. . . demands open-mindedness and readiness to discard preconceived notions. Furthermore, the alternatives

25Pfeiffer, op. cit., p. 4.

26Ibid., p. 5.

27Ibid., p. 5.

28Ibid., p. 5.
[Figure 3 diagram: Innovation → Analysis → Objectives → Strategy; Strategy branches to Content, Examples, Media Forms, and a Flow Chart; Media Forms → Search → Produce → Implement.]

Figure 3.--Systems Approach to Course Design (John Barson, et al., Michigan State University).

may be combined in different ways and each combination represents a possible plan, a set of activities which may bring about a desired set of changes.29

Barson reports that in the system illustrated (see Figure 3), the alternatives revolving around Examples are proving to be crucial and sometimes elusive. Implied by such a report, and indeed, inherent to the system as a whole, is the need for Evaluation, the third Element in Pfeiffer's analysis of the systems approach. Finally, evaluation is a repetitive process. A plan must be monitored to check its current effectiveness, modified if necessary, checked again, remodified, and so on.30

Figure 4 is the model designed by this investigator to represent the systems analysis approach utilized in this study to evaluate an instructional course by the use of student opinion and value judgments. It should be noted that the model is limited to the concern of this study and, as such, makes no attempt to detail the planning nor the rationale that may have gone into the initial formulation of each component part. This model presupposes that appropriate thought concerning the design of the course had taken place and that the evaluation process would build on prior work. It should be noted, further, that this model is intended as a subsystem within a subsystem. In other words, this model is a systematic approach to

29Ibid., p. 5.

30Ibid., p. 6.

Figure 4.--Smith Model for Course Evaluation Utilizing Student Opinion and Value Judgments. [Diagram illegible in this copy.]
[Design chart: Groups A, B, and C each received a Pretest (1. NEMA, 2. A Test for Audiovisual Media), a Treatment, and a Posttest (1. NEMA, 2. A Test for Audiovisual Media).]

(usually the last three numbers of the subject's social security number). This "unique" number was entered on every test instrument and used as an identification number by the researcher.

4. Each subject wrote his unique number on a 3" by 5" file card and inserted the card in a sealed envelope. At the outset of the administration of an instrument the envelopes were distributed to the subjects by name. The subject broke the seal on the envelope and entered his unique number on the instrument form. He then placed the 3" by 5" card with his unique number into a second envelope. This procedure was repeated at the administration of each evaluation instrument.

5. The researcher added a fourth digit to each unique number in order to identify subjects by groups.

6. All test instruments were machine scored by the Western Michigan University Testing Service. Standard answer forms and testing procedures were used.

Differences in Treatments

Treatment A

This Treatment consisted of a replication of the procedures outlined by Smith6 and detailed in Chapter II of this study. Treatment A can be characterized as a system designed to provide student evaluative data to be used by an instructor for course improvement. The system is linear and static involving one-way (student-to-teacher) feedback at fixed intervals.

6Jay C. Smith, "Design . . . ," op. cit., Chapter II.
Figure 4 (page 32) is a graphic representation of the Smith model for course evaluation utilizing student opinion and value judgments. Briefly, the student evaluation procedures involve three levels: (A) Analysis, (B) Measurement and (C) Action.

Level A, Analysis, consists of a statement of the purposes and objectives of the course, specification of student terminal behaviors, selection of course instructional procedures, determination of evaluation indicators, and analysis of course population demographic data.

Level B, Measurement, involves the collection of evaluative data from students. Both informal and formal measurement techniques are used. Questionnaires and opinionnaires are constructed from information derived from the analysis procedures that constitute Level A. (The formal measurements used during the summer of 1970 with Teacher Education 548: Audiovisual Media, Western Michigan University, are contained in Appendix D.) Measurements in the replication of the 1969 study (research Treatment A) were administered at the end of the first week of instruction, at the end of the fifth week of instruction and at the end of the tenth week of instruction.

Level C, Action, occurs at the end of the instructional period. This level includes the formulation of alternatives for action, a decision regarding course modifications, and implementation of course modifications.

Treatment B

Treatment B consisted of the same procedures as Treatment A with the exception of a modification of the system to allow systematic two-way feedback. Treatment B can be characterized as a system for course evaluation that is looped and dynamic involving two-way feedback at fixed but more frequent intervals. Formal measurements were conducted at the end of every second class session as opposed to the first, fifth, and tenth weeks as with Treatment A.
Instead of the student evaluative feedback being one-way (student-to-teacher), the feedback in Treatment B was two-way (student-to-teacher and teacher-to-student). Because of the above modifications in the feedback variable, Level C, Action, was not limited to action only at the completion of the instruction but occurred throughout the term of instruction. Figure 7 is a graphical comparison of Treatment A with Treatment B.

Treatment C

Treatment C consisted of no treatment and thus Group C was the control in the research design.

Figure 7.--Comparison of Treatment A with Treatment B.*

Treatment A (Course Evaluation System):
1. Model static; linear.
2. Three steps: Analysis, Measurement, Action.
3. Feedback to students: none; closed loop.
4. Current course revision: at point 3.
5. Future course revision: at point 9.
6. Feedback from students: at points 4, 6, 7; culminated at point 8.

Treatment B (Course Evaluation System Revised):
1. Model dynamic; looped.
2. Three steps: Analysis, Measurement, Action.
3. Feedback to students: at points 2 A, B; 3; 4; 6.
4. Current course revision: at points 2 A, B; 3; 4; 6; 7.
5. Future course revision: at point 9.
6. Feedback from students: at points 2 A, B; 3; 4; 6; 7; dispersed at each point and culminated at point 8.

*Numbers refer to points in the Course Evaluation System (Figure 6).

Instrumentation

Cognitive

The cognitive test (see Appendix B) was written by the course instructor. The test consisted of forty-five items based on material covered during the instructional period. The test was submitted to a panel of three qualified authorities in audiovisual media prior to its use.7 With the exception of two items which were modified, the panel agreed that the instrument was valid. The test was given at the first and last class sessions.

Attitude

The instrument used to measure attitude toward instructional technology in the study being reported was the New Educational Media Attitude (NEMA) inventory.
The original instrument was designed by Ramsey to test the hypothesis that "curriculum and supervisory personnel, and audiovisual workers, have significantly different mean scores on a measurement of attitude toward the uses of newer educational media."8 The outcome of the research was, however, that "The research provided an instrument useful in discriminating between individuals possessing attitudes hostile to or in sympathy with the uses of newer educational media for instructional purposes."9

This instrument was used by Guba and Snyder in their research on MPATI telecasts.10 They attempted to measure generalized attitudes toward media with the instrument developed by Ramsey. Guba and Snyder found, however, that:

The original form of the instrument was judged unsuitable for direct use because its terminology seemed oriented toward the older audiovisual devices and because some of the item content was deemed unsuited to the audience at hand. Accordingly, the number of items which were retained were rewritten . . . to give wider and more current meaning to the items.11

In the final version of their study, Guba and Snyder used twenty-three items. Hudspeth12 further modified the instrument by substituting the word "students" for the word "children" in questions 7, 11, and 18. The Hudspeth version of the instrument was used intact in the study being reported and is contained in Appendix A. Although Hudspeth reports no reliability figures as to the instrument, Guba and Snyder report a reliability of r=0.85.13

7The Panel members were Dr. David Curl, Professor of Teacher Education, Western Michigan University; Dr. Ken Dickey, Associate Professor of Teacher Education, Western Michigan University; and Mr. Fred Brail, Assistant Director of the Educational Resources Center, Western Michigan University.

8Ramsey, op. cit., p. 3.

9Ibid., p. 12.

10Guba and Snyder, op. cit.

11Ibid., p. 59.

12DeLayne R.
Hudspeth, "A Study of Belief Systems and Acceptance of New Educational Media with Users and Non-Users of Audiovisual Graphics" (unpublished Ph.D. dissertation, Michigan State University, 1966).

13Guba and Snyder, op. cit., p. 12.

In a later study, Margoles14 reports a similar correlation of r=0.86. Table 1 shows the baseline data on the New Educational Media Attitude inventory as provided by the three studies cited.

TABLE 1.--Baseline Data for NEMA.*

Item          Guba-Snyder   Hudspeth   Margoles
n                 573           36         70
m                67.1         64.8       71.4
SD               15.9         --**       17.2
reliability      0.85         --**       0.86

*New Educational Media Attitude inventory.
**Information not given.

The instrument is scored on a six-point Likert scale ranging from "1--agree strongly" to "6--disagree strongly." In order to avoid response set, some items in the instrument are phrased negatively. These items were reverse scored in arriving at a total score. High total scores for subjects indicate an unfavorable attitude toward educational media. Low total scores indicate a favorable attitude.

14Richard A. Margoles, "A Study of Media Use Attitudes, and Barriers as Measurements for Evaluating the Influence of Extra-Media Support Service on Faculty Teaching in Large Classrooms" (unpublished Ph.D. dissertation, Michigan State University, 1969).

Statistical Analysis

The data were analyzed statistically by using a one-way multivariate analysis of covariance procedure. The multivariate analysis of covariance procedure was used because it took into account both the NEMA and cognitive test scores simultaneously. The multivariate technique is appropriate because it takes into account the statistical interdependence of the two measures (NEMA and A Cognitive Test for Audiovisual Media) which were taken on the same subjects at the same point in time.

The analysis of covariance technique has several advantages. Primary among these is that the procedure is suited to the analysis of data from intact, non-matched groups.
The era of exhaustive person-to-person matching appears now to be over, for analysis of covariance achieves the same results without the testing and discarding of numerous Ss in search of matched pairs. Because it is so superior and efficient and involves no computational effort now that standard programs are available on computers, analysis of covariance is rapidly replacing the older matching technique.15

A second advantage of the analysis of covariance technique is that:

Like analysis of variance, the model from which it is derived, analysis of covariance can be used in both single-classification form, that is when there is only one independent variable, and multiple-classification form, when there are two or more independent variables.16

15Deobold B. VanDalen, Understanding Educational Research: An Introduction (rev. ed.; New York: McGraw-Hill, 1966), p. 259.

In the study being reported the research Treatments A (student course evaluation), B (systematic two-way feedback) and C (control) are the independent variables, and attitudes toward instructional technology and cognitive performance in instructional technology instruction are the dependent variables.

It is convenient in analysis of covariance problems to speak of the dependent variable as the criterion variable and the relevant variable(s), for which we wish to make adjustments, as the control variable(s) . . . . The rationale underlying analysis of covariance involves a combination of analysis of variance and regression concepts. In its most basic form, we might think of analysis of covariance first determining the magnitude and direction of the relationship between the control variable(s) and the criterion variable(s). Having determined this, the procedure then statistically readjusts each criterion score, through a regression prediction technique, so that the scores compensate for whatever control variable disparity exists between the independent variable groups.
Having done this, the adjusted scores are then subjected to an analysis of variance which tests for mean differences by identifying the amount of variation resulting from differences between the groups. An F ratio is produced which is interpreted in the usual manner. Finally, the actual means achieved may be adjusted to compensate for differences on the control variable(s).17

16James W. Popham, Educational Statistics: Use and Interpretation (New York: Harper and Row, Publishers, 1967), p. 224.

17Ibid., pp. 224-225.

A third advantage of the analysis of covariance procedure in educational research has to do with its precision. As stated by Campbell and Stanley:

Since the great bulk of educational experiments show no significant difference, and hence are frequently not reported, the use of this more precise analysis (analysis of covariance) would seem highly desirable.18

In the study being reported, the one-way multivariate analysis of covariance was computed by using the Control Data Corporation 3600 computer at Michigan State University. The data were input to the program Multivariate Analysis of Variance (Analysis of Covariance), programmed by Jeremy Finn of the State University of New York at Buffalo and adapted for the Michigan State University Control Data Corporation 3600 by William H. Schmidt.

The probability level selected for rejecting the null hypotheses was the .05 alpha level. "It has been conventional in behavioral science research work to use the 0.05 level of significance."19 Choosing the .05 alpha level sets the probability of finding differences due to chance at 5 in 100.

Statistical Hypotheses

To determine the effect of student course evaluation and systematic two-way feedback on attitudes toward instructional technology and their effect on cognitive performance in a graduate course in instructional technology, four statistical hypotheses were generated and tested.
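Popham's description of the covariance adjustment can be illustrated with a minimal sketch. The data here are hypothetical, and for brevity the regression slope is estimated from all subjects pooled together rather than from the pooled within-groups regression a textbook ANCOVA would use; the study itself relied on Finn's program, not code like this:

```python
# Sketch of the covariance-adjustment rationale: estimate the regression of
# the criterion (posttest) on the control variable (pretest), then move each
# group's criterion mean to compensate for its pretest disparity from the
# grand pretest mean.

def adjusted_means(groups):
    """groups: dict of group name -> list of (pretest, posttest) pairs."""
    pairs = [p for g in groups.values() for p in g]
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n  # grand pretest mean
    my = sum(y for _, y in pairs) / n
    # Slope of posttest on pretest over all subjects (simplification).
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    b = sxy / sxx
    out = {}
    for name, g in groups.items():
        gx = sum(x for x, _ in g) / len(g)
        gy = sum(y for _, y in g) / len(g)
        # Each group mean is shifted in proportion to how far its pretest
        # mean sits from the grand pretest mean.
        out[name] = gy - b * (gx - mx)
    return out

# Group B starts 4 points ahead on the pretest and ends 4 points ahead on
# the posttest; after adjustment the apparent advantage disappears.
print(adjusted_means({"A": [(10, 20), (12, 22)],
                      "B": [(14, 24), (16, 26)]}))  # -> {'A': 23.0, 'B': 23.0}
```

The adjusted means are then what an ordinary analysis of variance compares, producing the F ratio described above.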
Each null hypothesis tested is presented first, followed by an accompanying alternate hypothesis.

18Campbell and Stanley, "Experimental and Quasi-Experimental Designs for Research," op. cit., p. 23.

19Fred N. Kerlinger, Foundations of Behavioral Research (New York: Holt, Rinehart and Winston, Inc., 1964), p. 169.

Effect of Student Course Evaluation

Null Hypothesis 1a: When given the opportunity to evaluate systematically a course of instruction, students' level of cognitive performance in that course will not be greater than without that opportunity.

Alternate Hypothesis 1a: When given the opportunity to evaluate systematically a course of instruction, students' level of cognitive performance in that course will be greater than without that opportunity.

Null Hypothesis 1b: When given the opportunity to evaluate systematically a course of instruction, students' level of attitude toward the content of the course will not be more positive than without that opportunity.

Alternate Hypothesis 1b: When given the opportunity to evaluate systematically a course of instruction, students' level of attitude toward the content of the course will be more positive than without that opportunity.

Effect of Systematic Two-Way Feedback

Null Hypothesis 2a: When given the opportunity for systematic two-way feedback on a course of instruction, students' level of cognitive performance in that course will not be greater than without that opportunity.

Alternate Hypothesis 2a: When given the opportunity for systematic two-way feedback on a course of instruction, students' level of cognitive performance in that course will be greater than without that opportunity.

Null Hypothesis 2b: When given the opportunity for systematic two-way feedback on a course of instruction, students' level of attitude toward the content of the course will not be more positive than without that opportunity.
Alternate Hypothesis 2b: When given the opportunity for systematic two-way feedback on a course of instruction, students' level of attitude toward the content of the course will be more positive than without that opportunity.

Summary

Three sections of Teacher Education 548: Audiovisual Media were offered during the summer of 1970 at Western Michigan University. Each of the sections was randomly assigned to one of three research treatments. Treatment A consisted of a replication of procedures developed by Smith20 for course evaluation utilizing student opinions and value judgments. Treatment B consisted of application of the same procedures with modifications involving systematic two-way feedback. Treatment C was a control. Subjects in each treatment group were given, pre and post treatment, the New Educational Media Attitude inventory and an instructor-written cognitive test, A Test for Audiovisual Media. The experimental design used in the study was a Pretest-Posttest Control Group Design.

To determine the effect of student course evaluation and systematic two-way feedback on attitudes toward instructional technology and their effect on cognitive performance in a graduate course in instructional technology, four statistical hypotheses were generated. The hypotheses were tested using the one-way multivariate analysis of covariance procedure. The probability level selected for rejecting the null hypotheses was the .05 alpha level.

20Jay C. Smith, "Design . . . ," op. cit., Chapter II.

CHAPTER IV

ANALYSIS OF DATA

Analysis of Data

The statistical hypotheses were tested using a one-way multivariate analysis of covariance procedure. Scores on a cognitive test (scored as the number of items correct) and scores on an attitude inventory (on which a low score indicates the more positive attitude) were used as the dependent variables. The independent variable was membership in one of the three treatment groups.
All hypotheses were tested using the .05 alpha level with the appropriate degrees of freedom. Statistical data are contained in Appendix E. A summary of the analysis is reported in Table 2 and Table 3. Following the tables, each of the null hypotheses is stated and the related data presented.

The multivariate analysis of covariance test of equality of mean vectors yielded an F-ratio of 9.25 (degrees of freedom 4 and 152) which was significant at the P=.0001 level. While this does not locate the source of the difference between groups, it does indicate that at least one treatment condition did have a significant influence on either the NEMA posttest and/or the Cognitive posttest. The appropriate subsequent analyses were conducted so that the exact source of treatment influence could be identified.

TABLE 2.--Group Means.1

Research Treatment2   NEMA3 Pretest   NEMA Posttest   COGNITIVE Pretest   COGNITIVE Posttest
A                         --*             48.07              --*                29.03
B                         --*             42.37              --*                29.00
C                         --*             52.31              --*                25.86

1Rows are groups; columns are variables. Posttest means are as reported in the text of this chapter; pretest means (--*) are not legible in the available copy.
2Research Treatment A: replication of procedures developed by Smith for course evaluation utilizing student opinions and value judgments; Research Treatment B: application of the same procedures with modifications involving systematic two-way feedback; Research Treatment C: no treatment--control.
3NEMA: New Educational Media Attitude inventory; low scores indicate favorable attitude (see Appendix A). Cognitive test: A Test for Audiovisual Media (see Appendix B).

TABLE 3.--Multivariate Analysis of Covariance.

Variable          F-value   Probability
NEMA post          11.79      0.0001
COGNITIVE post      6.98      0.0017

Degrees of Freedom for Hypothesis: 2
Degrees of Freedom for Error: 77

The analysis of covariance on posttest measures is presented in Table 3.
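The decision rule applied to the univariate follow-ups in Table 3 is simply a comparison of each observed probability with the chosen alpha level, sketched below with the values reported in the table:

```python
# Decision rule for the Table 3 univariate follow-ups: reject a null
# hypothesis when its observed probability falls below the .05 alpha level
# selected in Chapter III.

ALPHA = 0.05
posttests = {"NEMA post": 0.0001, "COGNITIVE post": 0.0017}  # p-values, Table 3

for variable, p in posttests.items():
    decision = "reject H0" if p < ALPHA else "fail to reject H0"
    print(f"{variable}: p = {p} -> {decision}")  # both measures reject H0
```

Since both probabilities fall well below .05, both posttest measures contribute to the overall multivariate significance.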
Hypotheses

Effect of Student Course Evaluation

Null Hypothesis 1a: When given the opportunity to evaluate systematically a course of instruction, students' level of cognitive performance in that course will not be greater than without that opportunity.

Alternate Hypothesis 1a: When given the opportunity to evaluate systematically a course of instruction, students' level of cognitive performance in that course will be greater than without that opportunity.

The one-way multivariate analysis of covariance on the cognitive measure produced an F-value of 6.98 and P=0.0017 (see Table 3). Therefore, at the .05 alpha level, the null hypothesis is rejected.

Null Hypothesis 1b: When given the opportunity to evaluate systematically a course of instruction, students' level of attitude toward the content of the course will not be more positive than without that opportunity.

Alternate Hypothesis 1b: When given the opportunity to evaluate systematically a course of instruction, students' level of attitude toward the content of the course will be more positive than without that opportunity.

The one-way multivariate analysis of covariance procedure on the attitude measure produced an F-value of 11.79 and P=0.0001 (see Table 3). At the .05 alpha level, the null hypothesis is rejected.

Effect of Systematic Two-Way Feedback

To compare treatments and statistically compute the effect of systematic two-way feedback (Research Treatment Group B) against non-systematic one-way feedback (Research Treatment Group A), and/or against no feedback (Research Treatment Group C), an Incidental or Post-hoc Comparison in Data was computed. This technique for comparisons

is applicable to the situation where a preliminary analysis of variance and F test has shown over-all significance . . . if the experimenter has found evidence for over-all significance among his experimental groups, he may use this method of post-hoc comparisons to evaluate any comparisons among means.1
Using the technique outlined by Hays2 (see Appendix E), a critical difference of 3.16 in mean scores was determined as being significant between groups.

Null Hypothesis 2a: When given the opportunity for systematic two-way feedback on a course of instruction, students' level of cognitive performance in that course will not be greater than without that opportunity.

Alternate Hypothesis 2a: When given the opportunity for systematic two-way feedback on a course of instruction, students' level of cognitive performance in that course will be greater than without that opportunity.

1William L. Hays, Statistics (New York: Holt, Rinehart and Winston, 1963), p. 483.

2Ibid., pp. 483-485.

The cognitive posttest mean score for Treatment A was x̄=29.03 (see Table 2); for Treatment B, x̄=29.00; and for Treatment C, x̄=25.86. There was not a difference in mean scores of 3.16 between Treatment A and Treatment B, nor between Treatment B and Treatment C. There was a difference of 3.17 between Treatment A and Treatment C. Because the difference between Treatment B (two-way feedback) and Treatment C (control) did not reach 3.16, the null hypothesis, as stated, cannot be rejected.

Null Hypothesis 2b: When given the opportunity for systematic two-way feedback on a course of instruction, students' level of attitude toward the content of the course will not be more positive than without that opportunity.

Alternate Hypothesis 2b: When given the opportunity for systematic two-way feedback on a course of instruction, students' level of attitude toward the content of the course will be more positive than without that opportunity.

The New Educational Media Attitude (NEMA) inventory posttest mean score for Treatment A was x̄=48.07 (see Table 2); for Treatment B, x̄=42.37; and for Treatment C, x̄=52.31. The mean score difference between Treatment A and Treatment B was 5.70, and between Treatment A and Treatment C, -4.24. The mean score difference between Treatment B and Treatment C was -9.94.
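These pairwise comparisons can be checked with a short sketch. The posttest means are as reported in the text, and the 3.16 critical difference from Hays' procedure is taken as given; a pair differs "significantly" here when the absolute mean difference meets or exceeds that value:

```python
# Post-hoc comparison check: absolute pairwise mean differences versus the
# critical difference of 3.16 computed from Hays' procedure (Appendix E).

CRITICAL = 3.16
cognitive = {"A": 29.03, "B": 29.00, "C": 25.86}
nema = {"A": 48.07, "B": 42.37, "C": 52.31}  # low score = more positive attitude

def significant_pairs(means, critical=CRITICAL):
    """Return {'A-B': bool, ...}: does each pair reach the critical difference?"""
    names = sorted(means)
    return {f"{i}-{j}": abs(means[i] - means[j]) >= critical
            for k, i in enumerate(names) for j in names[k + 1:]}

print(significant_pairs(cognitive))  # only the A-C pair reaches 3.16
print(significant_pairs(nema))       # every pairwise difference exceeds 3.16
```

The sketch reproduces the pattern reported here: for the cognitive posttest only A versus C (3.17) reaches the critical value, while every NEMA pairwise difference (5.70, 4.24, 9.94 in absolute value) exceeds it.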
On the NEMA inventory, a low score is indicative of positive attitude toward media. Since the differences between mean scores for all research treatments were greater than the post-hoc comparison critical difference of 3.16, the null hypothesis is rejected.

Summary

Four statistical hypotheses were generated and tested: Two of the hypotheses were designed to determine the effect of student course evaluation on cognitive performance in a graduate course in instructional technology and its effect on attitudes toward instructional technology. Two additional hypotheses were designed to determine the effect of systematic two-way feedback on cognitive performance in a graduate course in instructional technology and its effect on attitudes toward instructional technology.

A one-way multivariate analysis of covariance procedure was used to test Hypotheses 1a and 1b for significance at an alpha level of .05. A Post-hoc Comparison in Data technique was used to test Hypotheses 2a and 2b. A summary of the results of the statistical analysis is presented in the following tables. A discussion of the findings and their implications will be found in Chapter V.
TABLE 4.--Summary of Results.

Null Hypothesis                                                  Statement of Rejection or Non-rejection

1a. When given the opportunity to evaluate systematically
a course of instruction, students' level of cognitive
performance in that course will not be greater than
without that opportunity.                                        Rejection*

1b. When given the opportunity to evaluate systematically
a course of instruction, students' level of attitude toward
the content of the course will not be more positive than
without that opportunity.                                        Rejection*

2a. When given the opportunity for systematic two-way
feedback on a course of instruction, students' level of
cognitive performance in that course will not be greater
than without that opportunity.                                   Non-rejection

2b. When given the opportunity for systematic two-way
feedback on a course of instruction, students' level of
attitude toward the content of the course will not be more
positive than without that opportunity.                          Rejection*

*Significant at or above the .05 alpha level. See Table 3.

TABLE 5.--Summary of Results: Post-hoc Comparison in Data.

Research Treatment   Posttest      A - B    A - C     B - C
A                    COGNITIVE      .03     3.17*      --
                     NEMA**        5.70*   -4.24*      --
B                    COGNITIVE      .03      --       3.14
                     NEMA**        5.70*     --      -9.94*
C                    COGNITIVE      --      3.17*     3.14
                     NEMA**         --     -4.24*    -9.94*

*Difference in mean scores significant at the post-hoc comparison critical difference of 3.16 or above.
**Low score indicates positive attitude toward media.

CHAPTER V

SUMMARY AND CONCLUSIONS

Summary

The study reported had two purposes: One purpose was to determine the effect of student course evaluation on attitudes toward instructional technology and its effect on cognitive performance in a graduate course in instructional technology. A second purpose of the study was to determine the effect of systematic two-way feedback on attitudes toward instructional technology and its effect on cognitive performance in a graduate course in instructional technology.
Three sections of Teacher Education 548: Audiovisual Media were offered during the summer of 1970 at Western Michigan University. Each of the sections was randomly assigned to one of three research treatments. Treatment A consisted of a replication of procedures for course evaluation utilizing student opinions and value judgments developed by Smith1 and detailed in Chapter II of the study. Treatment B consisted of application of the same procedures with modifications involving systematic two-way feedback. Treatment C was a control. Subjects in each treatment group were given, pre and post treatment, the New Educational Media Attitude (NEMA) inventory2 and an instructor-written cognitive test, A Test for Audiovisual Media.3 The experimental design used in the study was a Pretest-Posttest Control Group Design.

1Jay C. Smith, "The Design and Trial of a Course Evaluation System Utilizing Student Opinions and Value Judgments" (unpublished M.Ed. dissertation, University of Hawaii, 1969).

Four statistical hypotheses were generated and tested. Two of the hypotheses were designed to determine the effect of student course evaluation on cognitive performance in a graduate course in instructional technology and its effect on attitudes toward instructional technology. Two additional hypotheses were designed to determine the effect of systematic two-way feedback on cognitive performance in a graduate course in instructional technology and its effect on attitudes toward instructional technology. A one-way multivariate analysis of covariance procedure was used to test Hypotheses 1a and 1b for significance at an alpha level of .05 with appropriate degrees of freedom.
A Post-hoc Comparison in Data technique was used to test Hypotheses 2a and 2b.

2Curtis Paul Ramsey, A Research Project for the Development of a Measure to Assess Attitudes Regarding the Uses of New Educational Media, Title VII, Project Number 492, National Defense Education Act of 1958, Grant Number 740095 (Nashville, Tennessee: George Peabody College for Teachers, December, 1961).

3David W. Hessler, "A Test for Audiovisual Media" (Kalamazoo, Michigan: Western Michigan University, June, 1970). (Mimeographed.)

Conclusions

The analysis of the data supports the following conclusions:

1. When given an opportunity for systematic evaluation of a course of instruction, students' cognitive performance in that course is better, and their attitude toward the content of the course is more positive, than when such evaluation opportunity is not afforded.

2. When given the opportunity for systematic two-way feedback on a course of instruction, students' level of cognitive performance in that course is not materially affected, but their attitudes toward the content of that course are significantly more positive than when such feedback opportunity is not afforded.

3. Students' attitude toward the content of a course is more positive when given the opportunity for systematic two-way feedback on that course of instruction than when only given the opportunity for systematic evaluation of the course of instruction without the opportunity for systematic two-way feedback.

Discussion

In the study reported, the term evaluation was defined in terms of the purpose of evaluation:

Evaluation is seen as an instrument of reform . . . both an act and a result. The reason for evaluating any present activity or program is to improve it.4

4Robert C. Pace, "Evaluation Perspectives: '68," transcript of a speech delivered to the American Educational Research Association (AERA) presession (Chicago: February, 1968), p. 3. (Mimeographed.)
The study reported was an investigation of the effect of evaluation of a course by its student participants. The assumption was that such evaluation would contribute to course improvement. Educational research should result in guidelines for educational practices and procedures. An experiment by Gage, Runkel, and Chatterjee5 indicated that when teachers are given feedback on their performance (pupils' ratings of their actual and ideal teacher on twelve items), they changed in the direction of their pupils' ideal teacher, as measured by the pupils' subsequent descriptions of the teacher. This observation, combined with Ryans' basic assumption, detailed in the Theory section of Chapter I of the study, that "teacher behavior is observable,"6 and, further, with H. H. Remmers' statement that " . . . research has demonstrated that student evaluation is a useful, convenient, reliable, and valid means of self-supervision and self-improvement for the teacher,"7 gives credence to the value of the study undertaken.

5N. L. Gage, P. J. Runkel, and B. B. Chatterjee, "Equilibrium Theory and Behavior Change: An Experiment in Feedback from Pupils to Teachers" (Urbana: Bureau of Educational Research, University of Illinois, 1960). (Mimeographed.)

6David G. Ryans, Characteristics of Teachers (Washington, D.C.: American Council on Education, 1960), p. 19.

7H. H. Remmers, "Rating Methods in Research on Teaching," in Handbook of Research on Teaching, ed. by N. L. Gage, American Educational Research Association (Chicago: Rand McNally and Company, 1963), p. 367.

Remmers lists fourteen "major generalizations from these (student rating of teachers) researches"8 (see Appendix E). The study reported does not add a fifteenth major generalization to the list. What it does do, however, is contribute to an identification and definition of a promising area for additional research, as discussed below.

Recommendations

Two classifications of suggestions are given below.
One is suggestions for further research, based on the findings of the study and the insights gained during the course of the study. The other is implications of the study for instructional technology instruction.

Suggestions for Further Research

Treatment A.--Replication of procedures for course evaluation by students developed by Smith9 and detailed in Chapter II of the study.

Although Treatment A was a replication of an earlier study and the results of the first study were replicated, the first recommendation is that this study be replicated across instructors and different age levels. The first study by Smith was conducted with undergraduate students (n=126) enrolled in a first course in instructional technology. The present study was conducted with graduate students (n=82) enrolled in a first course in instructional technology.

8Ibid., p. 367.

9Smith, op. cit.

In both studies, subjects in the respective populations, although enrolled in different sections, were taught by the same instructors. Had the study not demonstrated significance between treatment and control groups, a likely confounding variable could have been identified as the instructor. The data now in hand are not, however, conclusive enough to eliminate generally the possibility of instructor influence. Additional research needs to be done employing the system for course evaluation by students with groups of students enrolled in a variety of courses taught by different instructors.

Another research need is to use the system of course evaluation by students with different age levels. The evidence now recorded is limited to subjects aged twenty to fifty-four, all having at least three years of college (see Appendix D, Demographic Questionnaire). Also, there has been no effort to determine the effect of the system on subjects by sex.
A final need for additional research regarding Treatment A is that there should be an empirical analysis of the relationship of each of the components of the system with the other components of the system: What happens when one component is left out? How does one component interact with another, or with a third?

Treatment B.--Application of the same procedures as in Treatment A with modifications involving systematic two-way feedback.

The first recommendation for additional research relative to Treatment B is that this study be replicated to determine if the investigated relationships are universal or specific to the group examined. The reported study indicates that systematic two-way feedback does not have a significant effect on cognitive performance in a course of instruction but does have a significant effect on positive attitude toward the content of the course. The findings relative to cognitive performance may be limited to the group tested. The study did demonstrate that there was a significant effect on cognitive performance when students are given the opportunity to evaluate systematically a course of instruction. Logic would seem to favor the contention that greater involvement of the students in the course through systematic two-way feedback would result in a significant effect on cognitive performance, as it did on attitudes toward the content of the course. Only additional research will answer this question.

A second area relative to Treatment B in need of additional research is the definition and specification of the "systematic" component of systematic two-way feedback. In the study reported, two-way feedback was conducted in a systematic manner at the end of every second class session. The selection of every second session was an arbitrary decision made by the researcher. The effect of two-way feedback on cognitive performance and attitude may be
There is also a need to determine the relative effect of different feedback tech— niques. In the study reported, the feedback was formal at fixed intervals (see Appendix D, Sample Course Evalua- tion Questionnaire). It may be that informal techniques would have a different effect. It may be, further, that a combination of formal and informal techniques would result in a different effect on cognitive performance and attitude than would either technique alone. Speculations such as above need to be generated into hypothesis form and tested. Research Design and Procedures.--An obvious limita- tion of the study reported, and of most educational research, is that the research was limited in both time and situation.\ The study was conducted over a ten-week period. A study should be designed that would provide data regarding the actual behavior of the subjects over time when functioning withintxfiuflfimeenvironments. This is especially true of the attitude component of the study reported. Over time with the develOpment of new technologies, the cognitive content of a course in instructional technology likely will be modified and, perhaps, totally changed. A positive attitude toward instructional technology, it is hoped, will remain constant. Longitudinal research in education is not common. Nonetheless, the attitude variable of the 71 research reported should be investigated by "follow-up" types of research designs. A concern throughout the course of the research has had to do with the precision of the instrument used to measure media attitudes. Even though developmental and testing data for the New Educational Media Attitude (NEMA) inventory (see Chapter III, Instrumentation) suggest that the NEMA is a suitable indicator of attitudes toward educational media, it is justifiable to speculate that respondents may have widely varying attitudes toward different aspects of educational media. 
An attitude measurement instrument which provides indications of attitudes toward various aspects of educational media might be of greater validity for the type of research reported. Paul Dawson, at the Teaching Research Division, Oregon State System of Higher Education, is currently testing an instrument, the Media Attitude Profile (MAP),10 which shows promise for that type of application. As more precise instruments--such as the MAP may become--are developed, the study reported should be replicated using those instruments.

10Paul Dawson, "Attitudes Toward Instructional Media and Technology: Refinement and Validation of the Media Attitude Profile," continuation proposal for research submitted to the U.S. Commissioner of Education for support through authorization of the Bureau of Research (Monmouth, Oregon: Teaching Research Division, Oregon State System of Higher Education, June 1, 1970). (Mimeographed.)

Implications for Instructional Technology Instruction

The reported study may have implications for the general area of teaching-learning. As additional research is done involving a broader cross-section of the general area of teaching-learning, it is likely that those implications will become more apparent. The study reported and the one preceding it were designed to determine the effect of systematic course evaluation by students on cognitive performance and attitudes toward the content of a course in instructional technology. The writer will, therefore, limit his discussion of the implications of the study to instructional technology instruction.

Educators in the area of instructional technology have for many years professed that they are "missionaries." The Commission on Instructional Technology report has indicated that the majority of the teaching profession are aware of instructional technology and the value of technology in instruction (see Chapter I, Need for the Study).
The report also states that the actual use of technology in instruction is minimal, and research in the area has indicated that there are several barriers that contribute to the minimal use of technology in education. Two of the identified barriers are attitudes of teachers and lack of adequate training.

In the study reported, systematic course evaluation by students and systematic two-way feedback on a course have been demonstrated to have a positive effect on attitudes toward the content of the course. Systematic course evaluation by students has been demonstrated to have a positive effect on cognitive performance in a course.

BIBLIOGRAPHY1

Acquino, Charles C. "Teacher Attitudes Toward Audiovisual as They Are Influenced by Selected Factors Within Teaching Environments." Audiovisual Communication Review, V18, No. 2 (Summer, 1970), p. 187.

Agony and Promise. Edited by G. Kerry Smith. Current Issues in Higher Education, 1969. Washington, D.C.: American Association for Higher Education, 1969.

Allport, G. W. "Attitudes." In Handbook of Social Psychology. Edited by C. M. Murchison. Worcester, Mass.: Clark University Press, 1935.

Ballou, Stephen V. A Model for Theses and Research Papers. Boston: Houghton Mifflin Company, 1970.

Barnlund, Dean C. Interpersonal Communication: Survey and Studies. New York: Houghton Mifflin Company, 1968.

Barr, A. S. "Second Report on the Committee on Criteria of Teacher Effectiveness." Journal of Educational Research, V46 (1953), 641-658.

Barson, John. A Procedural and Cost Analysis Study of Media in Instructional Systems Development. Washington, D.C.: U.S. Department of Health, Education, and Welfare, Federal Contract No. OE-3-16-030, September, 1965.

Barson, John. Instructional Systems Development: A Demonstration and Evaluation Project. Washington, D.C.: U.S. Department of Health, Education, and Welfare, Federal Contract No. OE-5-16-025, June, 1967.

Barson, John, and Heinich, R. "The Systems Approach to Instruction."

1A Datrix Information Search of dissertation abstracts related to the study is contained in Appendix G.
Department of Audiovisual Instruction, 1966 Convention, San Diego. Boulder: National Tape Repository, University of Colorado. (Audiotape.)

Battram, John V. "Teacher Perceptions Relating to the Newer Communications Media: A Study of Teacher Perceptions Related to the Use of the Newer Communications Media and the Nature and Quality of Such Use." Unpublished Ph.D. dissertation, Michigan State University, 1963.

Benathy, Bela H. Instructional Systems. San Francisco, Calif.: Fearon Publishers, 1968.

Berelson, Bernard, and Steiner, Gary A. Human Behavior: An Inventory of Scientific Findings. New York: Harcourt, Brace and World, Inc., 1964.

Bloom, B. S.; Englehart, E. J.; Furst, E. J.; Hill, W. H.; and Krathwohl, D. R., eds. Taxonomy of Educational Objectives: Handbook I: Cognitive Domain. New York: McKay Publishing Co., 1956.

Brown, James W.; Lewis, Richard B.; and Harcleroad, Fred F. AV Instruction, Materials and Methods. 3rd ed. New York: McGraw-Hill Book Company, 1969.

Buckley, Walter, ed. Modern Systems Research for the Behavioral Scientist: A Sourcebook. Chicago: Aldine Publishing Co., 1968.

Campbell, Donald T., and Stanley, Julian C. "Experimental and Quasi-Experimental Designs for Research." Reprinted from Handbook of Research on Teaching. Edited by N. L. Gage, of the American Educational Research Association. Chicago: Rand McNally and Company, 1966.

Carpenter, C. R., and Greenhill, L. P. "Providing the Conditions for Learning: The 'New' Media." In Higher Education. Edited by Samuel Baskin. New York: McGraw-Hill Co., 1965.

Chambliss, Edward J. "Attitudes of Teachers Toward Adopting Innovations and the Relationships of These Attitudes with Other Variables." Unpublished Ph.D. dissertation, Texas Technical College, 1968.

Childs, John W. "A Study of the Belief Systems of Administrators and Teachers in Innovative and Non-Innovative School Districts."
Unpublished Ph.D. dissertation, Michigan State University, 1965.

Churchman, C. West. The Systems Approach. New York: Dell Publishing Co., Inc., 1968.

Cohen, A. M., and Braver, F. B. Measuring Faculty Performance. American Association of Junior Colleges, 1969.

Cohen, Arthur R. Attitude Change and Social Influence. New York: Basic Books, Inc., 1964.

Commission on Instructional Technology, Sterling M. McMurrin (Chairman). To Improve Learning. Washington, D.C.: Government Printing Office, March, 1970.

Crosby, Muriel. "Who Changes the Curriculum and How?" Phi Delta Kappan, LI (March, 1970), 385-89.

Dale, Edgar. Audio-Visual Methods in Teaching. New York: The Dryden Press, 1954.

Dawson, Paul. "Attitudes Toward Instructional Media and Technology: Refinement and Validation of the Media Attitude Profile." Continuation proposal for research submitted to the U.S. Commissioner of Education for support through authorization of the Bureau of Research. Monmouth, Oregon: Teaching Research Division, Oregon State System of Higher Education, June 1, 1970. (Mimeographed.)

Dempsey, Richard A. "An Analysis of Teachers' Expressed Judgments of Barriers to Curriculum Change in Relation to the Factor of Individual Readiness to Change." Unpublished Ph.D. dissertation, Michigan State University, 1963.

Deshpande, Anant S.; Webb, Sam C.; and Marks, Edmond. "Student Perceptions of Engineering Instructor Behaviors and Their Relationships to the Evaluation of Instructors and Courses." American Educational Research Journal, 7 (1970), 289-306.

Doll, Ronald C. Curriculum Improvement: Decision Making and Process. Boston: Allyn and Bacon, Inc., 1964.

Doll, Ronald C. "The Multiple Forces Affecting Curriculum Change." Phi Delta Kappan, LI (March, 1970), 382-84.

Dorba, D. D. "The Nature of Attitude." Journal of Social Psychology, 1933, 4.

Drucker, A. J., and Remmers, H. H. "Do Alumni and Students Differ on Their Attitudes Towards Instruction."
Journal of Educational Psychology, V42 (1951), 129-143.

"Editorial: Will Campus Restlessness Lead to Improved Education?" Phi Delta Kappan, LII, No. 2 (October, 1970). Acting Editor, Donald W. Robinson.

Edwards, A. L. Techniques of Attitude Scale Construction. New York: Appleton-Century-Crofts, 1957.

Eichholz, Gerhard, and Rogers, Everett M. "Resistance to the Adoption of Audio-Visual Aids by Elementary School Teachers: Contrasts and Similarities to Agricultural Innovation." In Innovation in Education. Edited by Matthew B. Miles. New York: Teachers College Press, 1964.

Elliott, D. N. "Characteristics and Relationships of Various Criteria of Teaching." Unpublished Ph.D. dissertation, Purdue University, 1949.

Ely, D. P. "The Changing Role of the Audio-Visual Process in Education: A Definition and a Glossary of Related Terms." Audio-Visual Communication Review, V6, No. 11, 1963.

Engler, David. "Instructional Technology and the Curriculum." Phi Delta Kappan, LI (March, 1970), 379-381.

Farquhar, William W. "Directions for Thesis Preparation." East Lansing: Michigan State University School of Education, 1967. (Mimeographed.)

Finn, James D. "The Tradition in the Iron Mask." Audiovisual Instruction, V6, No. 6 (June, 1961).

Flanders, N. A. "Some Relationships Between Teacher Influence, Pupil Attitudes, and Achievement." Contemporary Research on Teacher Effectiveness. Edited by B. J. Biddle and W. J. Ellena. New York: Holt, Rinehart and Winston, 1964.

Foth, Henry D. "Effective Teacher Evaluation." Prepublication draft. East Lansing: Department of Crop and Soil Sciences, Michigan State University, January, 1971. (Mimeographed.)

Frymier, Jack R. "Why Students Rebel." Educational Leadership. Journal of the Association for Supervision and Curriculum Development (NEA). Washington, D.C.: V27, No. 4 (January, 1970), 346.

Fulton, W. R. Self-Evaluative Checklist and Criteria for Evaluating Educational Media Programs.
Final Report, United States Office of Education, Department of Health, Education, and Welfare, under the provisions of Title VII, Public Law 85-864. Norman, Oklahoma: University of Oklahoma. (Mimeographed.)

Gage, N. L. "Paradigms for Research on Teaching." In Handbook of Research on Teaching. Edited by N. L. Gage. Chicago, Illinois: Rand McNally and Company, 1963, pp. 94-141.

Gage, N. L.; Runkel, P. J.; and Chatterjee, B. B. "Equilibrium Theory and Behavior Change: An Experiment in Feedback from Pupils to Teachers." Urbana: Bureau of Educational Research, University of Illinois, 1960. (Mimeographed.)

De Grazia, Alfred, and Sohn, David A., eds. Revolution in Teaching: New Theory, Technology and Curricula. New York: Bantam Books, Inc., 1964.

Guba, Egon G. The Basis for Educational Improvement. Address delivered to the National Seminar on Innovation at the Kettering Foundation, July 17, 1967. Bloomington, Ind.: National Institute for the Study of Educational Change, 1967.

Guba, Egon G., and Snyder, Clinton A. Research and Evaluation on MPATI (Midwest Program on Airborne Television Instruction, Inc.) Telecasts. Final Report, R. F. Project 1367, Research Foundation. Columbus, Ohio: Ohio State University, April, 1964.

Hall, A. D., and Fagen, R. E. "Definition of System." General Systems. Yearbook of the Society for General Systems Research, Vol. 1 (1956).

Hartley, Harry J. Educational Planning, Programming, Budgeting: A Systems Approach. Englewood Cliffs: Prentice-Hall, Inc., 1968.

Hays, William L. Statistics. New York: Holt, Rinehart and Winston, 1963.

Hessler, David W. "A Test for Audiovisual Media." Kalamazoo, Michigan: Western Michigan University, June, 1970. (Mimeographed.)

Hudspeth, DeLayne R. "A Study of Belief Systems and Acceptance of New Educational Media With Users and Non-Users of Audiovisual Graphics." Unpublished Ph.D. dissertation, Michigan State University, 1966.

Isaacson, R. I.; McKeachie, W. J.; and Milholland, J. E.
"Correlations of Teacher Personality Variables and Student Ratings." Journal of Educational Psychology, V54 (1963), 110-117.

Isaacson, R. I., et al. "Dimensions of Student Evaluation of Teaching." Journal of Educational Psychology.

Katz, Daniel. "The Functional Approach to the Study of Attitudes." Current Perspectives in Social Psychology. 2nd ed. Edited by Edwin P. Hollander and Raymond G. Hunt. New York: Oxford University Press, Inc., 1967.

Kaufman, Roger A. "A Systems Approach to Education: Derivation and Definition." Audio Visual Communication Review. Washington, D.C.: Department of Audiovisual Instruction, V16, No. 4, 1968.

Kelley, Gaylen B. "An Analysis of Teachers' Attitudes Toward the Use of Audio-Visual Materials." Unpublished Ed.D. dissertation, Boston University, 1959.

Kerlinger, Fred N. Foundations of Behavioral Research. New York: Holt, Rinehart and Winston, Inc., 1964.

Knowlton, James Q., and Hawes, Ernest. "Attitude: Helpful Predictor of Audiovisual Usage?" Audiovisual Communication Review, X (May-June, 1964).

Kornhauser, A. "Constructing Questionnaires and Interview Schedules." In Research Methods in Social Relations. Edited by M. Jahoda, M. Deutsch, and S. W. Cook. New York: Dryden Press, 1951.

Krathwohl, David R. "Stating Objectives Appropriately for Program, for Curriculum, and for Instructional Materials Development." Journal of Teacher Education, XVI (March, 1965), 83-89.

Krutch, Joseph Wood. Human Nature and the Human Condition. New York: Random House Publishing Co., 1959.

Lange, Phil C. "Evaluation: On the Process of Evaluation." Paper prepared for the Experienced Teacher Fellowship Program, University of Hawaii, May, 1968. (Mimeographed.)

Lipset, Seymour M., and Ladd, Everett Carl, Jr. "Politics and Polarities: What Professors Think." Psychology Today, V4, No. 6 (November, 1970).

McIntyre, Kenneth, and Brown, Robert.
A Study to Determine Specific Sources of Resistance to the Use of AV Materials by College and University Teachers and the Development of Procedures for Overcoming the Barriers to Optimum Use. NDEA, Title VIIA, Project No. 332, Grant No. 731052. Chapel Hill, North Carolina: The University of North Carolina, 1963.

McKeachie, W. J. "Student Ratings of Faculty." American Association of University Professors Bulletin, 55 (1969), 439-444.

McKeachie, W. J., and Solomon, D. "Student Ratings of Instructors: A Validity Study." Journal of Educational Research, V51 (1958), 379-383.

Maccoby, E. E., and Maccoby, N. "The Interview: A Tool of Social Science." In Handbook of Social Psychology. Edited by G. Lindzey. Cambridge: Addison-Wesley, 1954.

Margoles, Richard A. "A Study of Media Use, Attitudes, and Barriers as Measurements for Evaluating the Influence of Extra-Media Support Service on Faculty Teaching in Large Classrooms." Unpublished Ph.D. dissertation, Michigan State University, 1969.

Medley, Donald M., and Mitzel, Harold E. "Measuring Classroom Behavior by Systematic Observation." In Handbook of Research on Teaching. Edited by N. L. Gage. Chicago, Illinois: Rand McNally and Company, 1963, 247-328.

Miles, Matthew B. "Educational Innovation: The Nature of the Problem." In Innovation in Education. Edited by M. B. Miles. New York: Teachers College Press, 1964.

Mitzel, Harold E. "A Behavioral Approach to the Assessment of Teacher Effectiveness." New York: Office of Research and Evaluation, Division of Teacher Education, College of the City of New York, 1957. (Mimeographed.)

Najam, Edward W., Jr. "The Student Voice: A New Force." Educational Leadership. Journal of the Association for Supervision and Curriculum Development. Washington, D.C.: V26, No. 8 (May, 1969), 749.

Pace, C. Robert. "Evaluation Perspectives: '68." Transcript of speech delivered to the American Educational Research Association (AERA) presession. Chicago: February, 1968. (Mimeographed.)

Pfeiffer, John.
New Look at Education: Systems Analysis in Our Schools and Colleges. New York: Odyssey Press, 1968.

Popham, W. James. Educational Statistics: Use and Interpretation. New York: Harper and Row, Publishers, 1967.

Ramsey, Curtis Paul. A Research Project for the Development of a Measure to Assess Attitudes Regarding the Uses of New Educational Media. Title VII, Project No. 492, National Defense Education Act of 1958, Grant No. 740095. Nashville, Tennessee: George Peabody College for Teachers, December, 1961.

Random House Dictionary of the English Language. Unabridged edition. New York: Random House, 1968.

Rawnsley, John B. "An Analysis of the Attitudes of Teachers Enrolled in an Audio-Visual Television Course." Unpublished M.A. thesis, University of Wisconsin, 1960.

Remmers, H. H. "The Measurement of Teaching-Personality and Its Relation to the Learning Process." Education, V51 (1930), 27-35.

Remmers, H. H. "Rating Methods in Research on Teaching." In Handbook of Research on Teaching. Edited by N. L. Gage. Chicago, Illinois: Rand McNally and Company, 1963, 329-378.

Rossi, Peter H., and Biddle, Bruce J. The New Media and Education. National Opinion Research Center. Chicago, Illinois: Aldine Publishing Co., 1966.

Ryans, David G. Characteristics of Teachers. Washington, D.C.: American Council on Education, 1960.

Ryans, David G. "A Theory of Instruction With Special Reference to the Teacher: An Information System Approach." Journal of Experimental Education, V32 (1963), 191-223.

Schuller, Charles F. "Systems Approaches in Media and Their Application to Individualized Instruction at the University Level." Michigan State University, 1967. Presented in part at Bucknell University Symposium, February, 1968. (Mimeographed.)

Sharp, George. Curriculum Development as Re-Education of the Teacher. New York: Teachers College Press, 1951.

Slobin, D. Y., and Nichols, D. G. "Student Rating of Teaching." Improving College and University Teaching, 17 (1969), 244-248.
Smith, Duane R. "A Study of Elementary Teachers' Attitudes Toward, Beliefs About, and Use of Newer Instructional Material." Unpublished Ed.D. dissertation, University of Pittsburgh, 1966.

Smith, Jay C. "The Design and Trial of a Course Evaluation System Utilizing Student Opinions and Value Judgments." Unpublished M.Ed. thesis, University of Hawaii, Honolulu, Hawaii, 1969.

Smith, Karl U., and Smith, Margaret Foltz. Cybernetic Principles of Learning and Educational Design. New York: Holt, Rinehart and Winston, Inc., 1966.

STAT Series Description No. 18. "Analysis of Covariance and Analysis of Variance with Unequal Frequencies Permitted in the Cells." East Lansing: Agricultural Experiment Station, Michigan State University, 1969.

Stress and Campus Response. Edited by G. Kerry Smith. 1968 Current Issues in Higher Education. Washington, D.C.: American Association for Higher Education (NEA), 1968.

Stewart, Donald K. "A Learning Systems Concept as Applied to Courses in Education and Training." In Educational Media: Theory Into Practice. Edited by R. V. Wiman and W. C. Meierhenry. Columbus, Ohio: Charles E. Merrill Publishing Co., 1969.

"Student Participation: Toward Maturity." Theme articles, Educational Leadership. Journal of the Association for Supervision and Curriculum Development. Washington, D.C.: 27, No. 5 (February, 1970), 439-465.

Stowe, Richard A. "Putting Salt on the Tiger's Tail or How to Work with Teachers." Audiovisual Instruction, 13 (April, 1968), 335-337.

Tannenbaum, Percy. "Initial Attitude Toward Source and Concept as Factors in Attitude Change Through Communication." Public Opinion Quarterly, 20 (1956), 413-425.

"TEED 548 Course Description." Graduate Catalogue. Western Michigan University, Kalamazoo, Michigan, 1970.

Tobias, Sigmund. "Lack of Knowledge and Fear of Automation as Factors in Teachers' Attitude Toward Programmed Instruction and Other Media." Audio Visual Communication Review, V14 (1966), 99-109.

Tobias, Sigmund.
"Dimensions of Teachers' Attitudes Toward Instructional Media." American Educational Research Journal, V (January, 1968), 91-98.

Tobias, Sigmund. "Effect of Attitudes to Programmed Instruction and Other Media on Achievement." Audio Visual Communication Review, V18, No. 3 (Fall, 1969), 299.

Torkelson, G. M. "Competencies Needed by Teachers in the Use of Newer Media and Various Approaches to Achieving Them." Seattle, Washington: University of Washington. (Mimeographed.)

Trumbo, Donald A. "An Analysis of Attitudes Toward Change Among the Employees of an Insurance Company." Unpublished Ph.D. dissertation, Michigan State University, 1958.

Turner, Richard L. "How Do Student Characteristics Affect Their Evaluations of Instructors?" Bulletin of the School of Education. Bloomington, Ind.: Indiana University, 45 (1969), 51-97.

Turner, Richard L. "Good Teaching and Its Contexts." Phi Delta Kappan, LII, No. 3 (November, 1970), 155.

Tyler, Ralph W. "Translating Youth Needs Into the Needs of Youth, Part I." Fifty-Second Yearbook of the National Society for the Study of Education. Chicago: University of Chicago Press, 1953.

Urick, R., and Frymier, J. "Personalities, Teachers and Curriculum Change." Educational Leadership, 21 (1963).

VanDalen, Deobold B. Understanding Educational Research: An Introduction. Rev. ed. New York: McGraw-Hill, 1966.

Wallen, Norman E., and Travers, Robert M. W. "Analysis and Investigation of Teaching Methods." In Handbook of Research on Teaching. Edited by N. L. Gage. Chicago, Illinois: Rand McNally and Company, 1963, 448-505.

Webster's Unabridged Dictionary. Springfield, Mass.: G. and C. Merriam Company, 1968.

Witt, Paul W. F. "Educational Technology: The Education of Teachers and the Development of Instructional Materials Specialists." In Technology and the Curriculum. Edited by Paul W. F. Witt. New York: Teachers College Press, 1968.

Wittich, Walter A. "Final Report on the Experienced Teacher Fellowship Program."
Report, University of Hawaii, September 1, 1966 to August 1, 1967. (Mimeographed.)

Wittich, Walter A., and Schuller, Charles F. Audiovisual Materials: Their Nature and Use. 4th ed. New York: Harper and Row, Publishers, 1967.

Zimbardo, Phillip, and Ebbesen, Ebbe B. Influencing Attitudes and Changing Behavior. Los Angeles, Calif.: Addison-Wesley Publishing Co., 1969.

APPENDICES

APPENDIX A

NEWER EDUCATIONAL MEDIA ATTITUDE INVENTORY

NEWER EDUCATIONAL MEDIA

During the past twenty years or so many new teaching aids have been developed. Some of these are sufficiently elaborate to change or even to replace temporarily the classroom communication processes which were formerly pretty much limited to students and teachers. Radio, television, motion pictures, slides and filmstrips, phonograph and tape recorders, certain types of teaching machines, and programmed learning methods--all are examples of what might be termed the "Newer Educational Media" (NEM).

In American education today, there is some controversy concerning these NEM. The following statements represent various points of view on this question. Please indicate the extent of your agreement or disagreement with each statement. Please do not make efforts to be consistent or to select the "right answer"--there are none. Simply enter the proper number in the space before each sentence according to the following code:

1. Agree strongly
2. Agree moderately
3. Agree slightly
4. Disagree slightly
5. Disagree moderately
6. Disagree strongly

1. The widespread use of the NEM will revolutionize the process of instruction as we know it now.

2. The possible uses of the NEM are limited only by the imagination of the person directing the usage.

3. The wide resources of the NEM stimulate the creative student.

R 4. There are no educational frontiers in the NEM--just new gadgets.

R 5. Most students see the NEM mainly as entertainment, rather than as education.
*Items designated "R" were designed as "negative" items and are reverse scored in determining the subject's attitude.

NEWER EDUCATIONAL MEDIA

Please indicate the extent of your agreement or disagreement with each statement.

1. Agree strongly
2. Agree moderately
3. Agree slightly
4. Disagree slightly
5. Disagree moderately
6. Disagree strongly

R 6. Most teachers lose the gratification of personal accomplishment when the child is taught by machine.

7. Use of the NEM constitutes a major advance in providing for individual differences in the learning needs of students.

8. Much wider usage of the NEM is needed.

R 9. The vicariousness of learning by NEM aids is not conducive to the most effective learning.

10. If surplus funds exist which could be spent only for supplementary books or for more NEM equipment, the latter should be chosen.

R 11. Students can learn the basic value of a good education only when taught by conventional methods--not by the NEM.

R 12. The problems of getting materials and equipment when you need it, darkening rooms, setting up the equipment, and otherwise disrupting classes tend to counteract the value of most NEM.

R 13. The "authoritative" presentations of most of the NEM tend to produce an uncritical acceptance on the part of most students.

R 14. The passive quality of learning by NEM is not conducive to the most effective learning.

R 15. The proper student attitudes for effective learning are not developed as well by the NEM as by conventional methods of teaching.

16. Only through the NEM can vicarious learning experiences be provided in the classroom.

NEWER EDUCATIONAL MEDIA

Please indicate the extent of your agreement or disagreement with each statement.

1. Agree strongly
2. Agree moderately
3. Agree slightly
4. Disagree slightly
5. Disagree moderately
6. Disagree strongly

R 17. The expense of most of the NEM is out of all proportion to their educational value.
R 18. The NEM give little opportunity to provide for the individual differences of students.

R 19. The personal relationship between teacher and student is essential in most learning situations.

R 20. NEM materials are so specific as to have little adaptability to different teaching requirements or situations.

R 21. With increased usage of the NEM, the teaching role may be down-graded to clerical work, proctoring, grading, and other simple administrative tasks.

22. The development of NEM centers in every school unit should be encouraged and facilitated.

R 23. The NEM do not suitably provide for the special needs of either slow learners or brighter students.

APPENDIX B

A TEST FOR AUDIOVISUAL MEDIA

A TEST FOR AUDIOVISUAL MEDIA*

PURPOSE: This test is given to students in TEED 548 for the purpose of determining the overall achievement level of the students enrolled. We are interested in the total group performance, not that of a particular individual. Your individual evaluation is in no way affected by this test.

DIRECTIONS: Do not write on the test. Use the WMU Testing Services form provided and mark the most correct responses to the questions and statements herein. Use a number two or number one lead pencil. Make only one mark per question. If you change an answer, erase the prior response completely. Please note that the numbered sequence runs horizontally across the answer sheet left to right. Thank you.

1. Communication and learning really mean the same thing.
1. Yes
+2. No

2. Response and interaction are usually not part of the definition of communication.
1. True
+2. False

3. Communication can be defined
1. structurally.
2. in terms of intent.
3. functionally.
+4. all of the above.
5. structurally and in terms of intent only.

4. Nearly all descriptions of a communication situation include the following basic ingredients:
1. medium, technology, stimulus, receptor
2. transmitter, medium, source, receiver
+3. source, message, channel, receiver
4.
feedback, receiver, source, message
5. channel, transmitter, medium, receiver

5. Theories and models of communication assist the teacher in applying audiovisual materials in teaching and learning situations. Several useful models were developed by ________.
1. Heider, Abedor, Smith and Witt
2. Cohen, Schuller, Lemler, and Townsend
+3. Berlo, Shannon-Weaver, Hovland and Schramm
4. Smith, Berlo, Witt, and Lemler

*Written by David W. Hessler, Western Mich. Uni., 1970.
+Indicates correct response.

6. Meaning of any communication is
1. improved with audiovisual materials
2. improved with the specific media selected
3. the message itself
+4. within the receiver

7. Diffusion is a key process ________.
+1. to get innovations adopted
2. involved in writing teaching objectives
3. in using audiovisual materials effectively
4. to help teachers communicate

8. Without change, there can be no learning.
+1. True
2. False

9. Audiovisual materials include ________.
1. all equipment used in teaching
2. all media
+3. films, tapes, maps, filmstrips, models, slides
4. audio and visual materials only

10. Two major organizations developed standards for joint media programs for public schools. The organizations were:
1. NAVA and AASL
2. MBA and NBA
+3. DAVI and ALA
4. MAVA and MASL

11. Within the public schools, all teaching/learning resources are brought together in ________.
1. the library
2. the instructional materials center
3. the learning center
+4. all of the above
5. none of the above

12. Robert Gagné has proposed some ________ which should greatly assist the teacher in deciding upon specific instructional approaches.
1. rules for using audiovisual materials
2. criteria for writing learner objectives
3. attributes of mediated instruction
+4. conditions for learning

13. Audiovisual communication includes:
1. verbal, visual, audiovisual, and non-verbal communication
2. linguistics, pictics, tectonics
3. syntactics, semantics, pragmatics
4. none of the above
+5. all of the above
14. Which of the following have the broadest communication value?
+1. signs
2. symbols
3. signals
4. all these stimuli have equal value

15. Robert Mager is best known for his writings about ________.
1. effective ways to use audiovisual materials
2. audiovisual research
+3. behavioral objectives for the learner
4. all of the above
5. none of the above

16. With ________ you can dupe slides; make filmstrips from slides; make slides from filmstrips; and create effective title slides.
1. a Leitz rotor and easel rig
+2. an Illumitran and Repronar
3. an opaque projector and overhead projector
4. a contact printer
5. all of the above
6. none of the above

17. A major source of film evaluations is the ________.
1. Library of Congress film index
2. Education Index
3. Audiovisual Communication Review
+4. Educational Film Library Association

18. The National Information Center for Educational Media is a major source of ________.
1. ratings of new AV resources
2. audiovisual research findings for teachers
+3. audiovisual material indexes
4. audiovisual equipment evaluations and ratings
5. all of the above

19. ERIC is important for teachers interested in ________.
1. reports on new audiovisual equipment
+2. media research
3. audiovisual material evaluations
4. simple production techniques for AV materials
5. all of the above

20. Color coding of cards in the card catalog is one way to ________.
1. evaluate and rate audiovisual resources
+2. differentiate type of media
3. correlate print and non-print resources
4. classify the subject area of the material

21. Within most school systems, teachers acquire films
+1. from a film rental library
2. from a regional center
3. from the particular building only
4. from the producer on a rental basis
5. none of the above

22. Audiovisual materials should be evaluated
1. from reviews prior to preview
+2. before, during, and after use
3. immediately after use
4. by the students

23. Goals and objectives are not the same thing for the teacher.
+1. True
2.
False

24. Synchronization of sound and slides is possible but very expensive.
1. True
+2. False

25. Synchronization of picture and sound on a 16mm motion picture projector is accomplished by spacing of the upper loop and lack of slack around the sound drum.
1. True
+2. False

26. The focal length of any projection lens determines
1. the sharpness of the image
+2. the size of the screen image
3. the brightness of the screen image
4. all of the above

27. The dry mount (heat) press uses
+1. MT-5
2. Diazo
3. Chartpak
4. none of the above

28. In tape recording, a track is
1. either the dull or glossy side of the tape
2. dull side of the tape
3. glossy side of the tape
+4. none of the above

29. Depth of field in photography is controlled by the shutter speed.
1. True
+2. False

30. The Kodak Visualmaker is used to make ________.
1. overhead transparencies
+2. closeups with an Instamatic camera
3. low cost visuals for the opaque projector

31. Opaque projectors and overhead projectors provide nearly equal image brightness at the same projection distance with lamps of equal brightness.
1. True
+2. False

32. EVR is a new low cost video camera for school use.
1. True
+2. False

33. Super 8mm is a larger image (frame) format.
+1. True
2. False

34. Closeups with a camera can be made with ________.
1. bellows
2. CU lenses
3. extension tubes
+4. all of the above

35. When projecting slides, the user should place them right side up, but flopped (backwards).
1. True
+2. False

36. Models and mockups are not the same thing.
+1. True
2. False

37. Using the microphone is not the best way to record material from TV or radio on audio tape.
+1. True
2. False

38. Half inch video tape systems are not suitable for teacher or student programs.
1. True
+2. False

39. Cost/effectiveness is no longer a critical consideration for audiovisual media.
1. True
+2. False

40. Initial use of audiovisual materials will save the teacher time and effort.
1. True
+2. False
41. Generalizations about the design of audiovisual materials are too abstract to be useful for the classroom teacher today.
1. True
+2. False

42. Which of the following computer languages would be most useful for teachers and students to learn?
1. FORTRAN
+2. APL
3. COBOL

43. The principal advantage of programmed instruction is that it frees the teacher for more effective teaching.
1. True
+2. False

44. Visual literacy differs from perception studies in that it allows the teacher to observe individual creation of visuals and visual sequence.
+1. True
2. False

45. Instructional development is a process involving only the teacher, his students, and his objectives.
1. True
+2. False

APPENDIX C

SAMPLE COURSE HAND-OUTS

Sample Course Hand-Out*

Teacher Education 548: Audiovisual Media
Summer 1970
Mr. Hessler

General Requirements for the Course:

There will be a mid-term and a short quiz based upon the student objectives which will be handed out in class for the units of study in the course.

Students will be expected to produce simple audiovisual materials both individually and as a small team. There will be some possibility for choices among the various production activities, but all the students will do a few of the production projects. These production projects will include: dry mounting; lamination; bulletin board design in reduced scale; overhead projectuals; handmade filmstrip (as a group); and others to be announced.

Students will be responsible for the assigned readings in the basic text, AV Instruction: Media and Methods; all handouts; and a limited amount of reading from materials placed on reserve in the Educational Resources Center at the front desk.

Students who have not been through the Self-Instructional Equipment Laboratory will be expected to schedule themselves through the different programs for the basic pieces of audiovisual equipment (operation), e.g.
16mm projector; combination filmstrip and slide projector; tape recorder; overhead projector; opaque projector; and other short programs to be announced. All students will have some time to spend with individualized instruction of this type. The lab is located on the third floor of Sangren Hall on the left side of the short hallway leading to the photographic darkrooms and the graphics room. This short hallway is located behind the wood and glass door on the right side of the main corridor which runs into the main entrance of the ERC Reading Room.

During the Summer Term, all of the 548 classes will be a part of a study concerned with outside course evaluation. Students will be asked to fill in a number of forms which will in no way affect grades or individual evaluation.

There will be some other short assignments related to in-class activities.

Notes:

*Written by David W. Hessler, Western Mich. Uni., 1970.

Sample Course Hand-Out

Teed 548 Audiovisual Media
Western Michigan University

Unit I Communication

General Student Goals

1. Become familiar with several different communication models and the names of the individuals associated with the models discussed in class and those on the handouts.

2. Select a particular model of communication which helps you organize your thinking about the functions of the communication process.

3. Be able to identify some of the components or elements of these models which are common to all of them.

4. Recall some of the more important variables associated with the basic communication model elements and relate the constraints they place on the use or the consideration to use audiovisual media.

5. Use this conceptual framework when planning to select, use, or evaluate audiovisual media (materials or equipment).

6. Learn to apply the Abedor (with minor modifications by Hessler) model in attempting to solve instructional problems which necessitate the design and production of audiovisual media in some form.

7.
Be able to tell others how to use the two models.

Student Objectives

1. Identify the names of individuals responsible for some common communication models discussed in class and provided on handouts and separate these names from a list which would include other unrelated names.

2. Select a single communication model from those discussed or given as a handout and be able to reproduce the model without consulting notes or other aids. The reproduction should include the pattern and the labels properly positioned.

3. From the model reproduced in #2 above, be able to list several variables associated with each of the major components (or elements) of the communication model.

4. From the model selected in #2 above, be able to discuss in written form how each of the elements (variables) in the model might affect your decision to use or not to use a particular type of audiovisual media.

5. The concept of noise in communication takes on particular utility when planning or evaluating the use of a single medium which uses audio, visual (video), or audiovisual channels of communication. Given a detailed description of a situation in which audiovisual media is used, be able to recognize all of the examples of "noise" in the channel(s) and suggest at least one way to correct or eliminate the noise in each example identified.

6. Without aids, be able to reproduce the entire Abedor (with minor modifications by Hessler) Model. Select a message design problem of your own and explain how this model with its various functions and steps leads you to a solution. Include all of the functions in your written explanation.

7. In your own words, be able to define the type of model used in this unit and explain its utility to the individual using audiovisual media.

8.
On a written examination, differentiate the terms audiovisual media; instructional media; audiovisual materials; hardware; software; print media; non-print media with regard to their scope, duplication of meaning, and differences in meaning. Cite examples of items which might be included within the definition of each term.

9. From the code dimensions suggested by Krampen, list four code divisions which suggest the channels available in audiovisual communication. List two examples (or be able to identify two) of audiovisual materials for each of the channels within the code divisions you were able to identify.

10. Prepare a written explanation of Dale's Cone of Experience and describe the most common interpretation as to what the model (the Cone) represents (do not be concerned with memorizing all the levels of the Cone, but concern yourself with the extremes of the top and base).

11. Given two types of communication stimuli (signs), explicate the difference between signals and symbols and identify the given signs as to whether they are signals or symbols.

12. Given the terms denotative, connotative, and referent, define each and explain their differences and their relationship to communication signals.

13. Semiotics provides one with a basis for talking about message systems. The Morris Semiotic (a class handout) illustrates ways to discuss the three domains of syntactics, semantics, and pragmatics. Be able to describe how signs are related to other things within each domain and cite examples of how each sign relationship is taken into account by the person using audiovisual materials in the classroom.

Notes
APPENDIX D

SAMPLE COURSE EVALUATION QUESTIONNAIRES

104

PLEASE DO NOT WRITE YOUR NAME ON THIS QUESTIONNAIRE

This questionnaire is the first of several that you will be asked to complete during this term. These questionnaires are designed to ascertain certain facts about students enrolled in this course. They in no way affect your grade in the course.

DIRECTIONS: Read each question thoroughly before answering. Please answer all questions by circling the letter of the alphabet next to the correct answer. There may be more than one correct answer to some questions. When filling in blanks please print. The essay questions may be written in longhand, but please write legibly. If for some personal reason you do not wish to answer any one of the questions, simply leave the question unanswered. Do not put your name on this questionnaire.

THANK YOU

1. SEX: A. Female  B. Male

2. BIRTHDATE: A. Month  B. Day  C. Year

3. CLASSIFICATION: A. Junior  B. Senior  C. Graduate-Masters  D. Graduate-Post Masters

4. Are you now teaching? A. Yes  B. No

5. Are you or do you plan to teach in Michigan? A. Yes  B. No  C. I think so  D. I don't think so

105

106

Please do not write your name on this page.

6. What is your approximate grade point average?

7. EDUCATIONAL DATA: Are you a high school graduate? A. Yes  A.1 What year  B. No  B.1 Highest grade

8. From what type of high school did you graduate? A. Michigan public  B. Michigan private  C. Other public  D. Other private

9. PARENTS' EDUCATION: What is the level of your parents' education?
9.1 FATHER: A. Grade School  B. High School  C. Vocational  D. College
9.2 MOTHER: A. Grade School  B. High School  C. Vocational  D. College

10. TEACHING EXPERIENCE: A. None  B. Yes  B.1 Number of years

11. Are you married? A.
Yes  B. No  C. Divorced

12. Do you have children? A. No  B. Yes  B.1 How many

13. Have you served in the armed forces? A. Yes  B. No

14. Are you on a scholarship or fellowship? A. Yes  B. No  C. Government loan

15. Have you ever had another Audiovisual Education course? A. No  B. Yes

107

Please do not write your name on this page.

16. If you are an undergraduate, do you plan to attend graduate school? A. Yes  A.1 Near future  A.2 Sometime in future  B. No  C. I don't know

17. Are you an Education major? A. Yes  B. No  B.1 What major

18. If you are an education major, what is your area of specialization? A. Elementary Education  B. Secondary Education  B.1 Subject area (art, English, etc.)  C. Educational Administration  D. School Librarian

ESSAY QUESTIONS: (Please use back of page if necessary)

19. Why did you enroll in this course? (one paragraph)

20. What do you think should be the objectives of this course? (Please list with most important being first)

THANK YOU

108

*SAMPLE TEED 548 COURSE EVALUATION QUESTIONNAIRE

PLEASE DO NOT WRITE YOUR NAME ON THIS QUESTIONNAIRE

DIRECTIONS: This questionnaire in no way affects your grade in the course. Read each question thoroughly before answering. Based on your experiences in TEED 548 (large group presentation, small group activities, and/or individual study), mark one of the spaces that most nearly represents your feelings. If a statement accurately describes your feelings, mark the middle space (number 2, 5, 8) to the left of the statement. If you feel that the most accurate statement is below what is described, mark the lower numbered space; if above, mark the higher numbered space. In any case mark only one space.

1 2 3  I do not feel that I can perform the above stated activity.
4 5 6  I feel that I can perform the above stated activity.
7 8 9  I believe that I can not only perform the above stated activity but can do so with expertise.
Identify several communication models and be able to discuss the primary functions of those models presented in class as they relate to the teaching-learning process.

Demonstrate ability to operate and describe the operational principles of audiovisual equipment (hardware) made available in the laboratory and classroom.

Produce simple audiovisual materials and be able to describe the process and principles involved.

Relate the potential capabilities of audiovisual (mediated) instruction within the framework of a communication model discussed in class.

Develop effective procedures to use various types of audiovisual materials which take the total learning environment of the classroom into account.

*Written by David W. Hessler and Jay C. Smith, Western Michigan University, Summer 1970.

109

1 2 3  I do not feel that I can perform the above stated activity.
4 5 6  I feel that I can perform the above stated activity.
7 8 9  I believe that I can not only perform the above stated activity but can do so with expertise.

Develop an awareness of good teaching attributes including effective interpersonal communication.

Evaluate the audiovisual program of some school, school system, or other unit from the standpoint of the student and the teacher from criteria discussed in class.

Identify the names of individuals responsible for some common communication models discussed in class and provided on handouts and separate these names from a list which would include other unrelated names.
[Computer printout: multivariate and univariate analysis of variance tables. Recoverable headings include factor levels, cell means, multivariate tests of equality of means, tests of the hypothesis of no association between dependent and independent variables, and degrees of freedom for hypothesis and error; the numerical values are illegible in this copy.]

[DATRIX Reference Listing, pages 131-132: a keyword search of Dissertation Abstracts prepared for Jay Smith, using keywords including COURSE*, CURRICUL*, PROCEDURES*, EVALUAT*, and STUDENT. Each entry gives author, title, degree, institution, year, order number, and microfilm and xerographic copy prices; entries marked with an asterisk were consulted in preparation of the study being reported. The individual entries are illegible in this copy.]