VICARIOUS LEARNING PRODUCED BY AN INSTRUCTIONAL SIMULATION: THE EFFECTS OF SELECTED INDIVIDUAL DIFFERENCE VARIABLES AND TELEVISION-MEDIATED OBSERVATION

Dissertation for the Degree of Ph.D.
MICHIGAN STATE UNIVERSITY
THOMAS F. HOLMES
1976

This is to certify that the thesis entitled VICARIOUS LEARNING PRODUCED BY AN INSTRUCTIONAL SIMULATION: THE EFFECTS OF SELECTED INDIVIDUAL DIFFERENCE VARIABLES AND TELEVISION-MEDIATED OBSERVATION presented by THOMAS F. HOLMES has been accepted towards fulfillment of the requirements for the Ph.D. degree in Secondary Education & Curriculum (Instructional Development and Technology).

Major professor
Date

ABSTRACT

VICARIOUS LEARNING PRODUCED BY AN INSTRUCTIONAL SIMULATION: THE EFFECTS OF SELECTED INDIVIDUAL DIFFERENCE VARIABLES AND TELEVISION-MEDIATED OBSERVATION

By Thomas F. Holmes

Recent research has indicated that instructional simulations (IS) can be a more effective method of producing student learning than other common methods such as lecture and reading (Maatsch et al., 1975b). The purpose of this study was to test the generalizability of IS. The study investigated the effectiveness of an IS on the learning of overtly passive observers of other students who actively participated in the IS.

The study investigated the effects of (a) television versus direct observation of an IS and (b) the sex and aptitude of the participating IS student. Dependent variables were observer cognitive achievement and preference for instructional method. Math aptitudes were measured by (a) self-assessed math ability and (b) Michigan State University math aptitude score. The two cognitive dependent variables were defined a priori as concepts and rules. Affect variables were measured by two scales combining ratings of (a) pleasant and exciting and (b) clear and easy.

Subjects were college sophomores in psychology classes who selected the experiment to fulfill a course requirement to participate in research. Subjects were randomly assigned to type of observation. The experiment was replicated 12 times, producing a total sample of 27 direct and 30 television observers. The learning task--Magic Squares--was mathematical in nature. This task was taught directly to a single participating student in an instructional simulation that was designed to be an effective learning environment for that one student. Both observer groups were instructed to learn by observing the simulation but not to discuss or take notes on the task.

Twelve hypotheses were tested at an alpha level of .05. These tests, reviews of relevant literature, and analysis for Type II errors produced the following findings and conclusions:

1. Television observation is not significantly different from direct observation, as measured on the cognitive variables of this study. This assertion is made on the basis of (a) no difference between these factors at a liberal alpha of .20 and (b) no significant difference consistently found in the literature for television versus direct instruction in other settings.

2. Sex interactions between participating students and observing students were not found to be a significant factor in observer cognitive performance.

3. Observers were significantly more satisfied with direct observation compared to television observation, as measured on a pleasant-exciting scale.
4. Assessing observer satisfaction using a clear-easy dependent variable or a sex-interaction factor produced no significant differences.

Conclusions drawn from these findings and relevant literature were as follows: Television observation is not different from direct observation of an instructional simulation, as measured by the cognitive instruments of this study. No attempt was made to generalize this finding of the present study to courses of instruction over longer periods of time. Because students were found to prefer direct to televised instruction on a pleasant-exciting scale, it was reasoned that this preference for treatment might over time eventually be manifested in academic-type performances.

Effects of individual difference variables produced mixed results. This study produced no reason to believe that sex interaction between simulation and observer students was an important factor in observer learning. The effect of ability of the participating model on observers' learning is less clear. Since an effect was noted on only one of the cognitive dependent variables, further research on the effects of the simulation student's ability should be undertaken.

Other implications for research were also noted. Research on larger groups comparing televised IS with other televised methods should be undertaken. By contrasting cost and effectiveness measures, a more definitive assessment of the productivity of televised IS could be attained.

This study has several implications for instructional practitioners and researchers. Observation of a simulation--a class within a class--offers an instructional technique for teachers that will enable them to increase their productivity while retaining some of the presumed advantages of a small-class setting. Designers of instruction should find this a useful method for increasing instructional variety and manipulating variables found to be important in increasing instructional effectiveness and efficiency. For example, mediation offers the potential for student control of pacing of instruction and the capability of serving large numbers of students with one instructional session employing essentially a one-to-one tutorial simulation.

Although many areas are oversupplied with teachers, some are not. For example, in some professions such as medical education it is difficult to attract faculty who command high salaries in private practice. Using the class-within-a-class method could alleviate the need for additional faculty by making more efficient use of those who are currently teaching.

Of general import is the observation that a single self-assessed aptitude item can predict almost as well as a standardized scholastic aptitude battery. Considering the importance of assessing aptitudes and various problems in developing and using standardized instruments, self-assessment may be a much more efficient and almost as effective alternative in well-defined subject matters.

A final general observation of this study is that there may be a function in instruction, specifically in demonstration, that is not well recognized in education--that is, the value of specific performance errors coupled with corrective feedback. It appears that an instructor and a naive student serve relatively unique roles in observers' learning.
The instructor can serve to insure the technical correctness of a performance, whereas the naive student can identify, by his mistakes, the critical instructional needs for students with similar backgrounds. Considerable study must be undertaken in this area to identify the critical variables. VICARIOUS LEARNING PRODUCED BY AN INSTRUCTIONAL SIMULATION: THE EFFECTS OF SELECTED INDIVIDUAL DIFFERENCE VARIABLES AND TELEVISION- MEDIATED OBSERVATION By >4. g» LSF‘ Thomas Ff Holmes A DISSERTATION Submitted to Michigan State University in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY Department of Secondary Education and Curriculum Area Of Instructional Development and Technology 1976 TO my mother and father. ii ACKNOWLEDGMENTS I wish to express my sincere appreciation to both my com- mittee and the Office of Medical Education Research and DevelOpment (OMERAD). Dr. Kent L. Gustafson, committee chairman, was a constant source Of counsel and encouragement throughout my entire doctoral program. I acknowledge also the committee service of Professor Keith Anderson. OMERAD proved to be an exceptional place to work and study. I am especially grateful to: Dr. Howard Teitelbaum for his intellectually stimulating fellowship. Raywin Huang for statistical analysis assistance. Dr. Michael Budd who persevered with me. The remaining faculty and staff of a fine organization. To committee members Professor Jack Maatsch and Dr. Dennis Hoban I would like to give special recognition. Although they were my superiors at OMERAD, from the beginning they made me feel like a colleague on the team. Additionally, Dr. Dennis Hoban has both looked out for me and been a constant source of encouragement. Professor Maatsch has helped me as much as anyone I know, and if I can approach his level of competence during my career, I will consider myself a SUCCESS. TABLE OF CONTENTS Page LIST OF TABLES ......................... vi LIST OF FIGURES ........................ viii . Chapter I. THE PROBLEM ...................... 1 Introduction ..................... l History/Background of VIM .............. 3 The Problem ..................... 7 Purpose of the Study ................. 8 Hypotheses ...................... 9 Background of the Problem .............. lO Rationale for the Study ............... ll Study Limitations .................. 15 Overview of the Study ................ 16 II. REVIEW OF THE LITERATURE ................ 17 Introduction ..................... l7 Productivity in Instruction ............. 17 Economic Terminology ................. 18 Inputs ....................... l9 Outputs and Productivity .............. 22 Human Factors Inhibiting Productivity ........ 26 Observational Learning Effectiveness ......... 28 Observation as a Type of Experience ......... 30 Special Problems in Observational Learning ...... 33 Observational Learning Through Television ...... 36 Summary ....................... 38 III. DESIGN OF THE STUDY .................. 40 Introduction ..................... 4O Instructional Task Analysis ............. 40 Measures ....................... 42 Affective Measures ................. 43 Cognitive Measures ................. 43 Aptitude Measures ................. 45 iv Che Chapter Page Design ........................ 45 Treatments ..................... 46 POpulation Description, Sample Selection, and Sample Assignment .............. 49 Experimental Facilities .............. 5l Television Production ............... 52 Hypotheses ...................... 
53 Data Analysis .................... 55 Summary ....................... 57 IV. FINDINGS ........................ 58 Introduction ..................... 58 Analysis Of Cognitive Dependent Variables ...... 58 Effects on Observers' Cognitive Performance ..... 59 Measures of Student Affect .............. 67 Summary of Findings ................. 72 V. SUMMARY, CONCLUSIONS/DISCUSSION, AND IMPLICATIONS . . . 74 Introduction ..................... 74 The Problem ..................... 74 The Literature .................... 75 Design ........................ 76 Findings ....................... 77 Conclusions and Discussion .............. 81 Implications ..................... 83 Implications for Research ............. 83 Implications for Educational Practice ....... 86 APPENDICES ........................... 88 A. INSTRUMENT FOR ASSESSING STUDENT AFFECT TOWARD INSTRUCTIONAL METHOD ................. 89 B. INSTRUMENT FOR ASSESSING STUDENT COGNITIVE PERFORMANCE . 91 C. INSTRUMENT FOR ASSESSING STUDENT SELF-REPORTED APTITUDE. l02 D. STATISTICAL ANALYSIS .................. lO4 E. DESCRIPTION OF EXPERIMENT AVAILABLE TO STUDENT AT THE TIME OF SIGN-UP ................. ll3 F. PROCEDURAL DIRECTIONS GIVEN BY EXPERIMENTER-INSTRUCTOR . ll5 G GRAPHIC STIMULUS MATERIAL USED BY SIMULATION INSTRUCTOR . ll8 H. NORKBOOK USED BY STUDENT PARTICIPATING IN A SIMULATION . 128 BIBLIOGRAPHY .......................... T32 Table 3.l 4.1 4.2 4.3 4.4 4.5 4.6 4.7 4.8 4.9 4.10 4.11 5.1 5.2 LIST OF TABLES Sample Size in Individual Replication ......... Bivariate Intercorrelation Of the Subbatteries Making Up the Cognitive Dependent Variables ......... Analysis of Covariance for Effect of Type of Observation on Observer Performance on "Concepts" Analysis Of Covariance for Effects of Type of Observation on Observers' Performance on Rules . . . . Analysis of Covariance for Effect of Sex Interactions on Observers' Performance on "Concepts" ....... Analysis of Covariance for Effects of Sex Interactions on Observers' Performance on Rules .......... Partial Correlation Between the Self-Reported Math Aptitude of the Simulation Student and the Cognitive Performance of Observers .......... The Correlation Between the Michigan State University Math Score Of the Simulation Student and the Cognitive Performance of Observers .......... Analysis of Variance for the Influence Of Type of Observation on Student Affect as Measured by A-l . . . . Analysis Of Variance for the Influence of Type of Observation on Observers' Affect as Measured by A-2 Analysis Of Variance for the Influence Of Sex Interaction as Measured by A-l ............ Analysis of Variance for the Influence Of Sex Interactions on Observers' Affect as Measured by A-2 . . Summary of Findings on Two Measures Of Observer Cognitive Performance ................ Summary of Findings on Two Factors of Observer Satisfaction With Instructional Method ........ vi Page 59 61 62 63 64 65 66 69 7O 71 72 80 81 I35 D6 . DI. I18. Table Dl. DZ. D3. D4. 05. D6. D7. 08. Factor Analysis Of Cognitive Performance Variables . . . . Analysis of Covariance for Dependent Variable "Concepts" ....................... Analysis Of Covariance for Dependent Variable "Rules" Bivariate Correlations Between Observer Self-Reported Aptitude and Cognitive Performance ........... Zero-Order Correlations Between Observers' Standardized Scholastic Aptitude Scores and Cognitive Performance ................. Analysis of Variance for Affect Dependent Variable Pleasant-Exciting (A-l) ................ 
Analysis of Variance for Affect Dependent Variable Clear-Easy (A-2) ....................
Factor Analysis of Affect Scales .............
Page 105 106 107 108 111 112

LIST OF FIGURES

Figure                                                          Page
1.1  Comprehension and Retention as a Function of Instructional Method ................. 6
3.1  Distribution of Subjects for Each Design Factor ......... 50
3.2  Floor Plan for Televised Observation (TVIS) Plan ........ 51
3.3  Floor Plan for Direct Observation (DOIS) Group ......... 52

CHAPTER I

THE PROBLEM

Introduction

Many teaching methods such as lecture, recitation, and discussion have a history that can be traced back hundreds or even thousands of years. Interest in simulation/games within public education, on the other hand, is hardly 10 years old (Berliner & Gage, 1976). In this period of time, growing acceptance of simulation/games has been noted. Zuckerman and Horn (1973) pointed out that there was a 50% increase in the number of readily available games and simulations between 1970 and 1972.

The term simulation has a military training background, in which the word tends to take on a product connotation:

Although the concept of simulation has a long military history, a common definition has not yet been agreed upon. As a result, a great diversity of equipment has been tagged with the term simulator (Miller, 1974, p. 5).

Gagne (1961) summarized true simulations as having three characteristics in common: (a) an attempt to represent a real situation in which operations are carried out, (b) a provision for certain controls over the situation representing the real operational situation, and (c) a design that deliberately omits certain parts of the real operational situation. In contrast, Greenblat (1975) emphasized simulation as a process, considering it to be a dynamic model of some criterion system. This author also categorized types of simulation by the purpose they serve. An instructional simulation, then, would be one serving a teaching or training purpose. Shirts (1975) also identified different types of simulations formed by combining the concepts of simulations, games, and contests.

The promise of instructional simulations is related to what is known about learning from direct experience. The power of direct experience in student learning has long been advocated by educators (Dewey, 1916; Bruner, 1960). Significantly, simulation may improve upon learning from direct experience:

. . . Reality may not always provide the optimum experience for a particular educational purpose. Experience in the real situation may be too risky for others, i.e., learning of intubation skills; it may be too expensive, i.e., patients occupying expensive hospital beds longer than necessary; it may be too stressful for the learner, i.e., embarrassment because of lack of skill in interviewing; it is often unpredictable, i.e., patients not showing the same kind of signs or symptoms, although used as the same base in evaluation; and it is often too complex, contains too many variables, too much "noise," i.e., components which are not directly relevant. In addition to those disadvantages, real experience in a situation may be simply unavailable, i.e., emergencies in medicine, or certain types of rare illnesses (Jason, 1974, p. 2).
In learning to perform in unavailable environments, be they rare pathologies in medicine or walking on the moon, simulations clearly are of value. As an alternative to more traditional instructional methods, however, conclusions are not as straightforward. For example, Rosenfeld (1975, p. 290) stated: "Simulation games generally seem no less effective as teaching/learning devices than more traditional methods; they may be more effective."

To interpret this finding it is useful to realize that research on teaching methods has historically produced findings of no significant difference (Dubin & Taveggia, 1968). As Hilgard and McLeish (1968) pointed out, in most school learning studies there is an "equalizer" effect. In these studies students usually learn from printed material as well as from the teaching methods that are being contrasted. Students can and probably do compensate for teaching inadequacies by relying heavily upon textbooks. It would seem important to control the equalizer effect in teaching methods research for a number of reasons:

1. Student time and effort expended in compensating for poor instruction can be considered as an additional educational cost.

2. Students many times are not as efficient in self-directed study as they are in teacher-directed study (Berliner & Gage, 1976).

3. To the extent that conditions in the classroom are relatively unique, students would have fewer opportunities to improve their learning by additional extra-class study.

Significantly, recent programmatic research that controlled for student equalizer effects found that teaching methods consistently differ in producing initial learning and retention (Maatsch et al., 1975b).

History/Background of VIM

The present study is part of a current research program, Variables in Instructional Methods (VIM), supported by the Office of Medical Education Research and Development (OMERAD) at Michigan State University. Therefore, an overview of the development of VIM is pertinent.

Seminal ideas for VIM originated in the summer of 1973 during informal discussions between Maatsch and the writer. The major conclusion of these meetings was that current psychological literature was inconclusive on variables affecting instruction as measured by student outcomes. It was felt, therefore, that a research program such as VIM, designed to develop a theory of instruction, could be useful to various types of instructional developers. Maatsch et al. (1975b) described the program developed for this purpose. Seven empirical questions eventually evolved:

1. With content of instruction held constant, do methods of instruction make a difference in student learning?

2. Do methods of instruction differentially affect performance on various test formats? In other words, will a lecture enhance performance on multiple-choice questions but produce poorer scores on problem solving relative to other methods?

3. Do methods differentially affect long-term retention of material learned?

4. If methods make a difference, which independent variables inherent in those methods produce the difference?

5. Can we increase the effectiveness or the efficiency of any method by manipulating the key variables inherent in that method? In short, can we design more cost-effective methods?

6. Which individual difference variables affect learning outcomes and how do they interact with methods? In other words, are there aptitude-treatment interactions?
7. Finally, how important are method variables and individual difference variables relative to each other?

To test these questions and others, a mathematical puzzle--Magic Squares--was chosen as the cognitive task. This particular task was selected because it fulfilled a number of important requirements:

1. It was possible to control for entry-level knowledge.

2. The complete task could be taught in 10 to 30 minutes.

3. The task lent itself to all of the different common instructional methods (i.e., simulation, observation, seminar, lecture, programmed instruction, and reading).

4. Comprehension and retention of the (three) concepts and (six) rules involved in construction of a magic square could be directly tested.

5. Student ability to apply these concepts and rules in problem-solving test formats could be assessed (Maatsch et al., 1975b).

As mentioned earlier, Maatsch found that selected instructional methods do consistently rank order themselves in producing student learning. The relative effectiveness of the methods studied in VIM for both immediate comprehension and long-term (one month) retention is displayed in Figure 1.1.

[Figure 1.1. Comprehension and retention as a function of instructional method. The figure plots percentage of total score at 5 minutes and at 1 month for six methods: simulation, observation (of a simulation), seminar, programmed instruction, reading, and lecture.]

The simulation was characterized by one instructor (experimenter) interacting with one student in the following procedure:

1. Initial instructional stimulus was presented to the student.

2. The student was queried to check his comprehension of the initial material.

3. If the student responded incorrectly, he received additional information and was coached until he responded correctly.

4. When the student responded correctly the cycle was reinitiated with the next element of the learning task. These elements were sequenced to correspond to the steps normally used to accomplish the task.
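The Magic Squares task and the four-step tutorial cycle just described lend themselves to a brief illustrative sketch. The dissertation does not reproduce the actual three concepts and six rules taught in VIM, so the code below checks only the defining property of a magic square and expresses the coached-practice cycle in schematic form; the function and parameter names (is_magic, run_simulation, present, query, coach) are hypothetical, not drawn from the study materials.

```python
# Illustrative sketch only: a minimal magic-square check plus the
# four-step tutorial cycle in schematic form. Names are hypothetical.
from typing import List


def is_magic(square: List[List[int]]) -> bool:
    """Return True if every row, column, and main diagonal sums to the
    same magic constant."""
    n = len(square)
    magic = sum(square[0])
    rows = all(sum(row) == magic for row in square)
    cols = all(sum(square[r][c] for r in range(n)) == magic for c in range(n))
    diag = sum(square[i][i] for i in range(n)) == magic
    anti = sum(square[i][n - 1 - i] for i in range(n)) == magic
    return rows and cols and diag and anti


def run_simulation(task_elements, present, query, coach):
    """Four-step cycle: present a stimulus, query the student, coach on
    errors until the response is correct, then move to the next element."""
    for element in task_elements:
        present(element)            # step 1: initial instructional stimulus
        while not query(element):   # step 2: check comprehension
            coach(element)          # step 3: additional information/coaching
        # step 4: cycle reinitiated with the next element


# Example: the classic 3x3 magic square (constant 15) passes the check.
print(is_magic([[2, 7, 6], [9, 5, 1], [4, 3, 8]]))  # True
```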
The Problem

The VIM research strongly suggests that instructional simulation can be more effective than other common forms of instruction. Nevertheless, simulations using one instructor with one student obviously are not practical in most instructional situations. The increased effectiveness of this method is countered by its apparent high cost.

Unfortunately, there is increasing evidence that educational costs are becoming more difficult to meet. To this point, the Carnegie Commission on Higher Education (1972b) predicted that the recent historical trend of increasing the percentage of GNP allocated to higher education has run its course. Further evidence of public resistance to increased educational expenditures is seen in school bond failures, and in the Performance Contracting (see Mecklenburger, 1972) and Accountability movements (see Lessinger, 1970). An immediate problem is that simulation as described in VIM must be used in a way that reduces its costs if it is to become a feasible instructional tool.

Significant to this problem is an instructional technique that the VIM research simply called "observation," in which two students would watch the interaction between the simulation participant and instructor. These observers would not overtly participate in the instruction but were asked to try to learn as much as possible simply by watching the simulation. In post-treatment testing of student learning, the observers' performance looked much like that of the active simulation participants.

Since the observers were apparently acting independently during instruction, it would seem reasonable that the actual size of the observation group would not be a critical variable in the observers' learning. However, large observation groups could positively affect faculty-student ratios and hence reduce costs. This approach has been used with lectures and demonstration to reduce instructional costs (Simpson, 1972). What the VIM research suggests, however, is that the learning of observers in large groups can be improved if they watch a more powerful instructional session, i.e., a simulation rather than a lecture-demonstration. As will be discussed below, utilization of technology may provide the key to furnishing cost-effective observational learning.

Purpose of the Study

The purpose of this study is to test the generalizability of observation of a simulation. This test addresses two issues in the effectiveness of using this technique: effectiveness as a function of technological means of increasing group size and effectiveness as a function of different simulation students being observed.

Technology in the form of television offers a dramatic means of increasing the size of observation groups. Optimal and consistent observation orientation can be presented to a virtually unlimited number of students. Television instruction has been used extensively since the early 1950's. Studies of television instruction generally have concluded that observation by means of television is no different from direct observation in learning information (Chu & Schramm, 1967; Dubin & Hedley, 1969). However, Maddox (1970) in his review maintained that TV lectures are inferior to classroom lectures in communicating information, but that the differences are probably not great.

The second issue--effectiveness as a function of the characteristics of the student participating in the simulation--is concerned with the potential effect of different instructor-participant variables on an observer's learning. Two such student variables that have been found to be important in classroom studies are sex (Dunkin & Biddle, 1974) and aptitudes (Kerlinger, 1975).

Hypotheses

The purpose of this study in context with relevant literature leads to the following general hypotheses:

I: Observers' cognitive performance will be significantly superior in live as compared to televised observation.

II: Observers' cognitive performance will be significantly better when the simulation participants and their respective observers are of the same sex as compared to when they are of opposite sexes.

III: Observers' cognitive performance will be significantly and negatively correlated with a self-reported aptitude of students being observed in a simulation.

IV: Observers' cognitive performance will be significantly and negatively correlated with a standardized scholastic aptitude score of the students being observed in a simulation.

V: Observers' satisfaction with instructional method will be significantly superior in direct as compared to televised observation.

VI: Observers' satisfaction with instructional method will be significantly better when the simulation participants and their respective observers are of the same sex as compared to when they are of opposite sexes.
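Chapter IV reports analyses of covariance for the effects of type of observation and sex interaction and correlations for the aptitude hypotheses (see the list of tables). As a hedged sketch only, the code below shows how an analysis of covariance for Hypothesis I might be set up; the file name, column names, and the choice of the observer's math aptitude as the covariate are assumptions for illustration, not a record of the study's actual computations.

```python
# Hedged sketch of the kind of ANCOVA reported for Hypothesis I.
# "observers.csv", the column names, and the covariate choice are
# assumptions for illustration, not the study's actual data or code.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("observers.csv")  # one row per observer (hypothetical file)

# Concepts score modeled by type of observation (direct vs. televised),
# adjusting for the observer's math aptitude as a covariate.
model = smf.ols("concepts ~ C(obs_type) + math_aptitude", data=df).fit()

# Type II ANOVA table; the C(obs_type) row gives the F test compared
# against the study's alpha of .05.
print(sm.stats.anova_lm(model, typ=2))
```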
Background of the Problem The immediate problem of increasing the efficiency Of an instructional method can perhaps be better understood in the context of a more general concept of productivity. Economists define produc- tivity as the value of outputs or products relative to the inputs or cost incurred producing these outputs. When outputs and inputs are Of the same metric, such as both being assessed in dollars, productivity can Simply be determined by dividing the price of outputs by the price of inputs. If instruction can be considered a production pro- cess, then productivity should be a meaningful way to assess that process. Unfortunately, in service industries such as education, pro- ductivity is difficult to measure (Gross, 1964). Furthermore, in education in particular there is variation and even confusion about the meaning of productivity (Harrison & Stolurow, 1975; Scanlon & Weinberger, 1974). In spite Of these difficulties, a major strategy for increas- ing instructional productivity has evolved. This strategy is based on two bodies Of literature-~methods effectiveness and effects of class si :r- ~—‘ r.- in; pro PFC I-rq l”, of i $111: that 1 11 size. Dubin and Taveggia (1968) summed up the research on teaching methods: In the foregoing paragraphs we have reported the results of a reanalysis of the data from 91 comparative studies Of college teaching technologies conducted between 1924 and 1965. These data demonstrate clearly and unequivocally that there is no measurable difference among truly distinctive methods of college instruction when evaluated by student performance on final exami- nations (p. 35). Although this finding is disappointing from the perSpective of design- ing instruction, if it is true it does simplify the instructional productivity issue. For if outputs do not vary with teaching methods, productivity of instruction is simply a function of the costs of inputs. Since education is a labor-intensive industry, the major cost Of instruction is faculty salaries. It follows, then, that faculty/ student ratios are a major factor in productivity. The second relevant body of literature--the effects Of class size--provides the rationale for increasing student/faculty ratios to improve productivity. In his review of the literature, DeCecco (1970) concluded that performance differences are Usually not found between classes of 30 or more. The general conclusion that teaching methods and class size do not make a difference is the rationale for large-group lectures (200-300 students) and for the televised lectures and demonstrations that can be seen on most major campuses today. Rationale for the Study As noted above, the VIM research casts doubt on the assumption that instructional methods do not make a difference. As a consequence, bU‘ til pI‘l .3” I. F. .Pfifli (a!) 3 Us. OS! Stu ch: C0! VIE Iee 12 maximum productivity would appear to be dependent not only upon cost but on the effectiveness of the instructional method, with Observa- tion of an instructional simulation appearing to be the Optimally productive instructional technique. A crucial factor in this argument is the effectiveness of student learning produced by overtly passive Observation. Research has demonstrated that Observational learning can be effective, pro- vided that certain conditions for Observer learning are met. Litera- ture on observational learning goes back at least to Miller and Dollard (1941). Beginning in the 1960's, Bandura began testing the assumption that learning required overt action. 
Bandura (1969) demon- strated that both live and mediated models could produce powerful changes in subsequent affective behavior of passive observers. As a consequence of his work and public interest in the effects of tele- vision on children, research began in the late 1960's on passive learning of cognitive knowledge (Zimmerman, 1975). Passive processes are not only more efficient, but in many instances they are the most effective means of attaining specific Objectives. For example, Powell (1966), a defender of the lecture method, argued that students in initial instructional stages of learn- ing are given content and many times do not know enough to act intel- ligently. In these stages much student guidance may be required and a demonstration or presentation of information can be highly useful. Wood et a1. (1975) supported this concept with their theoretical con- tention that comprehension must precede performance. Erro what the tion certs 81110! "New COgnj LEFIO 13 Lack of knowledge can produce other problems in learning. Although direct experience can be highly instructive, the risks may be prohibitive. To control the physical risk to students and even risk to the environment from the student, various levels of simula- tion may be required before the student is confident or capable of real-world performance. Just as there can be unacceptable physical risk in learning, there can be a psychological risk as well. This is suggested by the high correlation between drop-out rates and low school performance. High failure rates can have deleterious effects on students' motiva- tion. To maintain motivation in programmed instruction units, program error rates are designed to be below 10 to 20% (Gilbert, 1962). But what this approach may gain in terms of positive student affect toward the program could very well be at the expense Of cognitive informa- tion. Errors can be instructive. Science itself progresses to a certain extent from negative information or known errors (Kuhn, 1972; Simon, 1969; Pratt, 1963). Observational learning Offers a technique whereby the error rate in a program might be increased to increase cognitive learning effectiveness without the errors having a dele- terious effect on the students' self-esteem. The simple contention is that, under certain conditions, stu- dents can profit from experiencing the learning errors made by other students. A critical qualification is that the reasons why certain responses are not correct must be made explicit to the observer/ learner. As is detailed in the next chapter, natural environments "(£3 Iar Thi: ing tati COSt IEre defii Simpf done Setti Dart)< °b$er( 14 present a number of obstacles to learning by Observation (Olson & Bruner, 1972; Zimmerman, 1975). Apparently, instructional simulation such as the one designed by Maatsch et al. (1975b) meets many of the needs of Observational learners. If the student participating in the instructional simula- tion is representative Of the Observer/learners, the first student may serve as a surrogate for the second. As the teacher adjusts his instruction to ensure that the participaing student is learning, instruction is also being Optimized for Observers. Additionally, one could speculate that as the participating student is encouraged to justify his actions or describe why he performed in a certain way, the critical learning alternatives would become explicit for observers. 
In the spirit of scientific evidence, the rational case for large-group Observation Of a simulation requires empirical validation. This study is an initial exploratory test of the technique. Provid- ing that the findings are encouraging, further, more costly experimen- tation could then be considered. More specifically, this is not a cost study; neither does it compare large Observer groups across dif- ferent instructional methods, both Of which are required to make definitive statements about productivity issues. Rather, the study simply addreSses the questions: (a) What is lost when Observation is done by means of television rather than being directly in the classroom setting? and (b) How important are the characteristics of students participating in the simulation to the learning of their respective Observers? kee; limi inpr cost cost effe Iear expe conti evali recal USefL Conte Subje Dartic UIFect 15 Study Limitations An empirical study of this sort cannot be all things. In keeping this study to a reasonable size and scOpe, the following limitations were identified. 1. Although the underlying rationale for this study is to improve the productivity of instructional simulations, it is not a cost study. A more definitive productivity study would involve direct cost-effective analysis.. 2. The present study does not assess over-time instructional effects typical of classroom environments because it is a "one-shot" learning encounter. For example, the motivational effects of this experiment on subsequent student activities were not measured. In context, however, the larger VIM program (Maatsch et al., 1975b) evaluated the effects of post-treatment student activity on delayed recall. 3. Although the instructional content (magic squares) has useful research characteristics (p. 5), transfer Of findings to other content areas is not empirically tested in this study. 4. It is assumed that the performance Of the quasi-volunteer subjects in this study is generalizable to more typical students. 5. It is assumed that the small size (one to five) of the observation groups is not a critical variable, since the Observers learned independently; i.e., they did not interact among themselves. 6. The quantity and quality of errors made by the active participating student in the instructional simulation was not assessed directly. Rather, it was inferred that the simulation student's tiV‘ Des< In I 16 demonstrated aptitude for the task directly correlated with errors he would make in learning the task. Overview of the Study Chapter 11 consists of a discussion and review of produc- tivity in education and a review of Observational learning research. Described in Chapter III are the design and analysis of the study. In Chapter IV the findings are reported. A summary and conclusion as well as a discussion of the findings are included in Chapter V. with first is co that ducin Instr cates use t1 Resear PSYChc Iftera I0 Pro MServ ness a encing CHAPTER II REVIEW OF THE LITERATURE Introduction In the broadest of terms, an instructional designer is concerned with two issues: efficiency and effectiveness of instruction. The first of these is directly related to'economic productivity; the second is concerned largely with learning/instructional theory. Reviewed in this chapter are two general bodies Of literature that deal with these issues. To identify the major variables in pro- ducing efficient instruction, literature in economics, education, and instruction dealing with productivity is reviewed. 
This review indi- cates that much of the instructional technology literature does not use the theoretical constructs developed to eXplain productivity. Second, literature on Observational learning is reviewed. Research is cited that demonstrates children can learn affective, psychomotor, and cognitive behaviors from watching a model. This literature indicates that natural environments can be greatly improved to produce changes in passive observers.' This section also shows that observational learning Offers the potential to increase the effective- ness as well as the efficiency of instruction. Productivity in Instruction As stated in Chapter I, educational institutions are experi- encing demands for increased productivity. It was shown that a major 17 sol nol bet com The: ecor is t whet UEVe' "Hut 2957) 18 solution to this problem is thought to be the increased use of tech- nology. It was also noted, however, that technology has yet to ful- fill the promise Of increasing instructional productivity (Minow, 1970; Armsey & Dahl, 1973). Scanlon (1974) evaluated the situation as follows: In these days of increasing demand for accountability with regard to instructional outcomes, and a simultaneous leveling of financial resources made available, it would seem that many more institutions should be turning to the wise use Of techno- logical aids to instruction. Their failure to do so in the past most certainly supports the notion that a fundamental reexamina- tion should be undertaken of the relationship of technology to education at all levels (p. 1). One approach to such a re-examination is to pose two questions: (a) How.does technology influence productionand why might it be any- better than other inputs to the process? and (b) How is productivity computed and what are the conditions conducive to its maximization? These questions have been dealt with most extensively in the field of economics. Cohn (1972, p.1), paraphrasing Samuelson, said: "Economics is the study of the production and distribution Of all scarce resources-- whether physical goods or intangible services that individuals desire." Educators have strongly advocated the application of economic analysis to the education industry (Roger & Ruchlin, 1971; Tollett, 1970; Roger & Jamison, 1974). Economic Terminology, Essentially, productivity is a subset Of economic growth or develOpment. Whereas growth is concerned with only the absolute mag- nitude of output, e.g. the size Of the Gross National Product (Abbott, 1967), productivity relates this output in some way to the cost of prod nean fer, in d reso The ' used nary nach‘ tone; Feola This able rises Insen outpu Der m (AUDOT StdIe 19 production, i.e. an input-output analysis (the most common form is by means of some type of ratio). Since the costs of various inputs dif- fer, it is the “mix" Of inputs used to produce an output that results in different productivities or economics. Inputs Economists list from three to five different inputs or resources--land, labor, capital, enterprise, and technological progress. The two primary resources are land and labor, land being a broad term used for all natural resources, such as minerals and water. The pri- mary resources of land and labor can combine to form capital, e.g. machine tools and equipment.‘ (The use Of capital as synonymous with money is not relevant to this discussion.) 
To these three resources some economists add technological change or progress (Samuelson, 1970), which, simply stated, is improvement in methods or procedures for put- ting other resources together. Generally, productivity increases as primary resources are replaced by capital, and as both can be technologically improved. This is because the primary variables--land and labor--involve "vari- able costs"; i.e., as the level Of output goes up the cost of inputs rises even faster. Capital costs, on the other hand, are relatively insensitive to increases in outputs. As a consequence, increasing output in capital-intensive production can actually decrease the cost per unit Of output. This phenomenon is known as economics of scale (Abbott, 1967). However, extensive use Of capital to achieve economics of Scale still involves costs, quite frequently high start-up costs. Tec ser I‘ES exi onl ita 1‘99 use ITIV UP ris teC Spe IIVi Oct 20 Technological change (new knowledge or information) in the technical sense has the advantage of being free. That is, economically scarce resources are not consumed by using new information. Technology can be used to achieve three goals: (a) introduce new products, (b) improve existing products, or (c) change an input-output ratio. Of these goals only the third is technically an improvement in productivity. Despite the advantages, certain forces inhibit the use of cap- ital and technology. For example, to finance initial development cost requires that one defer gratification. That is, money that could be used for immediate consumption Of goods must instead be used for investment. Second, each acquisition Of capital items involves start- up costs. Third, and perhaps the most important, is the element of risk--the investment might not pay off. And finally, in the case of technological progress, there is the uncertainty of not being able to specify precisely when the develOpment will be successful. Unfortunately, there is evidence in the instructional technol- ogy literature that the term technology is used to denote what econo- mists would call capital improvements. Sattler (1968), writing on the history of the instructional technology movement, called this the "equipment concept." An illustration is: Instructional technology can be defined in two ways. In its more familiar sense, it means the media both Of the communica- tions revolution, which can be used for instructional purposes alongside the teacher, textbook, and blackboard. In general, the Comnission's report follows this usage '(Snider, 1970, p. 21). Of course, using labor-saving devices could improve produc- tivity. Unfortunately, in education capital has generally been used, not as a replacement for existing methods but as an adjunct to these nolo 21 methods. (The Carnegie Commission on Higher Education, 1972a; Tickton, 1970). It is not unusual to find media equipment and other techno- logical devices gathering dust in the public schools. A second but less familiar definition of instructional tech- nology does exist: In this sense, instructional technology is more than the sum of its parts. It is a systematic way of designing, carrying out, and evaluating the total process of learning and teaching in terms Of specific Objectives, based on research in human learn- ing and communications and employing a combination of human and nonhuman resources to bring about more effective instruction (Snider, 1970, p. 21). It is informative to note that this definition is almost exclusively concerned with effectiveness. 
There is no explicit statement of concern with economics, productivity, or efficiency. It is true that in the systems-analysis literature concern is expressed for the selection of alternative methods that achieve Objectives more efficiently (Anderson, 1975). However, the use of terms such as "a systematic way of designing," "systems," "general systems analysis," and "systems approach" is quite loose in education (McDonald-Ross, 1972). Briggs (1974) cited products of "system" design. Fortunately, these examples have begun to generate critical evaluation. Scriven (1975) expressed doubt that they are worth their cost, and Haggerty (1974), calling these programs "student centered," wrote: I certainly would hope that one Of the R 8 D laboratories concentrating on student-centering, such as the Wisconsin Research and Development Center for Cognitive Learning, will expand its R & D efforts to include work on improving individual productivi- ties so that these conceptions can be developed to the point where they become a built-in part of the entire approach (p. 14). be! Ara 7; stu the bece fits IGrc 22 Outputs and Productivity Since the subject of outputs in education presents special problems in computing productivity, these tOpics are considered together. Productivity is concerned with the economic principle of return on investment: (a) Can the same benefits be obtained at less cost? or (b) Can increased benefits be had at the same cost? (Samuelson, 1970). Here benefit is synonymous with Output, and cost is used for inputs. The most general productivity analysis--cost- benefit--would permit both benefits and costs to vary concomitantly. Levin (1970) conducted a study entitled "A Cost-Effective Analysis Of Teacher Selection," which is an example of this approach. The effects Of teachers' verbal ability and years of experience on student achievement were assessed. Levin found that the former was the more powerful predictor of students' subsequent verbal achievement. In education a cost-benefit study such as Levin's is rare because of a number of difficulties. One problem is comparing bene- fits directly to costs, since both must be measured on the same scale (Gross, 1964). Levin compared an input (teacher's verbal ability) to an output (student's verbal ability), both of which were measures of the same skill. This analysis, although useful, can be highly mis- leading because it is only a "partial productivity" analysis. There are both additional cOsts and benefits to any educational process, beyond achievement, that could significantly change the total picture. To compute total productivity requires some means of aggregating both multiple outputs and inputs (Gross, 1964). This is most commonly done by using prices or monetary costs. the drc 23 Woodhall and Blang (1970), in the article "Productivity Trends in British University Education, 1938-62," dealt with the problem of comparing various educational outputs. They constructed three sets of index measures--cultural, educational, and economic--and noted that they were roughly comparable. That is to say, all measures showed a drop in productivity over the time considered. But the preceding study is relatively macro-economic in scale, wherein the market places help supply dollar values to outputs. Such advantages are not as available in such areas as instructional design. What is the dollar advantage of a 90 versus a 70 on an achievement test, for example? 
Because educational outputs are so difficult to quantify, a productivity analysis is generally "cost-effective rather than costebenefit" (Anderson, 1975). Here outputs are held or assumed constant and an investigation is made to determine the least expensive alternative strategy. Wilkinson (1973), in his article "Cost Evaluation of Instruc- tional Strategies," provided a useful review and analysis of costing. The author pointed out in his "cost-benefit decision model" that strate- gies can vary in their relative economics at different student popula- tion levels. However, Wilkinson used the term benefit in two distinct ways: first, as the level Of output attainable from a strategy which he assumed is given orconstant(cost-effective analysis) and second, as the difference between the benefit and input. This latter concept is more generally called efficiency or gain (Anderson, 1975; Rogers & Rucklin, 1971). 24 The foregoing studies illustrated the difficulty in pricing and aggregating outputs, which is necessary in a cost-benefit analysis. Outputs are not only difficult to quantify usefully, but a strong belief exists that instructional benefits are relatively stable. For example, Dubin and Taveggia (1968), in their comprehensive review of instruction, which aggregated 91 studies over 40 years, concluded: Increasing attention will be demanded of college and univer- sity administrators to the cost-benefit analysis of various teaching methods. Up to this point, the "benefit" portion of cost-benefit analysis has largely depended upon private Opinion and prejudice. We think that we have demonsrated in this mono- graph that the usual prejudices regarding preferred college teaching methods are no longer acceptable as bases for alleging the benefits of particular teaching technologies. Indeed, since there are no differences among a wide range Of teaching technologies we may assume that their respective benefits are equal. This, then, turns the attention in cost- benefit to the cost side of the issue [or to cost-effective analysis] (p. 49). The implication of this analysis for student-teacher ratios appears to be straightforward--use techniques such as large-class lectures. Suppes (1974), however, stated that historically educational research has been decision rather than conclusion oriented; i.e., studies have not been designed to accumulate knowledge. Capital-intensive media in the form of electronic transmitters have great potential for reaching large audiences. Of these media, instructional television may be capable of the broadest application. Unlike radio, it carries the dominant human sense of sight; and unlike CAI, it is not as dependent upon development in teaching and learning theory. Jameson, Suppes, and Wells (1974) presented a recent review of three surveys of the comparative effectiveness of ITV. They synOp- sized these studies as follows: Bill“: to 1 Stu: 25 Chu and Schramm surveyed 421 comparisons Of ITV and TI (Traditional Instruction)(one teacher in a class of about 20-40 students) that are reported in 207 separate studies. Their results indicate that students at all grade levels learn well from ITV, though this seems somewhat less true for older stu- dents than for younger ones. The effectiveness of ITV cuts across virtually every subject matter. Dubin and Hedley pro- vided a more detailed survey of the effectiveness Of ITV at the college level. 
They reported on 191 comparisons of which 102 favored ITV and 89 favored TI, although most of the differences were insignificant at the standard levels of statistical signifi- cance. When data were available Dubin and Hedley extended their comparisons to include the distribution of statistics of the individual comparisons of ITV and TI: in this way it was pos- sible to weigh apprOpriately differences in performance Of differ- ing degrees of statistical significance. The results of this analysis, applied to all their data, indicated a slight, but statistically significant difference in favor Of TI. When studies Of two-way TV were dropped from this sample, the overall compari- son yielded a small, statistically insignificant advantage for TI. An unusually stringent criterion for interpretability Of results was utilized by Strickell in comparing ITV to TI, and it is worth commenting on his survey here. After examining 250 com- parisons of ITV to TI, Stickell found ten studies that fully met his requirements for adequate controls and statistical method (interpretability) and 23 that partially met his requirements. Schramm provides clear tabular summaries of these studies. None Of the fully interpretable studies and three of the partially interpretable ones showed statistically significant differences; each of the three statistically significant cases favored the ITV group. It should perhaps be noted that when highly stringent controls tend to force the methods Of presentation into such simi- lar formats that one can only expect the "no significant differ- ences" that are in fact found. When ITV is used in a way that takes advantage of the potential the medium offers-~as, perhaps, with Sesame Street--we would expect more cases of significant differences between the experimental group and the "alternative treatment" (for it would not be a "control" in Stickell's sense group (pp. 34-36). Literature on the comparative effectiveness of ITV was reviewed by Caffarella (1973).' He attempted to show at what point the student enrollment level necessary for the cost Of using the medium was equal to the cost Of equivalent courses taught by one instructor for every 30 students. As the initial capital expenditure goes up, so does the C0 be pe 26 break-even point, with the minimum break-even point in a simple closed-circuit system being from 200 to 500 students. Human Factors Inhibiting Productivity Failure to maximize productivity may have little to do with the capacity to use resources well. In service industries and to a lesser extent in goods production as well, producers themselves can control the innovation process (Gross, 1964). Consequently, it would be prudent to expect resistance to any technology that would put people out of work (Sisson, 1974). Frequently, the objectives of an instructional program pre- clude efficiency. For example, socialization of children and develop- ment in learners of the ability to work with peers and adults may limit the amount of individual study desirable (Hoban, 1973). Additionally, demands for equity in educational Opportunity may require diversion of resources to pOpulations requiring substan- tial development. It is significant, however, that increased produc- tivity has been a classic way Of easing inequalities (Gross, 1964). Because vested interests are unlikely to give up their piece of the pie, increasing the size of the output has been a traditional way of accommodating the disadvantaged. 
One might expect, also, that the methodological errors typi- cal of this research literature would indicate that insufficient power is available in the analysis to go beyond failure to reject the null to assertion of the null hypotheses. Despite these reasons for doing cost-effectiveness studies, such analysis is subject to a significant danger. As in other service areas, education is subject to slipping II he CC fi' na re fo ti eO we' ins Ins V15 9091 tele 27 quality of output (Gross, 1964). If, in fact, output does vary but it is not being measured, efforts to reduce the cost Of inputs could have a deleterious effect on output. Finally, the concept of risk should be considered. Innova- tions that are just slightly better than contemporary practice have a poor chance of being adapted. Some authorities have stated that cost reductions of new technologies must be at least 5:1 to be justi- fied in terms of an acceptable return on investment (Sission, 1974). In summary, the concept of productivity has been shown to depend upon the degree to which primary costs such as labor and natural resources can be replaced by capital and technological prog- ress. The review indicated that of these inputs, capital, in the form of media equipment, is the clearest example of increasing produc- tivity in instruction. It was noted that in instructional technology, economic concepts pertaining to the problem Of productivity are not well Operationalized or possibly even understood. Individualized instructional systems may increase effectiveness, but are costly. Capital investment seems to be best suited to reducing cost. What appear to be lacking are approaches that address concomitantly both factors of productivity-~outputs and costs. Currently, the best approach to increasing productivity in instruction is the use of capital-intensive technology such as tele- vision. By spreading relatively fixed costs over a large student pOpulation, and at the same time maintaining effective instruction, television can increase productivity. Vio gra. pre. onl 28 Observational Learninngffectiveness It is apparent that one way to increase the efficiency of instruction is to utilize observational instructional systems. An important issue in the use of these methods is their effectiveness. The intent of the following review is to outline some Of the recent literature on theories Of human learning and to point out variables that are important in designing Observational learning systems. Although many theories Of learning have been postulated, beha- viorism may be the most easy to identify in actual instruction. Pro- grammed instruction, contingency management, and individually prescribed instruction are examples of instructional techniques based on behavioral learning theory. However, acceptance of behavioral theories in explaining human performance has not been universal (Rogers, 1969). Increasingly, voices from a variety of disciplines, among them communications, have added new insights to learning: When we speak about the processes of learning we usually talk about motivation, practice, achievement, new skills or insights attained--we usually talk, that is, about learning as active and purposive behavior. We think of it as the province of school and classroom. 
We know that there are other, more passive kinds of learning, but we focus less on these, in part because they are presumed to be less effective, in part because they have been less noticeable--at least until the rise of the mass media, especially the electronic media (Krugman & Hartley, 1970, p. 184). In higher education, McKeachie (1974) noted that mature stu- dents can apparently learn in the absence of variables (e.g. feedback) behaviorists have thought to be necessary to learning. And from the discipline Of psychology itslef, alternative explanations of learning 29 are gaining acceptance (Binder, 1974; Bandura, 1969; Maatsch, 1975a). Recent research has suggested that such fundamental behavior- istic principles as overt responding and contingent reinforcement are not so much ineffective as they are unnecessary to learning. One of the fundamental means by which new modes Of behavior are acquired and existing patterns are modified entails modeling and various processes. Indeed, research conducted within the framework of social-learning theory (Bandura, 1965; Bandura & Walters, 1963) demonstrates that Virtually all learning phenomena resulting from direct experience can occur on a vicarious basis through Observation of other persons' behavior and its conse- quences for them (Bandura, 1969, p. 118). Bandura's position would seem to have direct implications for instruction--he described modeling procedures as "ideally suited for effecting diverse outcomes . . . on a group-wide scalei (1969. P- 113)- Zimmerman and Ghozeil (1974) provided a straightforward defi- nition Of modeling as "a group of stimuli that serve as an example or a pattern" (p. 441). The authors maintained that modeling research has had a tremendous impact on psychological theory: Before the current interest in modeling, a large movement was afoot under the banner Of behaviorism which attempted to describe learning without referring to covert thought pro- cesses. . . . Modeling research has forced behaviorists to recognize the fact that the human organisms can and do "mediate" or think and that explanations Of human behavior that do not take mediation into account will be less effective than explana- tions that do (p. 444). Although theoretical interest in vicarious learning is rela- tively recent, passive learning processes have probably always been assumed in education. Indeed, the fact that information relevant to action can be acquired through means other than direct action is what makes instruction possible (Olson & Bruner, 1972). The importance of 30 investigating this phenomenon is that researchers are now producing findings Of direct relevance to instruction. Observation as a Type of Experience Programmatic research on Observational learning Of affects7 was initiated by Bandura in the early 1960's. About 1970, work in this area began on cognitive variables (Zimmerman, 1975). From an instructional perSpective, Olson and Bruner (1972) produced a frame- work that is useful for comparing learning by Observation to other major types Of learning experiences. In their article, "Learning Through Experience and Learning Through Media," the authors described three categories of behavior from which subjects may extract infor- mation--contingent experience, Observational learning, and symbolic systems. 
Basic forms Of instruction are directly related to these categories; for contingent experience, the student learns by doing and the instructor manages an environment; for observational learn- ing, the student learns by matching and the instructor demonstrates with some feedback. And in symbolic systems, the student learns by being told and the instructor provides facts, descriptions, and explanations. Of these three learning experiences, observational learning many times is an ideal method Of compensating for inherent weaknesses of the other two. The primary conditions of learning through contingent experience--self-initiated action and direct knowledge of results-- have been demonstrated by learning theorists from Thorndike (1932) to Skinner (1954). This type of learning is unquestionably the most tre son he the reg} flat that then desc' lean to a\ tive tatio 37519! Is ins “F de 31 general of the three categories. Probably all organisms learn by contingent experiences. It is through applying learning principles that Skinner was able to communicate with pigeons (Olson & Bruner, 1972). The authors maintained that the major disadvantage of contin- gent learning experience is ambiguity. Yelon (1975), speaking of the need to specify instructional objectives, related a story that illus- trates the problem of ambiguity. A father was attempting to get his sons to quit swearing in school. At breakfast he asked one boy what he wanted to eat. The boy said, "Give me some of those damn corn flakes." The father proceeded to knock the boy off the chair. Next, the father asked the second boy what he wanted to eat. The boy replied, "I don't know, but I sure don't want any of those damn corn flakes." The moral of the story is that an important way to make sure that human learners discriminate correctly is, where possible, to tell them what stimuli to attend to. In general, the less language used to describe or define the task or stimulus, the more management of the learning situation is required. Tasks must be Specially organized to avoid confounding relevant with irrelevant stimuli. Language or symbolic systems often can be much more effec- tive than contingent learning. But language also has inherent limi- tations. First, of course, the learner must be skilled in the symbol system. Second, language is powerful fOr rearranging concepts but is insufficient for providing new experiences. For example, how does one describe a wheat field to a blind person? This deficiency of 32 "real world" experience makes language instruction less than ideal for transfer to real-world situations. These are but the most obvious limitations of two types of learning experiences. Observational learning is a more general experience that can incorporate the strengths of other experiences as well as make unique contributions to the efficiency and effec- tiveness of instruction. Numerous descriptions are generally subsumed under vicarious phenomena--modeling, imitations, observational learning, identifica- tion, copying, vicarious learning, social facilitation (Bandura, 1969). The present study focuses especially on social modeling--the learning that is possible from observing the performance of others. The immediate advantage of modeling is that it can incorporate ele- ments of other learning experiences. For example, reinforcement of a model affects observers' performance for both affective and cogni- tive behaviors (Bandura, 1969; Zimmerman, 1974). And models obviously can supplement demonstrations with verbal descriptions. 
But modeling can do more than offer a convenient method of combining direct and symbolic exposure. Modeling facilitates both response learning and transfer. Baron and Meyer (1974), in review- ing social learning from media, quoted Maccoby (1954): Media provides a child with experience which is free from real-life controls so that in attempting to find solutions to a problem he can try out various modes of action without risking injury or punishment which might ensue if he experiments overtly (p. 239). It is clear, also, that observing novel responses of a model should be much more efficient in expanding the number of student responses, ES} SUE $61 thl RO: tfi thi obse III N cal .- dGOOr 33 especially in comparison to the trial-and-error learning of responses suggested by learning from direct contingent experience. Modeling also appears to facilitate the achievement of a second general goal--the ability to transfer or generalize. In one of the few studies of adults' observational learning, Chalmers and Rosenbaum (1974) found that observers were superior to performers on transfer on a reversal-shift task. This advantage could not be explained as a function of original learning, since the observers equaled the performers on the original task. The researchers postu- lated that observational training entails a relatively reduced degree of associative interference. Olson and Bruner (1972) seemed to support this contention: Information picked up from [direct] experience is limited in important ways to the purpose for which it was acquired-- unless special means are arranged to free it from its context (p. 172). Special Problems in Observational Learning The previous discussion pointed out advantages of learning by observation. Zimmerman (1975) emphasized in his review that several * characteristics of natural environments inhibit observational learning. One inhibiting factor is making clear the stimuli to which the model or demonstrator is reacting. An additional special problem for observers arises when a model uses covert mediational Operations, as in rule learning. It is imperative that students understand the criti- cal alternatives in the subject matter; but a skilled performance by a demonstrator can obscure this requirement. tr Ch Vi af IIE‘ cii Cat oth- Dor- 34 Modeling as an instructional technique is successful to the extent that it creates an awareness both of the critical alter- natives and how to choose between them. To this extent a good demonstration is different from a skilled performance (Olson & Bruner, 1972, p. 148). .But knowledge of subject-matter alternatives is insufficient for successful instruction: Good instruction through modeling depends on a sensitivity of the instructor to the alternatives likely to be entertained by the student (Olson & Bruner, 1972, p. 138). This dependency on where the student is, produces problems that are not unique to observational learning. Markel (1974), in her review of concept learning, stated that utilization of current instructional theory is insufficient to insure student learning.. What is needed is experimentation with representative learners. This instructional principle was probably most clearly illus- trated in an analysis of the successful programs produced by the Children's Television Workshop (Cooney, 1970). Here a team of beha- vioral scientists carries on extensive applied research before and after television production (Reeves & Palmer, 1970). 
A potentially more efficient means of exposing critical alter- natives is for subjects to observe the learning of similar subjects. Noting the effectiveness of this approach, Olson and Bruner (1972) cited Herbert and Hash's pioneer study, "Observational Learning by Cat" (1944). In this study two groups of cats learned to Open doors by observing other cats. One group saw an errorless performance, the other an error-filled and a correct performance; both observer groups performed better than a control group. However, the group that 35 observed the error-filled performance learned more readily than the one that saw only the error-free performance. Recent research on instructional methods supported this con- tention that observing another learner can itself be a highly effec- tive learning experience. Maatsch et al. (1975b) contrasted six instructional methods-~lecture, reading, seminar, programmed instruc- tion, modeling, and Simulation--on a short (20-30 minute) but complex learning task. The methods consistently rank ordered themselves on immediate comprehension and 30-day retention. The two superior methods were simulation and modeling. In the simulation method, students were presented with concepts and rules in an order necessary to solve a problem. Comprehension of each element was assessed by requiring the student to respond orally. Next the student was given feedback on the accuracy of his response. Then the student actually perfOrmed a subset of the task, to which he again received corrective feedback. The modeling group comprised observers who were encouraged to attempt to do mentally what was being required of the simulation subject. On subsequent testing the modeling and simulation groups performed essentially the same. To maximize learning by observation or even to make it work at all, instructional designers will probably need to attend to a number of variables. Baron and Meyer (1974) speculated about impor- tant variables hi"electronic media" that could apply in any observation experience: Skills, knowledge and attitudes can be taught more effec- tively and efficiently if presented by attractive, successful models. . . . Learning through observation can be facilitated AlthOL raisec ing mt tude e intera cooper subsec jects and Se treatn greate for a genera tEIevi ter, i GVaIIa POlnt eIfect IIIel ke 36 if identification is allowed to work in concert with initia- tion (p. 177). Although these authors did not offer empirical verification, they raised potentially important research and design questions concern- ing model-observer interaction. Koran and Snow (1971), in their study entitled "Teacher Apti- tude and Observational Learning of a Teaching Skill," showed that interactions between these two variables can occur. These studies compared “video-mediated" and written modeling of a teaching skill on subsequent micro-teaching and written performance of the skill. Sub- jects entering scores on Hidden Figures, Maze Tracing, Film Memory, and Sentence Reproduction interacted significantly with modeling treatment. That is to say, "video-mediated" modeling produced greater gains for subjects who were low on these variables. Observational Learning Through Television Television is an important subset of observational learning for a number of reasons. First, much of the literature has been generated because of a concern for the effects on children of watching television. 
Second, as pointed out in the first section of this chap- ter, instructional television is one of the media educators have available for increasing productivity. It would be useful at this Point to review variables in television that relate to instructional effectiveness. Supporting research cited above (see Chalmer & Rosenbaum, 1974), Mielke (1972) asserted that these variables are numerous: Se BC ti fr 5% Si ta ab' V51 of sti' Oils 37 As control over the receiver's environment decreases sensi- tivity to interest and motivational inducements as are necessary in mass communication must increase. Sole concentration on single efforts such as learning can become dysfunctional as multiple effects interact with learning in the more unrestricted environments (p. 7). Salomon (1972) also Spoke of the added complexity of media use. In his article entitled "What Is Learned and How It Is Taught: The Inter- action Between Media, Message, Task and Learner," the author argued that each medium carries a unique message in addition to its content. Olson and Bruner (1972) maintained that as learning objectives move from almost exclusive concern with information to acquisitiOn of complex skills, methods and media as well as other variables will become more important. One of the capabilities of television is to depict motion. Significantly, the value of motion is not limited to psychomotor tasks. Spangenberg (1973), in his review entitled "The Motion Vari- able in Procedural Learning," expanded on the value of the motion variable. Motion has been found to be valuable when content is serially ordered; that is to say, one thing follOws another. Serial ordering is important in instruction not only because some proce- dures can be analyzed in that way but because, to a large extent, instruction itself is serially ordered (Zimmerman, 1975). Second, media with motion may be easier to design for purposes of learning. Spangenberg (1973) cited the fact that film compared to still pictures required fewer revisions to enable students to become oriented to a process. TE Of con to lea- MOI R635 ires I‘VE; I(1‘5 tr 38 Finally, as noted by Koran and Snow (1971), subjects can differ on their needs and abilities to profit from motion. Televised instruction could eliminate the need to screen for these subjects. In conclusion, research on observational learning theory in general and social modeling research in particular has indicated that vicarious processes are especially effective in providing new experiences and reSponse for learners, as well as in facilitating transfer. On the basis of this research, it would appear that a class- within-a-class teaching method offers a way of utilizing these research findings to improve both the effectiveness and the efficiency of instruction. The potential for achieving these objectives may lie in televising small, instructionally effective classes such as simu- lations. Summary This chapter reviewed the literature related to two general topics. The first concerned the issue of productivity and how this concept relates to instruction. The review indicated education seems to have a simpler conception of productivity than its common economic meaning. An implication of this is that increasing instructional productivity will be difficult until the concept is better understood. Research also indicates that Operationalizing productivity in education presents special problems. 
Finally, the review concluded that capital investments such as television can be more productive than direct instruction above certain numbers of students.

The second part of the chapter reviewed observational learning literature. Surprisingly little empirical research has been done in this area, considering that much instruction is observational. Research on cognitive outcomes of observation is only a few years old. Therefore, most of the implications suggested for instruction are speculative. Nevertheless, observation can produce powerful effects when it is designed to emphasize critical alternatives for students in learning tasks. Finally, Maatsch et al.'s (1975b) observation of a simulation as a method of instruction was identified as a method having direct practical benefits.

CHAPTER III

DESIGN OF THE STUDY

Introduction

This chapter begins with a task analysis of the learning content. Second, instruments are described, including: (a) the measures of student affect toward instruction, (b) the achievement test assessing learning of task content, and (c) the self-report and standardized measures of aptitude. Next is a description of the student sample, followed by the design, including treatments, experimental model, procedures, experimental facilities, and television production. This is followed by a listing of the research hypotheses; finally, the statistical analyses used for the tests of the hypotheses are identified.

Instructional Task Analysis

In the following section the instruments used in this study are described, including the formats for the cognitive achievement test. Also, the learning content--Magic Squares--for that instrument is described here. Thiagarajan (1971) analyzed Magic Squares as involving two types of learning--concepts and rules--as defined by Gagne (1965). The elements of the task making up these variables are listed below:

I. Concepts

A. The Defining Elements. A Magic Square is a square with rows and columns of numbers in which:
1. the numbers in rows, columns, and diagonals
2. produce an identical sum,
3. and no number can be used more than once in any one Magic Square.

B. Number Series
1. Must be positive.
2. Must ascend.
3. Must maintain a constant interval between adjacent numbers.
4. Can start with any positive number.

C. Geometric Figure
1. A square with an equal number of
2. odd rows and columns.

II. Rules

Rules for assigning numbers to a square:

1. Name of Rule: First Number
When is it used: When the square is empty.
How is it applied: Place the first number in the top row, middle column.
EXAMPLE: [small Magic Square illustrating this rule]

2. Name of Rule: Top to Bottom
When is it used: When the last known number is in the top row (exception: right corner).
How is it applied: Place the next number in the bottom row, one column to the right of the last number.
EXAMPLE: [small Magic Square illustrating this rule]

3. Name of Rule: Right to Left
When is it used: When the last number is in the right-most column (exception: upper cell).
How is it applied: The next number is placed up one row in the left-most column.
EXAMPLE: [small Magic Square illustrating this rule]

4. Name of Rule: Exception to the Diagonal
When is it used: When the last number has a cell one row above and one column to the right, but the cell is already filled with a number.
How is it applied: Place the next number directly below the last number.
EXAMPLE: [small Magic Square illustrating this rule]

5. Name of Rule: Diagonal
When is it used: When the last number has an empty cell one row above and one column to the right.
How is it applied: Place the next number in the empty cell one row up and one column to the right.
EXAMPLE: [small Magic Square illustrating this rule]

6. Name of Rule: Upper Right-Hand Corner
When is it used: When the last number is in the upper right corner.
How is it applied: Place the next number directly below the last number.
EXAMPLE: [small Magic Square illustrating this rule]

Measures

Three general kinds of measures were used in this study: (a) an assessment of student affect toward instruction, (b) an adapted cognitive achievement test, and (c) researcher-developed and selected standardized measures of aptitude for the task. A description of these instruments follows.

Affective Measures

Six items of student affect toward instructional methods were measured on a five-point semantic differential type scale: (a) pleasant-unpleasant, (b) clear-unclear, (c) easy-difficult, (d) exciting-boring, (e) efficient-inefficient, and (f) the degree to which the student would prefer the instructional method (never-all the time). These scales were developed and used by Maatsch et al. (1975b). Dependent variables on student affect were formed on the basis of factor analysis. The instrument is found in Appendix A.

Cognitive Measures

Two types of cognitive measures were used in this study--achievement and aptitude relevant to the learning task. The achievement test was one originally developed by Maatsch et al. (1975b) and further modified for the present study. All the learning task elements (see above) were tested in four common paper-and-pencil formats: recognition, recall, application, and problem solving. These formats are described below; representative pages of the instrument can be seen in Appendix B.

The recognition batteries were multiple choice, with four alternatives. The concept elements were tested by verbal statements, the rules by graphic examples. In the latter case four Magic Squares, each with only enough numbers to illustrate one rule, were the stimulus
The knowledge items were short answer and multiple choice. Unfortunately, these last two batteries were reversed in production of the test. As a consequence, the multiple choice may have cued students in answering the short answer, since COV for SOS CIEVI abi‘ Duz; batt esti self indi ment; U18 6 45 both batteries tested the same content. Therefore, any inference on performance on the short-answer battery is limited to recognition. Cronbach alpha calculations of reliability for this instru- ment were .91 for rules and .84 for concepts. Aptitude Measures Two types of aptitude measures were used for two purposes in this study. The first purpose was to identify predictors of performance that could be used statistically to reduce error variance (analysis Of covariance). The second purpose was to identify predictors of per- formance to serve as independent variables in hypotheses (see Hypothe- ses V through VIII). The first type of aptitude assessment was three researcher- developed scales for student self-assessment of ability: (a) math ability, (b) math interest, and (c) time spent on paper-and-pencil puzzles (Appendix C). The second type of measure used was available college entrance batteries and indexes. These scales could provide a more objective estimate of task-relevant ability and verify the accuracy of student self-reports. Unfortunately, entry scores were not available for all individuals in the sample. Design Described in this section of the chapter are the three experi- mental treatments, the model in which the conditions were contrasted, the experimental facilities including the televised treatment procedure, be ef‘ tic Ins knc st'. Vid Nex bel Heri Verb. IASII AASHe was t 46 and, finally, the procedures under which the experiment was admin- istered. Treatments As indicated in Chapter I, the general concept tested could be called a "class within a class," wherein the internal class is the effective method and the external class serves to increase instruc- tional efficiency. The method chosen for the internal class was the Instructional Simulation (IS). This method is designed to maximize known effective psychology learning variables in a highly controlled student environment. The study used 15 in the following way: First the instructional task was broken down into its indi- vidual elements (see above--Analysis of the Instructional Task). Next the order of presentation was determined on the basis of what was believed to facilitate recall. For example, the assignment rules were presented in the order in which they are normally used to make a Magic Square. This is in contrast to other rational approaches, such as teaching first the rules that are used most frequently (see Thiagarajan). During instruction the task was presented to the student one element at a time. The elements were actual figures and numbers, displayed on an overhead transparency. CorreCt and incorrect examples of the rule were displayed concurrently. The student was asked to verbalize what specific concept or rule was being demonstrated. The instructor indicated whether the response was correct, how to make the answer more complete, or what the answer should have been. The student was then asked to apply this task element to a problem and was again 47 given corrective feedback as needed. This completed the structured cycle of teaching each task element. However, the student was encour- aged from the very beginning to interrupt and ask questions at any time. A typical example of the above scenario follows. 1. 
Instructor displays illustration of task element. 5 5 5 3 3 3 3 l 6 5 5 5 3 3 3 3 5 5 5 5 3 3 3 (Instructor has indicated that only figure on the right is a Magic Square.) 2. He then asks student to formulate a reason that would eliminate the figures on the left from being called Magic Squares. 3. Student responds by saying that if a figure contains only one number it cannot be a Magic Square. 4. Instructor responds with, "Not only is what you say correct but a complete statement of the concept is that no number in a Magic Square can be used more than once." 5. Instructor displays a problem. 7 1 5 165 2 12 8 8 8 3 4 6 6 1C114- 8 8 8 3 8 2 8 TE! 4 8 8 8 6. Instructor asks student to point out the figures that could pp§_be Magic Squares. 7. Student points to the first and last figures. 81 The instructor says "Correct" and begins a new cycle on the next task element. R1 Pro pro no; 3106 four 48 Effects investigated in this study concerned observation of this IS session. Two groups observed the 15 at the time it was con- ducted. One group sat in the same classroom and saw the IS by direct observation (0015), while the other group (TVIS) sat in a remote class- room and observed the IS on television. Subjects in both observer groups were instructed to learn as much as they could without asking questions, discussing the task among themselves, or taking notes. Over a period of two weeks, 12 IS with observer groups were run. The sample sizes for each replication are indicated in Table 3.1. Table 3.1 Sample Size in Individual Replication Observation Experimental Replications GIOUP l 2 3 4 5 6 7 8 9 10 ll 12 Male 2 3 2 1 1 l 1 1 00 Female 3 l 4 l l 1 1 2 1 Male 2 4 1 l 2 1 l l 2 l RTV Female 2 1 4 l l 1 l l 1 The investigation originally proposed only eight replications, prodUcing a total observation p_of 64. However, in recruiting (see procedures below) subjects, the original blocks of nine students were not obtained. Additionally, some students who signed up did not appear for the experiment. Therefore, the experiment was replicated fOur additional times, resulting in a total observation p_of 57. ass stu Dif sys ass an out for our: tra ind IS COU for TEST ted "1911‘ dent Star 49 For each treatment, students were blocked on sex and randomly assigned to observation groups. To limit confounding effects, the students chosen to participate in the IS were limited to Caucasians. Different races, however, were not identified (as by a numbering system) so as to avoid possible student reactivity. Therefore, random assignment to the IS was not possible without the chance of obtaining a non-Caucasian. The subjects for the IS group were selected with two objectives. One was to balance the 15's on sex. The other was, as much as possible, to balance the observation groups on sex. There- fore, the IS students were selected from sex groups that were odd in number and would not split evenly into two groups. Figure 3.1 illus- trates the extent to which these objectives were met. Here the 12 individual treatments have been pooled on the basis of the sex of the IS student to form a 2 x 2 x 2 design. As can readily be seen, the balance is better on the experimental contrast (type of observation) than on the quasi-experimental contrast (sex of the IS student). Population Description, Sample SETéction, and’Sample Assignment Subjects in this experiment came from beginning psychology courses at Michigan State University. 
Although students volunteered for this specific experiment, they were required to participate in research for the course in which they were enrolled. Students were recruited a week to 10 days before the time of each of the 12 treat- ments. One and a half hours were allotted for each treatment. Stu- dents then scheduled themselves by signing a sheet, which was a standard psychology form used for human research. Additionally, a hal‘ ava‘ inS‘ the spat witi remi woul and Dire. Obse (DI 19181 Obsei (11 expor' Then a ESL S( IDstru 50 half-page general description of the purpose of the research was available to students, who were not told at that time what specific instructional model they would be in. The only clue to this was that the sign—up sheets for the experiment contained 10 available name spaces, while a concurrent experiment on lecture used a sign-up sheet with 25 spaces. The sign-up system provided the student with a reminder card that contained the address at which the experiment would take place and spaces for the student to enter the time, day, and the name of the experimenter. Model Model Male Female Sub- Male Female Male Female Totals Totals Direct Male 3 9 12 Observation 27 (0015) Female 8 7 15 Televised Male 4 12 16 Observation 30 (TVIS) Female 8 6 l4 Subtotal 7 16 21 13 Grand Totals 23 34 Total 57 Figure 3.1. Distribution of subjects for each design factor. When students arrived they were first told the purpose of the experiment and the name of the experiment groups and learning task. Then a one-page sheet containing the self-report aptitude and inter- est scales was administered. While students were completing this instrument, they were Split by sex into two groups by numbered cards. Twc the the M for odc fat '1 Val Pb L Fic obs 51 Two series of l, 2, l, 2 . . . cards were used, one for males and the other for females. By distributing the order cards to individuals, the available number of subjects of each sex was split. This produced two groups that were closely balanced on sex. If the number was odd for one sex and even for the other, an individual was picked from the odd numbered group to be an IS subject. The groups were then assigned randomly to treatments. Experimental Facilities Two rooms were used for the three treatment groups. Figure 3.2 is a floor plan of the larger room, in which the IS and 00 groups met. Figure 3.3 shows the floor plan of the smaller room used by the TVIS observation group. AK’T””T___—_::::=-. TWV [ l [ IMONITORS O O STUDENT OBSERVERS Figure 3.2. Floor plan for televised observation (TVIS) group. TeIe the Dtoj: the 5 SDect Who 52 I_L PROJECTION SCREEN STUDENT INSTRUCTOR OVERHEAD PROJECTOR <:) STUDENT OBSERVERS O % if} ..i:....1 Figure 3.3. Floor plan for direct observation (0015) group. Television Production Figure 3.3 shows the placement of two television cameras in the large experimental classroom. The robot camera was fixed on the projection screen, while the floor camera framed the instructor and the student of the IS. Both of these shots were taken from the per- spective of the students in the 0015 group. The objective was to communicate with as much fidelity as practical the information the di re th re th: The eac ria We. The see Date I Ker test 53 DOIS group was receiving. To meet this objective, the assistant director of Instructional Television at Michigan State University was recruited to consult on the design of the television production for the TVIS group. 
It was decided that the students in the DOIS group had two relevant visual perspectives of the IS. They were either watching the material on the screen or the interpersonal interaction in the IS. The problem that became apparent was how to decide which of these views to show at any one time. The solution was to show both simul- taneously on two television sets and let the TVIS student, like the DOIS student, select for himself. Consequently, two complete closed- circuit systems were used. The TVIS student, therefore, observed the IS by means of two 25-inch monitors sitting side by side four feet off the floor. In the IS the instructor and student sat obliquely toward each other with the table in front of them. All instructional mate- rials were displayed to this student on the overhead transparency projector. Also, the student worked the problems on the projector. The visual material was thus presented so that all students could see it. Additionally, the lettering and aspect ratio of the trans- parencies was produced according to television legibility standards (Kemp, 1968). Finally, the audio level for the TVIS group was adjusted by a technician prior to each treatment. Hypotheses The design and procedures described above were intended to test the following research hypotheses: II: III: IV: VI: VII: VIII: IX: 54 Observers' cognitive performance, as measured by concepts, will be significantly superior in direct as compared to televised observation. Observers' cognitive performance, as measured by a rules score, will be significantly superior in direct as com- pared to televised observation. Observers' cognitive performance, as measured by a con- cepts score, will be Significantly better when the simu- lation participants and their respective observers are of the same sex as compared to when they are of Opposite sexes. Observers' cognitive performance, as measured by a rules score, will be significantly better when the simulation participants and their respective observers are of the same sex as compared to when they are of Opposite sexes. Observers' cognitive performance, as measured by a concepts score, will significantly and negatively_corre- late with a self-reported aptitude of students being observed in a simulation. Observers' cognitive performance, as measured by a rules score, will significantly and negatively_correlate with a self-reported aptitude of students being observed in a simulation. Observers' cognitive performance, as measured by a con- cepts score, will significantly and negatively_correlate with a standardized scholastic aptitude score of the students being observed in a simulation. Observers' cognitive performance, as measured by a rules score, will significantly and pegatively_correlate with a standardized scholastic aptitude score of the students being observed in a simulation. Observers' satisfaction with instructional method, as measured by a pleasant-exciting score, will be signifi- cantly superior in direct as compared to televised observation. Observers' satisfaction with instructional method, as measured by a clear-easy score, will be significantly superior in direct as compared to televised observation. th in: Th‘ 55 XI: Observers' satisfaction with instructional method, as measured by a pleasant-exciting score, will be signifi- cantly better when the simulation participants and their respective observers are of the same sex as compared to when they are of Opposite sexes. 
XII: Observers' satisfaction with instructional method, as measured by a clear-easy score, will be significantly~e better when the simulation participants and their respec- tive observers are of the same sex as compared to when they are of opposite sexes. The reader should note the relationship between the six general hypotheses enumerated in Chapter I and the 12 listed here. Notice that the concepts of cognitive performance and satisfaction with instructional method have been Operationalized with two measures. This results in a doubling in number of the original general hypotheses. Data Analysis Data relevant to the hypotheses of this study were analyzed by three techniques--analysis of variance (ANOVA), analysis of covariance (ANCOVA), and partial Pearson product-moment correlation. Individual student performance data are considered the unit of analysis. Test administration and experimental procedures were designed to eliminate interactions among students. The researcher is, therefore, willing to assume independence of observations at the individual student level. The first and last four hypotheses of the study address the question of group differences; for only two groups the apprOpriate analysis is the t_test (Glass & Stanley, 1970). However, ANOVA for two groups is equivalent to the t_test and is, therefore, also apprOp- riate. Two assumptions underlying ANOVA are homogeneity and normality of the error variance. These assumptions should not present a major pFO TBS H96 dep ARC: COO' Ilnl are DOS! for fror effi ANCE beCa Part 8395 othe tion para: data, 56 problem. Reasons for major deviations were not evident to the researcher and ANOVA is robust to violations of these assumptions (Kirk, 1968). ANOVA was used for Hypotheses IX through XII where the dependent measure dealt with student satisfaction with instruction. However, the dependent measure in Hypotheses I through IV, performance on a math- type task, had an obvious potential nonfactor predictor--math aptitude. ANCOVA was used here because it tested for treatment effects while controlling for metric nonfactor predictors of performance. Control- ling for covariate effects is useful when treatment samples differ or are biased on the variable (Cochran, 1957). But this study dealt with possible bias by randomly assigning subjects to treatment. The reason for using ANCOVA in this study was to gain precision. By eliminating from the error variance the effects of covariates, true treatment effects would be easier to detect. An additional assumption with ANCOVA is no covariate-by-factor interactions, which was tested and is reported in Appendix D. Correlational analysis was used for Hypotheses V through VIII because the independent variables were not an experimental factor. Partial Pearson product-moment correlation was used for these hypoth- eses. The partialing controlled for effects of identified variables other than the independent variable. Pearson product-moment correla- tion was used because of the added precision of parametric over non- parametric statistics. Technically, Pearson assumes equal-interval data, but in reality is robust for this assumption (Nefzger & Drasgron, 195 str usi for' Sci wer tab whit and Inst The appl coll Shon cedu desc 57 1957). An important limitation of correlational analysis is that straightforward causal inferences cannot be made. Descriptive data analysis, ANOVA, and ANCOVA were computed uSing the IBM 6500 computer at Michigan State University. 
Programs for this analysis were from the Statistical Package for the Social Sciences (SPSS), version six. Additionally, partial correlations were computed by hand with data obtained from a zero-order correlation table. The formula for this computation was 412 = ’12 ' r13 r23 .3 Y 1 - r2 23 (Glass & Stanley, 1970, p. 185) Summary This chapter began with an analysis of the learning task, which was broken down into two major components identified as concepts and rules. Three types of instruments were discussed. Aptitude instruments were self-assessment and standardized scholastic measures. The cognitive test used four paper-and-pencil formats--problem solving, application, recall, and recognition. The sample was composed of college cophomores recruited from psychology courses. The design was shown to be a 2 x 2 x 2 fixed-effects, repeated-measures model. Pro- cedures, experimental facilities, and the television production were described. ESE a D kIlOi bat Toma Indi lab] f0m... CHAPTER IV FINDINGS Introduction This chapter presents the findings relevant to the 12 hypoth- eses Of this study. The order of the results is: l. The empirical character of the dependent variables Factors affecting observer cognitive performance Variables related to observer cognitive performance boom Student preference for instructional method Analysis of Cognitive Dependent Variables As described in Chapter III, this study began with three a priori-defined dependent variables labeled rules, concepts, and knowledge. Table 0.1 in Appendix D is a factor analysis of the sub- batteries making up these three variables. Since the knowledge sub- batteries loaded on a factor with an eigenvalue of less than one and accounted for less than 10% of the total variance, this variable was eliminated from further analysis. Rule subbatteries were loaded on the first factor, which accounted for 72% of the total variance. The remaining factor accounted for 18.4% of the test score variance. Individual subbatteries of concepts loaded on both factors one and two. Table 4.1 depicts the intercorrelation of the subbatteries (four test formats measuring the same content) making up rules and concepts. 58 59 Table 4.1 Bivariate Intercorrelation of the Subbatteries Making Up the Cognitive Dependent Variables Dependent Subbattery l 2 3 4 5 6 7 8 Variable 1. Problem Solving 1.00 2. Application .77 1.00 Rm” 3. Recall .64 .73 1.00 4. Recognition .68 .79 .64 1.00 5. Problem Solving .38 .23 .24 .26 1.00 6. Application .40 .36 .48 .35 .65 1.00 ““9”“ 7. Recall .41 .41 .56 .49 .57 .56 1.00 8. Recognition .23 .31 .39 .45 .27 .23 .451.00 Table 4.1 supports the factor analysis interpretation and shows that concepts, unlike rules, are either a multidimensional construct or are not well measured. First, note that rules subbatteries inter- correlate highly with themselves (.64 to .79) and lower with concepts. Second, in concepts only the problem-solving and application subbat- teries correlate higher among themselves (.65) than they do with rules. From this analysis it is not clear what performance the concept bat- tery is measuring; therefore, "concepts" is used in the remainder of the study as only a label for an unknown performance factor. To keep -this distinction clear, the label is set in quotation marks. 
Effects on'Observers' Cognitive Performance Using the two dependent variables described above, rules and "concepts," eight hypotheses were generated to test the effects of 60 four independent variables on the cognitive performance of observers of an instructional simulation. The first two independent variables were experimental factors that were analyzed by a three-way analysis of covariance. The statistics relevant to these variables are tabled in this chapter. The complete analysis including covariates is pre- sented in Tables 02 and 03 Of Appendix D. In this chapter, the decision to reject null hypotheses is based on an alpha level of .05. Later, in Chapter V, some of the findings are interpreted at an alpha level of .20. The reader will recall that in Chapter I it was argued that televised instruction was a method for improving efficiency, assuming the effectiveness of televised instruction was equivalent to direct instruction. The first two hypotheses are a test of this assumption. The first null hypothesis tested is: Null Hypothesis I: Observers' cognitive performance, as measured by a "concepts" score, will not significantly differ in direct as compared to televised observation. Research Hypothesis 1: Observers' cognitive performance, as measured by a "concepts" score, will be significantly superior in direct as compared to televised observation,. Table 4.2 presents the statistics relevant to Hypothesis I. Since the probability of the observed effect occurring by chance is .999, the null hypothesis is not rejected. 61 Table 4.2 Analysis of Covariance for Effect of Type of Observation on Observer Performance on "Concepts" 7 fl. F E Type of Observation 1 .141 .999 A. Direct 28.06 8. Television 27.06 The next hypothesis is: Null Hypothesis II: Observers' cognitive performance, as measured by a rules score, will not significantly differ in direct as compared to televised observation. Research Hypothesis II: Observers cognitive performance, as measured by a rules score, will be significantly superior in direct as compared to televised observation. Table 4.3 presents the statistics relevant to Hypothesis II. Again, since the probability of the observed effect occurring by chance is .999, the null hypothesis is not rejected. Tables 4.2 and 4.3 indicate that type of observation was not found to be a signifi- cant factor (p_= .999) for either measure of cognitive performance. In Chapter V, additional evidence is cited to assert that televised Observation is probably not a significant factor in learning from an instructional simulation. 62 Table 4.3 Analysis of Covariance for Effects of Type of Observation on Observers' Performance on Rules X' .9: F 'p_ Type of Observation 1 .090 .999 A. Direct 31.17 8. Television 30.20 Hypotheses III and IV test the importance of the sex interac- tion factor. If the outcomes of observers of a simulation are depen- dent on sex interactions between observed and observing student, this would complicate the use of the method. The next hypothesis tested is: Null Hypothesis III: Observers' cognitive performance, as measured by a“Concepts" score, will not significantly differ when simulation participants and their respective observers are of the same sex as compared to when they are Opposite sexes. Research Hypothesis III: Observers' cognitive performance, as measured by aflEoncepts" score, will be significantly better when the simulation participants and their respective Observers are of the same sex as compared to when they are of opposite SEXES . 
Table 4.4 presents the statistics relevant to Hypothesis III. Since the interaction in Table 4.4 is not significant, the null hypothesis is not rejected. Hence further analysis that would gener- ate mean values for sex patterns was not conducted. 63 Table 4.4 Analysis of Covariance for Effect of Sex Interactions on Observers' Performance on "Concepts" d_f F P. Two-way interaction of sex of simulation student 1 .305 .999 x sex of observer students The next hypothesis tested is: Null Hypothesis IV: Observers' cognitive performance, as measured by a rules score, will not significantly differ when simulation participants and their respective observers are of the same sex as compared to when they are of Opposite sexes. Research Hypothesis IV: Observers' cognitive performance, as measured by a rules score, will be significantly better when the simulation participants and their respective observers are of the same sex as compared to when they are of opposite sexes. Table 4.5 depicts the statistics relevant to Hypothesis IV. Again, Since the interaction in Table 4.5 is not significant, the null hypothesis is not rejected and further analysis to determine mean values for sex patterns was not conducted. The findings for Hypoth- eses III and IV can thus be summarized: The sex of the simulation student has not been identified as a significant factor in the cogni- tive learning Of observers of that student. The reader will recall that research cited in Chapter I suggested that the ability of the simulation student might be nega- tively correlated to the cognitive learning of observers of that student. To test this assumption, two types of assessment of student ability were obtained: three self-reported and seven standardized 64 Tab1e 4.5 Analysis of Covariance for Effects of Sex Interactions on Observers' Performance on Rules 91: F e Two-way interaction of sex of simulation student 1 1.146 .292 x sex of observer students scholastic aptitude scores. Table 04, Appendix 0, presents the cor- relations for these variables with observers' cognitive performance on rules and "concepts." The best self-reported predictor of per- formance was "math ability," correlating at .32 with concepts and .49 with rules. This variable was used as an independent variable in Hypothesis V. The selection of the best standardized score for an independent variable in Hypothesis VI was limited to high school grade-point average and Michigan State University scores, since there were fewer subjects with scores on the other measures. Of these three the Michigan State University Math score correlated highest, at .70 for "concepts" and .51 for rules. The fifth tested hypothesis is: Null #ypothesis V: Observers' cognitive performance, as measured by a concepts" score, will not significantly correlate with the self-reported math ability of the student being observed in a simulation. Research Hypothesis V: Observers' cognitive performance, as measured by a“concepts" score, will significantly and negatively correlate with the self-reported math ability of students béing observed in a simulation. 65 Table 4.6 depicts the statistics relevant to the two tested hypotheses. In respect to Hypothesis V, the correlation of -.2414 is significant at the .05 level; therefore, the null hypothesis is rejected. Since the observed difference is significant and in the direction predicted, support for the research hypothesis is inferred. 
Table 4.6 Partial Correlation Between the Self-Reported Math Aptitude of the Simulation Student and the Cognitive Performance of Observers Observers' Cognitive Performance Concepts Rules Simulation students' rp = -.2414 rp = -.0596 self-reported math _ _ ability score p T ('05 p ' >'50 = 54 n = 54 The next hypothesis tested is: Null Hypothesis VI: Observers' cognitive performance, as measured by a rules score, will not significantly correlate with the self- reported math ability of the student being observed in a simula- tion. Research Hypothesis VI: Observers' cognitive performance, as measured by a rules score, will significantly and negatively cor- relate with the self-reported math ability of students being observed in a simulation. Table 4.6 depicts the statistics relevant to Hypothesis VI. Since the probability level of the observed correlation is >.50, the null hypothesis is not rejected. 66 The next two hypotheses test the relationship between simula- tion and observer students, using a standardized aptitude score that was identified as a good predictor of student performance on this task--Michigan State University Math (MSU Math). The seventh tested hypothesis is: Null Hypothesis VII: Observers' cognitive performance, as measured by a “concepts" score, will not significantly corre- late with the MSU Math aptitude score of the student being observed in a simulation. Research Hypothesis VII: Observers' cognitive performance, as measurediby a "Concepts" score, will significantly and negatively_ correlate with the MSU Math aptitude score of students being observed in a simulation. Table 4.7 shows that the relevant correlation of -.3077 is in the predicted direction and is significant at the .05 level. There- fore, the null hypothesis is rejected. Since the observed difference is significant and in the predicted direction, support for the research hypothesis is inferred. Table 4.7 The Correlation.Between the Michigan State University Math Score of the Simulation Student and the Cognitive Performance of Observers Observers' Cognitive Performance Concepts Rules Simulation students' rp = -.3074 rp = .0007 Michigan State University = = Math aptitude score p ('05 p >'499 n = 40-43 n = 40-43 67 The next hypothesis tested is: Null Hypothesis VIII: Observers' cognitive performance, as measured by a rules score, will not significantly correlate with the Michigan State University Math aptitude score of the student being observed in a simulation. Research Hypothesis VIII: Observers' cognitive performance, as‘ measured by a rules score, will significantly and negatively cor- relate with the Michigan State University Math Aptitude score of students being observed in a simulation. As shown in Table 4.7, the relevant correlation (.0005) is not signifi- cant. Therefore, Null Hypothesis VIII is not rejected. Consistent results appear in Hypotheses IV through VIII. Both independent variables--self-reported math ability and Michigan State University Math aptitude score--are related to the observers' cogni- tive score on "concepts." Correlational data alone, of course, are insufficient to infer causation. However, the case for causation can be strengthened on logical grounds by two points. First, it is reason- able to state that the simulation students' aptitude preceded the observers' performance in time. Second, there is a logical link, if only by definition, between specific aptitudes and performances, recog- nizing the limitation that in these studies these measures are on dif- ferent individuals. 
Measures of Student Affect The final question addressed by this study is: Would students select observation by television of instructional simulations if they had a choice? The first question addressed the issue of the effec- tiveness of television observation of instructional simulations. Even if encouraging evidence were obtained on this issue, if students SC SE ex we the SE) 68 preferred direct observation, the utility of the type of television observation described in this study would be limited. Therefore, affect toward instructional method was assessed using five semantic differential scales develOped by Maatsch et al . (1975b) , on which students rated the: (a) pleasantness, (b) clarity, (c) excitement, (e) effi- ciency, (f) easiness of the method, and (9) whether they preferred repeated use of the method. Analysis of Maatsch's data indicated two identifiable fac- tors for these scales (see Table 08, Appendix 0):. The "a" and "c" scales loaded on one factor and the "b" and "f" scales loaded on a second factor. Therefore, two dependent variables were formed by a linear combination of scores on ratings of (a) pleasantness and excitement (A-1) and (b) clarity and easiness (A-2). These variables were then used as measures of affect in testing for the effects of the two experimental factors of this study--type of observation and sex interactions. The tests were formulated by the following four hypotheses. The complete analysis of variance for affects is presented in Tables 06 and 07, Appendix D. In this chapter the relevant data are presented with each hypothesis. The next tested hypothesis is: Null Hypothesis IX: Observers' satisfaction with instructional method, as measured by a pleasant-exciting score, will not sig- nificantly differ from direct to televised observation. Research Hypothesis IX: Observers' satisfaction with instruc- tional method, as measured by a pleasant-exciting score, will be significantly superior in direct as compared to televised observation. 69 Table 4.8 depicts the statistics relevant to Hypothesis IX. Since the difference is significant at the .05 level, the null hypothesis is rejected. The difference is also in the predicted direction; therefore, support for the research hypothesis is inferred. It appears that if students have a choice they prefer direct obser- vation over the type of television observation described in this study. The implications of this finding are discussed in Chapter V. Further tests of the effects of observation by television follow. Table 4.8 Analysis of Variance for the Influence of Type of Observation on Student Affect as Measured by A-l 7" 91°. F E Type of Observation 1 4.054 .047 1. Direct 2.24 2. Televised 2.64 aLower score indicates higher preference. The next hypothesis tested is: Null Hypothesis X: Observers' satisfaction with instructional method, as measured by a clear-easy score, will not significantly differ from direct to televised observation. Research Hypothesis X: Observers' satisfaction with instructional method, as measured by a clear-easy score, will be significantly superior in direct as compared to televised observation. 70 Table 4.9 indicates that the probability pf the observed dif- ference occurring by chance is .999. Therefore, the null hypothesis is not rejected. The reader should note that the observed difference would not be significant at any alpha level. Although this is insuf- ficient evidence to assert the null hypothesis, one might have some confidence that there is no difference on this measure. 
If this is the case, it could indicate a reasonable amount of fidelity for the television production, since A-2 encompasses a student rating of clarity.

Table 4.9
Analysis of Variance for the Influence of Type of Observation on Observers' Affect as Measured by A-2

                          Mean(a)      df        F         p
Type of Observation                     1        .016      .999
  1. Direct                2.29
  2. Televised             2.27

(a) Lower score indicates higher preference.

Sex interactions were identified as a possible variable in this method. The next two hypotheses test this factor on the identified measures of student affect. The next tested hypothesis is:

Null Hypothesis XI: Observers' satisfaction with instructional method, as measured by a pleasant-exciting score, will not significantly differ when simulation participants and their respective observers are of the same sex as compared to when they are of opposite sexes.

Research Hypothesis XI: Observers' satisfaction with instructional method, as measured by a pleasant-exciting score, will be significantly better when the simulation participants and their respective observers are of the same sex as compared to when they are of opposite sexes.

Since the interaction in Table 4.10 was not significant, Null Hypothesis XI was not rejected; further analysis to determine the sex mean patterns was not undertaken. The final hypothesis also tests the influence of the sex interaction factor on observers' affect.

Table 4.10
Analysis of Variance for the Influence of Sex Interaction as Measured by A-1

                                                        df        F         p
Two-way interaction: sex of simulation student
x sex of observer student                                1       .358      .999

The next tested hypothesis is:

Null Hypothesis XII: Observers' satisfaction with instructional method, as measured by a clear-easy score, will not significantly differ when simulation participants and their respective observers are of the same sex as compared to when they are of opposite sexes.

Research Hypothesis XII: Observers' satisfaction with instructional method, as measured by a clear-easy score, will be significantly better when the simulation participants and their respective observers are of the same sex as compared to when they are of opposite sexes.

Table 4.11 indicates that the finding is not significant at the selected alpha level; therefore Null Hypothesis XII is not rejected, and further analysis to determine sex mean patterns was not undertaken.

Table 4.11
Analysis of Variance for the Influence of Sex Interactions on Observers' Affect as Measured by A-2

                                                        df        F         p
Two-way interaction: sex of simulation student
x sex of observer student                                1       .006      .999

Summary of Findings

The test of the 12 hypotheses of this study can be summarized in six points.

1. Type of observation--televised versus direct--was not found to be a significant factor in measures of observer learning.

2. Sex interaction between a student in an instructional simulation and observers was not found to be a significant factor in measures of observer learning.

3. Observer learning on "concepts" was significantly and negatively related to the simulation students' self-reported math ability and the Michigan State University Math aptitude score.

4. Assessing the relationship between the observing and the simulation student using a cognitive rules score produced no significant differences.

5. No significant relationship on student affect was found when using a clear-easy score or a sex-interaction factor.

6. Observers were significantly more satisfied with direct as compared to televised observation, using a pleasant-exciting score.
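The affect analysis summarized above rests on two composite scores and analyses of variance. The sketch below illustrates that computation in miniature: it forms pleasant-exciting (A-1) and clear-easy (A-2) composites from semantic differential ratings and tests the type-of-observation contrast. The ratings, group sizes, equal weighting of the two items, and the reduction of the study's three-way design to a single factor are illustrative assumptions, not the study's data or exact procedure.

```python
import numpy as np
from scipy import stats

# Hypothetical ratings on a low-is-favorable scale
# (the chapter notes that lower scores indicate higher preference).
group    = np.array(["direct"] * 5 + ["televised"] * 5)
pleasant = np.array([2, 1, 3, 2, 2,   3, 2, 4, 3, 2], dtype=float)
exciting = np.array([2, 2, 3, 1, 2,   3, 3, 4, 2, 3], dtype=float)
clear    = np.array([2, 3, 2, 2, 1,   2, 3, 2, 1, 2], dtype=float)
easy     = np.array([3, 2, 2, 3, 2,   2, 2, 3, 2, 3], dtype=float)

# Composite affect variables: an equally weighted linear combination of the two
# items loading on each factor (the exact weights used in the study are not given here).
A1 = (pleasant + exciting) / 2.0   # pleasant-exciting
A2 = (clear + easy) / 2.0          # clear-easy

for name, score in [("A-1 (pleasant-exciting)", A1), ("A-2 (clear-easy)", A2)]:
    direct    = score[group == "direct"]
    televised = score[group == "televised"]
    F, p = stats.f_oneway(direct, televised)   # one-way ANOVA on type of observation
    print(f"{name}: direct mean = {direct.mean():.2f}, "
          f"televised mean = {televised.mean():.2f}, F = {F:.3f}, p = {p:.3f}")
```

With only two groups this F-test is equivalent to a two-sample t-test; the dissertation's actual analysis is a three-way analysis of variance, so this sketch isolates only the type-of-observation contrast.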
CHAPTER V

SUMMARY, CONCLUSIONS/DISCUSSION, AND IMPLICATIONS

Introduction

This chapter contains general summaries of the problem and purpose of this study, the relevant literature, the study design, and major findings. Based upon this summary, conclusions, discussion, and a number of implications are drawn for further research and educational practice.

The Problem

Instructional methods research, despite its 50-year history, has had disappointing results. The general conclusion from this literature is that methods fail to produce consistent differences as measured by student achievement (Dubin & Taveggia, 1968). Nevertheless, recent controlled programmatic research (Maatsch et al., 1975b) demonstrated that methods consistently rank order themselves for one learning task. As constituted in these studies, the superior method--instructional simulation (IS)--also appeared to be most inefficient: one instructor taught one student.

The purpose of this study was to test the generalizability of an instructional simulation--what was its effectiveness on observer learners as a function of (a) an efficient instructional medium--television, and (b) different students participating in the simulation. This line of research was undertaken with the anticipation that television observation of an instructional simulation (TVIS) might eventually prove to be a highly productive instructional technique. If this study indicated that televised observation was as effective as direct observation, further costing research could be undertaken. TVIS could prove to be highly productive if subsequent research (a) indicated the costs of TVIS were similar to the cost of televising other methods and (b) TVIS replicated the relative superiority demonstrated by instructional simulations (Maatsch et al., 1975b).

Literature from social learning (Bandura, 1969; Zimmerman, 1972) and learning theory (Bruner, 1960; Wood et al., 1975) was cited as supporting the effectiveness of observational learning. It was noted that instructional simulations have the potential of overcoming common problems in learning by observation in natural environments.

The Literature

Two general bodies of literature were reviewed in the second chapter of the study. The first concerned the problems in achieving instructional productivity. The review indicated that in education the term productivity is often used differently than in economics. It is assumed that until productivity is better understood it will be difficult to develop educational solutions. In addition, some problems were identified in operationalizing productivity in service industries such as education. Significantly, the review cited evidence that capital investments like television can increase the productivity of methods such as lecture.

The second part of Chapter II reviewed what is currently known about observational learning. Surprisingly little empirical research has been done in this area, considering that much school learning is probably achieved by observation. Research on cognitive outcomes was found to be only a few years old. The central problem identified in learning by observation is a lack of explicitness in the operations that are going on in the mind of a model.
Unless these operations are clear to the observer of a model, the observer's learning is severely handicapped. The value of an instructional simulation for observers is seen to be that many mental operations of both the teacher and a learning model are explicit in the method.

Design

The design employed to make these tests was a three-factorial, fixed-effects, repeated-measures model. Students were randomly assigned to type of observation--remote television or direct. The remaining two factors were blocking variables: sex of the observers and sex of the simulation student being observed. The simulation student was selected on the basis of sex. Twelve replications produced a total sample size of 12 simulation students and 57 observers. The TV-mediated observation group contained 30 students and the direct observation group contained 27 students.

A limitation of this study was that the actual size of the observation groups in each replication ranged from one to five. The assumption is made that the learning effects of television observation are invariant with respect to audience size. An additional limitation of the study was that cost analysis was not performed. The argument for increased instructional effectiveness of instructional simulations (Maatsch et al., 1975b) is based on existing research.

The simulation treatment observed by both groups was structured as follows. An experimenter presented positive and negative examples of a concept or rule. Next, the simulation student was asked to make a correct verbal interpretation of the stimulus material. The student then was given specific feedback to correct misconceptions. Following this he practiced his knowledge on two problems and was given corrective feedback as needed. The student could question or discuss the learning task at any time. The stimulus material and written student responses were projected onto a screen for viewing by the observation groups. Both of the observation groups were instructed not to interact with the experimenter or other students. The groups differed in that one (direct observation) met in the same room as the simulation, while the other (television-mediated observation) viewed the simulation on television in a remote room. The seating patterns were the same for both groups, with the television group viewing two television monitors. By employing two monitors the television group was able to view either the material on the projection screen or the experimenter-student interaction.

Findings

Twelve null hypotheses were tested by the following procedure: Hypotheses I through IV used three-way analysis of covariance, Hypotheses V through VIII used partial Pearson product-moment correlation, and Hypotheses IX through XII used three-way analysis of variance. Below is a list of the 12 alternative research hypotheses:

I: Observers' cognitive performance, as measured by a "concepts" score, will be significantly superior in direct as compared to televised observation.

II: Observers' cognitive performance, as measured by a rules score, will be significantly superior in direct as compared to televised observation.

III: Observers' cognitive performance, as measured by a "concepts" score, will be significantly better when the simulation participants and their respective observers are of the same sex as compared to when they are of opposite sexes.
IV: Observers' cognitive performance, as measured by a rules score, will be significantly better when the simulation participants and their respective observers are of the same sex as compared to when they are of opposite sexes.

V: Observers' cognitive performance, as measured by a "concepts" score, will significantly and negatively correlate with a self-reported aptitude of students being observed in a simulation.

VI: Observers' cognitive performance, as measured by a rules score, will significantly and negatively correlate with a self-reported aptitude of students being observed in a simulation.

VII: Observers' cognitive performance, as measured by a "concepts" score, will significantly and negatively correlate with a standardized scholastic aptitude score of the students being observed in a simulation.

VIII: Observers' cognitive performance, as measured by a rules score, will significantly and negatively correlate with a standardized scholastic aptitude score of the students being observed in a simulation.

IX: Observers' satisfaction with instructional method, as measured by a pleasant-exciting score, will be significantly superior in direct as compared to televised observation.

X: Observers' satisfaction with instructional method, as measured by a clear-easy score, will be significantly superior in direct as compared to televised observation.

XI: Observers' satisfaction with instructional method, as measured by a pleasant-exciting score, will be significantly better when the simulation participants and their respective observers are of the same sex as compared to when they are of opposite sexes.

XII: Observers' satisfaction with instructional method, as measured by a clear-easy score, will be significantly better when the simulation participants and their respective observers are of the same sex as compared to when they are of opposite sexes.

The tests of hypotheses produced two types of findings: those concerned with experimental factors and those concerned with aptitude relationships. Table 5.1 summarizes the findings on cognitive measures. The table shows that significant (alpha .05) relationships were found only for the aptitude and concept variables. Both self-reported math ability and Michigan State University Math aptitude score correlated significantly and in the predicted negative direction with the cognitive measure of "concepts." In other words, the lower the aptitude of the participating student, the better the performance of observers. No significant differences were produced on the other cognitive measure, rules. Also, the experimental factors--type of observation (direct or television-mediated) and sex interaction between simulation and observing student--produced no significant differences on either measure of observer cognitive performance.

Table 5.2 summarizes the four hypotheses concerning student attitudes toward instructional method. Here the only independent variables were the experimental factors of type of observation and sex relation of observer to participant. For type of observation, students were found to significantly prefer direct over television-mediated observation on a pleasant-exciting score. Significant differences were not found using the sex-interaction factor or the clear-easy measure.
Table 5.1
Summary of Findings on Two Measures of Observer Cognitive Performance

                                                      Observer's Cognitive Performance on:
Independent Variables                                 Concepts              Rules
I. Experimental factors:
   A. Type of observation                             Hypothesis 1          Hypothesis 2
   B. Interaction of sex of simulation student
      with sex of observer                            Hypothesis 3          Hypothesis 4
II. Simulation student's aptitude scores on:
   A. Self-reported math ability                      Hypothesis 5(a)       Hypothesis 6
   B. MSU Math                                        Hypothesis 7(a)       Hypothesis 8

(a) Null hypothesis rejected.

Table 5.2
Summary of Findings on Two Factors of Observer Satisfaction With Instructional Method

                                                      Observer's Satisfaction with Instructional Method on Factors of:
Independent Variables                                 Pleasant-Exciting                Clear-Easy
I. Type of observation                                Hypothesis 9(a), p = .047        Hypothesis 10, p = .999
                                                      (Live > TV)
II. Interaction of sex of simulation student
    with sex of observer                              Hypothesis 11, p = .999          Hypothesis 12, p = .999

(a) Null hypothesis rejected.

Conclusions and Discussion

This section attempts to relate the findings to the purpose of this study. The reader will recall that the first intent of this research was to assess the effectiveness of television observation compared to direct observation of an instructional simulation. This contrast failed to produce significant differences on the cognitive measures used in this study. Methodologically, a conclusion of no significant difference due to treatment is difficult to prove and certainly requires more than the results of one study (Popper, 1959). However, similar trends have been found in the effects of televising other methods of instruction (Chu and Schramm, 1968; Davis, 1967). Therefore, this study can be interpreted as supporting Maddox's (1970) contention that the effect of television instruction probably is not a major variable in the learning of information.

This conclusion should be tempered by a number of considerations. First, the dependent variables in this study are only a sample of all the possible types of cognitive learning outcomes. The results might be different with other variables. However, there is evidence (pleasant-exciting scale) that students prefer direct instruction. Significantly, this finding has been supported by other TV studies (Davis, 1967). The cumulative effects of a preference for direct instruction in a course could dramatically affect summative cognitive learning over the period of a term or a semester.

Another problem in generalizing from this study is the possibility of subject reactivity. From the design of this study, students probably knew that the performances of the two groups--TV versus direct observation--were being compared. This realization could have inflated the performance of either group. The Hawthorne effect (inflated performance of the TV group) and the John Henry effect (inflated direct observation performance) were both possible. If one of these effects was operating, the finding of no significant difference could mask a true difference. The implications for further research are discussed below.

The second purpose of this study was to look at the importance of different IS students on the performance of respective observers. Sex interactions between IS students and observers were not found on either cognitive or instructional preference measures in this study. Replications of this finding could provide confidence that in fact this variable is not important. If this is not an important variable, it could be because of the relative maturity of the subjects and the intellectual nature of the task. For example, one might expect sex interactions to be important for younger subjects acquiring new social attitudes.
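The discussion above interprets several non-significant results and warns that a true difference could be masked. As a rough, illustrative check of that concern, the sketch below computes the power of a two-group F-test at the study's sample sizes (27 direct, 30 televised) for an assumed effect size. The effect-size value and the reduction to a one-factor design are assumptions made for illustration, not figures from the study.

```python
from scipy import stats

def anova_power(effect_size_f, group_sizes, alpha=0.05):
    """Approximate power of a one-way ANOVA.

    effect_size_f: Cohen's f (for two groups, f = d / 2).
    group_sizes:   list of group sample sizes.
    """
    k = len(group_sizes)                  # number of groups
    N = sum(group_sizes)                  # total sample size
    dfn, dfd = k - 1, N - k               # numerator / denominator degrees of freedom
    nc = effect_size_f ** 2 * N           # noncentrality parameter
    f_crit = stats.f.ppf(1 - alpha, dfn, dfd)
    return stats.ncf.sf(f_crit, dfn, dfd, nc)

# Sample sizes from the study; the "medium" effect size (Cohen's f = .25) is assumed.
print(f"power at alpha .05: {anova_power(0.25, [27, 30], alpha=0.05):.2f}")
print(f"power at alpha .20: {anova_power(0.25, [27, 30], alpha=0.20):.2f}")  # a more liberal alpha
```

Low power at these sample sizes would mean a non-significant result says little by itself, which is the thrust of the cautions above; adopting a more liberal alpha reduces the Type II risk at the cost of a larger Type I risk.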
The findings can be interpreted as offering some support to the contention that slower participating students facilitate the cognitive learning of observers. Recall that one of the two cognitive measures (concepts) displayed this relationship. This isolated finding is consistent with some theories of learning (Berlinger & Gage, 1976) and instructional design beliefs (Palmer, 1970). A slow simulation participant or model allows an observer to respond first. The participant's response can then function as feedback to the observer. Unfortunately, the elegance of this line of reasoning is shaken by the finding on the second cognitive measure--rules. Here the relationship between the simulation participant's aptitude and the observer's performance on the rules battery was not significant (p = .999), and the correlation (.0007) failed even to indicate a trend of support. This fact, combined with the somewhat unknown nature of "concepts," precluded definitive conclusions about the relationship of a participant's ability to an observer's cognitive performance.

Implications

Implications for Research

A major issue underlying this study was identified as the productivity of instructional methods. This study suggested that television observation is a way to increase the efficiency of instructional simulations. Therefore, it follows that televised simulations should next be contrasted with other televised methods. This could further test the generalizability of Maatsch et al.'s (1975b) findings concerning the superiority of instructional simulations. Additionally, if costs per student are found to be roughly the same, it would indicate that televised instructional simulations could be highly productive. This conclusion would be based on equal cost but superior effectiveness of the televised simulation compared to the televised lecture.

As indicated above, the possible effect of the simulation students on the learning of observers is still an open question. Extensive research on this relationship may be required, because it may be curvilinear. That is to say, if "slow" models are found to facilitate observer learning, it is probable that extremely "slow" models will not. A finding such as this would require some precision in the selection of maximally effective simulation participants.

A number of limitations of this study could be addressed by a simple replication. The Hawthorne effect could be controlled for by recruiting additional groups to view the television tapes produced in this experiment. It would not be obvious to these new groups that their performance was being compared to a direct observation group. If the performance of these new television groups was significantly poorer than that of the old ones, it could be inferred that student reactivity (Hawthorne effect) had occurred in the original study.
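To make the proposed replication check concrete, the following is a minimal sketch of the comparison it implies: the cognitive scores of a newly recruited television-only group are tested against those of the original television observers. All scores here are hypothetical, and a one-tailed Welch t-test is only one reasonable way to frame the "significantly poorer" criterion.

```python
from scipy import stats

# Hypothetical "concepts" scores for the original televised observers and for a newly
# recruited group that views the same tapes without knowing a comparison is being made.
original_tv_scores = [28, 31, 25, 27, 30, 26, 29, 24, 28, 27]
new_tv_scores      = [24, 26, 22, 27, 23, 25, 21, 26, 24, 23]

# Welch's t-test (unequal variances); the question is directional:
# did the new group score lower than the original televised group?
t_stat, p_two_sided = stats.ttest_ind(new_tv_scores, original_tv_scores, equal_var=False)
p_one_sided = p_two_sided / 2 if t_stat < 0 else 1 - p_two_sided / 2

print(f"t = {t_stat:.2f}, one-sided p = {p_one_sided:.3f}")
if p_one_sided < 0.05:
    print("New group scored significantly lower: consistent with a Hawthorne effect "
          "in the original televised group.")
else:
    print("No significant drop: this comparison gives no evidence of reactivity.")
```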
This find- ing, which is supported in the literature (Kerlinger, 1973), points out the absolute necessity for assessing student aptitudes in both instructional research and evaluation. Random assignment of students to treatments, although desirable in controlling bias, is insufficient in studies intended to assess instructional outcomes. When powerful variables such as aptitudes are not accounted for, they will at best show up as error variance and thus make true treatment difference difficult to find. Another consideration is that assessment is not always easy. Instruments are not always available or easily developed, and testing consumes time and precious resources. This study sug- gests an approach that is substantially different than trying to infer student ability from a test. A simpler and equally effective method is to ask the student for a self-assessment. It appears that by the time students reach higher education, they have had years of experi- ence in determining their own strengths and weaknesses. The evidence suggests that when students are given specific instruction on how to assess their ability, assessment with a simple item can be quite powerful. 86 Implications for Edhcational'Practice Because of the exploratory nature of this study, further research is required to improve the utility of TVIS. The following discussion speculates on what that utility could be. Based on this study and the VIM research program (Maatsch et al., 1975b), TVIS appears to hold promise of increasing instruc- tional productivity. Through increased productivity education generally could at least partially meet the problem of decreased financial support. Some educational resources, however, are not only costly but they may be generally unavailable at any price. An example is faculty for the professions (e.g. medical educators). TVIS is a technique in which the efforts of insufficient numbers of trained faculty might accommodate greater numbers of students. To the extent that the experience and knowledge of these professionals are not available in other media (e.g. print), TVIS would seem to be all the more valuable. Advantages of stable instructional media have been enumerated (Rothkoff, 1976; Kagan, 1973). TV instruction with tape is a stable medium--it can be rerun, producing exactly the same nominal stimuli as the original presentation. This feature presents direct quality- control implications. By a selective process poor tapes can be eliminated and superior ones retained. Even original productions would be expected to improve over time. As demonstrated in the literature on micro teaching (Allen, 1970), videotapes offer faculty a means of more objectively reviewing their skills. As a consequence, opportunities for faculty development with TVIS are available. 87 TV has characteristics of value to instruction and IS could compensate for the general weakness of TV. For example, IS can create student enthusiasm for instruction (Rosenfeld, 1975). TV generally does not fare well on this variable. Therefore televising an IS rather than a lecture might be a way of improving student affect toward TV instruction. Finally, this study suggests that there may be a function in instruction, specifically in demonstrations, that is not well recognized in education--that is, the value of specific performance errors coupled with corrective feedback. It appears that.an instructor and a naive student serve relatively unique roles in the learning of observers. 
The instructor can serve to insure technical correctness of a performance, while the naive student can identify, by his mis- takes, the critical psychological alternatives for students with similar backgrounds. In summary, the present study has raised more questions than it has answered. However, the consistency of the findings with earlier studies indicates that this is an area of profitable con- tinued research in the search for more effective and productive instruc- tion. APPENDICES 88 APPENDIX A INSTRUMENT FOR ASSESSING STUDENT AFFECT TOWARD INSTRUCTIONAL METHOD 89 APPENDIX A INSTRUMENT FOR ASSESSING STUDENT AFFECT TOWARD INSTRUCTIONAL METHOD Variables in Instructional Methods Test for Magic Squares Name Telephone Number Age Major Sex 1: Before this instructional session, I had: (check one) II. III. 1. constructed a Magic Square and knew the rules. 2. constructed a Magic Square but forgot how. 3. been shown how, but have never constructed one. 4. seen one, but didn't know how to construct one. 5. never seen anything like a Magic Square For me, the instructional session was: (check the place on the scale that best reflects your feeling) pleasant 1 ,1 1 1 unpleasant l 2 '3 W 6 clear 1_1 1 1 1 confusing 1 Q B m 5 easy 1 111 1 1 difficult l 2 B Q S exciting 1 1 1 1 boring 1 Q B Tin B efficient 1 1 1 1 inefficient l 2 *B u 6 I would like this type of instructional method: all the 1 1 1 1 never time 1 Q ‘B n 6 again 90 APPENDIX B INSTRUMENT FOR ASSESSING STUDENT COGNITIVE PERFORMANCE 91 1. APPENDIX B INSTRUMENT FOR ASSESSING STUDENT COGNITIVE PERFORMANCE Try to construct a magic square. First select the correct number series from the alternatives listed below. Secondly, choose the correct empty magic square from the alternatives below. Finally, using_the correct number series, fill out the empty magic square that you’have selected. If’you have forgotten’how to place any number, guess andficircle your guess. Then continue filling out the magic square the best you can. ' Choose the Correct Number Series A. 1, 2, 4, 7 ... D. 100, 99, 98, 97 ... B. 2, 4, 4, 5, 6, 6, 7 ... E. -1, -2, -3, -4 ... C. 3, 5, 7, 9 ... F. -2, -1, O, 1, 2 ... Choose the Correct Empty Magic Square 92 93 2. Draw a Magic Square without numbers that has between 20-30 cells. 3. Generate three completely different number series that could be used in Magic Squares. 4. A Magic Square that has between 70 and 100 cells must have number of columns and number of rows. 94 In questions 4 through 16 you will find a square and some numbers. Try to place the number appearing to the right of the square in its proper cell to form a Magic Square. (Assume that a l, 2, 3, 4 ... series is being used.) 20 22 23 95 17. Place the name of the number assignment rule in the first blank, indicate if this applies to the first or next number in the second blank, and describe where the next number is placed. If you can't recall the name of the rule describe where the rule is applied and how the next number is placed. The rule involves placing first/next number (circ1e onET' (describe wheré)_ The rule involves placing first/next number (circle one)’ (describe Where) The rule involves placing first/next number (circ1e oné) (describe where) The rule involves placing first/next number (circle one)’ (describe where) The rule involves placing first/next number (circle one)* (describe where) The rule involves placing first/next number (circle one) (describe where) 96 18. 
List the rules that determine whether an empty square (no numbers) could be used to form a complete Magic Square. 19. List the rules used to generate a number series that could be used in a complete Magic Square. 20. List the rules that are used to determine if a filled-in square is a Magic Square. 97 21. In questions 24 through 35 try to select the square that correctly places the largest number in each box. (Assume a l, 2, 3, 4, ... number series has been used.) Circle the letter for the figure you have chosen. A. c. D. 8 7 9 6 A. B c. 22. 8 5 45 a s 7 a] 8 7 7 l 8 98 For questions 33 thru 38 circle the correct number. 33. Which number series should be used 36. In a magic square: in a magic square? 1. There are an odd number 1. ll, l3, l4, 16, 17 ... of rows and columns. 2. -5, -3, -1, 1, 3 ... 2. There can be duplicate 3. 20, 19, 18, 17 ... numbers. 4. 2, 5, 8, ll ... 3. Both 1 & 2. 5. 'All of the above. 4. Neither 1 nor 2. 6. None of the above. 34. In a magic square: 37. In a magic square: 1. The number of rows, columns 1. Number assignment may begin and diagonals are equal. with any positive number. 2. The sum of the diagonals are 2. Any number may be duplicated. equal. 3. Both 1 & 2. 3. Both 1 & 2 4. Neither 1 nor 2. 4. Neither 1 nor 2. 35. In a magic square: 38. In a magic square: 1. The rows have the same sum. 1. The number of rows equal the 2. The columns have the same sum. number of columns which equal 3. Both 1 & 2 the number of diagonals. 4. Neither 1 nor 2. 2. The number of rows is equal to the number of columns. 3. Both 1 & 2 4. Neither 1 nor 2. For questions 39 thru 44 select the alternative that cannot be used because it violates at least twg_rules for a Magic Square number series. . Place the number of your answer in the blank. 39. Answer 42. Answer 1. 2, l, 0, -1... 1 -1, 0, l, 2... 2. 36, 34, 32, 30... 2. 15, l7, 19, 21... 3. 15, 17, 19, 21... 3. 45, 39, 34, 28... 4. 21, 27, 33, 40... 4 33, 35, 37, 39... 40. Answer 43. Answer 1. 54, 48, 44, 4D... 1 -15, -l7, -21, -27... 2. 16, 24, 32, 40... 2. -15, -13, -ll, -9... 3. 33, 30, 26, 23... 3. 21, 24, 27, 30... 4. 66, 69, 72, 75... 4 5, 8, 11, 14... 41, Answer 44. ______Answer 1 -21, -19, -17, 2. 6, 9, 13, 16... 3. 3D, 36, 42, 48... 4 '43, '36: ’30s '24°°° 8, 11, 13, 16... '15..v l. 2. -3o, -27 -24 -21... 3. 54, 57, $9, 6i... 4. 6, 9, 12, 15... 99 For questions 45 through 52 use the following figure. a, b, and 9_ represent row, column, and diagonal sums, respectively. Additionally, X_is the sum of the row sums. Place the number of your answer in the blank. \ a l I I I i X 45. The sum of the third row in the 49. equals the sum of magic square is equal to . the fifth column. 1. a + 2 l X a c 2. a + 3 2 b + 4 3. a + 2 (a positive interval) 3 b + 4 (the interval) 4. a 4 b 46. If all the sums of the rows are 50. equals 5, themselves summed to a number (x) the number of row equals 1 X + b 2 a l. X_ 3 b + a 2. a 4 X 4 a 3. a e X 4. X a a 51. equals b_ 47. The sum of one of the diagonals is equal to 1. c 4 2 l. X 4 2 2. a 2. 2a 3. X a a 3. a 4. a + c 4. a a 2 48. equals the number of columns. 52 equals a, . 1. a l X e b 2. c 2 C 4 2 3. X a a 3 b 4. a + b 4 b 4 c 1130 In questions 53 thru 58 pick an alternative that could be used to answer the question correctly. Place the letter of your answer in the blank. 53. is a row sum. 56. can ngt_be used in a cell. a) 13 c) 15 a) 1 c) -l b) 3 d) 2 b) 4 d) 5 54. is a diagonal sum 57. could be the number of diagonals. a) 13 c) 10 b) 4 d) 15 a) 1 c) unlimited b) 9 d) 2 55. could be the number 58. 
could be a column sum. of rows. a) 10 c) 14 a) 4 c; 10 b) 12 d) 15 b) 1 d 15 For questions 59 thru 67 circle the correct answer. 59 - '60- 61 - 62 - Magic squares were said to have been discovered l. by King Yu of China 2. by Euramel Muchopolus 3. on a rock from the Yellow River 4. on the shell of a turtle 5. both 1 and 4 6. both 2 and 3 Which statement is true of the very first magic square? each row contained 15 dots when summed each column contained 15 dots when summed any two symmetrical squares contained the same number of dots all of the above none of the above mwa—J o o o o 0 Magic squares have been used for which of the following? 1. to explain the structure Of polyhedrons . in the initial development of catalytic convertor 3. in the formulation of the Pythagorean theorem 4. to support the structural use of guidewires in the construction of towers 2 What century might be termed a "hot-bed" of activity in the development Of magic squares in France? 1. the twelfth century 2. the fifteenth century 3. the seventeenth century 4. the nineteenth century 63 64 65 66 67 101 Which of the following magic squares has been most useful in understanding' structural vectors and stress factors? 1. associate squares 2. diabolic squares 3. treble squares 4. composite squares In China the pattern of the dots of the first magic squares were to be 1. called Lo-shu 2. thought of as mystically significant 3. sewn on shirt pockets 4. both 1 and 2 5. both 1 and 3 6. both 2 and 3 Yokohama used fifth order magic squares to prove the truth of the ages ., to explain the complexities of pyramids to explain loop patterns in silk looms to explain the necessity of keystones in arches awmd Magic squares were introduced into Western culture 1. at about the same time as in Eastern culture 2. centuries after their discovery in Eastern cultures 3. both 1 and 2 4. both 1 and 3 5. both 2 and 3 Albrecht Durer is credited with l the construction of the first ninth order magic square 2 constructing a magic square with the date 1514 in the bottom two cells, in the year 1514 3. the discovery of composite border squares 4 first introducing magic squares into Western culture In questions 68 thru 75, fill in the blanks to make the statement true. 68 - In 1514, constructed a magic square with the date 1514 in the bottom two cclls, in the year 1514. 69 - Euramel Muchopolus introduced magic squares into culture during the 1400's. 70 - Yokohoma used fifth order magic squares to explain the intricate loop patterns necessary to the development of APPENDIX C INSTRUMENT FOR ASSESSING STUDENT SELF-REPORTED APTITUDE 102 APPENDIX C INSTRUMENT FOR ASSESSING STUDENT SELF-REPORTED APTITUDE Name: I. Compared to other courses you have taken, rate: a. Your ability in mathematics and geometry courses. Poor Superior 1 2 3 4 5 l. J. L J, 1 b. How you like mathematics and geometry subjects. Least of all . Most of all 1 2 3 4 5 l J J” 1 I 1 1 d— 11. How Often do you work paper-and-pencil puzzles just for recreation? Never Every day _L_—l 103 APPENDIX D STATISTICAL ANALYSIS 104 APPENDIX D STATISTICAL ANALYSIS Table D1 Factor Analysis of Cognitive Performance Variables Factor 1 Factor 2 Factor 3 .71788 -.34221 -.02627 .76484 -.55326 .05343 .74362 -.31052 -.06683 .76869 -.34938 .06816 .57486 .38831 -.39898 .53547. .21887 -.19881 .75130 .22898 -.18492 .49510 .06275 .02765 .52993 .37485 -.07391 .59977 .33075 .44376 .36540 .30683 .44391 More than 5 iterations required. 
Variable Communality S62 .63315 $63 .89392 $64 Ru'es .65693 $65 .71452 566 .63889 Eigen- % of Cum. S67 .49838 FaCt°r value Var. Pct. 558 c°"cepts '55°07 1 4.84992 72.0 72.0 569 .24983 2 1.23838 18.4 90.4 $5 .53153 3 .64884 9.6 100.0 560 Knowled e .66604 $61 9 .61888 105 106 Table D2 Analysis of Covariance for Dependent Variable "Concepts" Sources of Variation if df F S'gngi'ffinces Covariates 1 47.185 .001* a. Self-reported ability 1 1.063 .307 b. MSU Math 1 47.185 .001* Main effects 3 .966 .999 I. Observation l .141 .999 a. Live 29.35 b. Televised 26.32 II. Observer's sex 1 1.782 .186 a. Male 27.32 b. Female 28.17 III. Simulation student's sex 1 .186 .999 a. Male 28.46 b. Female 27.31 IV. Two-way interactions 3 .310 .999 a. I x II 1 .189 .999 b. I x III 1 .214 .999 c. II x III 1 .305 .999 V. Three-way interactions I x II x III 1 .215 .999 RESIDUALS 44 TOTAL 53 *Significant at alpha p_= .05. 107 Table 03 Analysis of Covariance for Dependent Variable "Rules" Sources of Variation 7' df F Significances Covariates 2 12.311 .001* a. Self-reported ability 1 8.329 .007* b. Grade point average 1 7.763 .009* Main effects 3 1.945 .140 1. Observation 1 .090 .999 a. Live 31.23 b. Televised 31.00 11. Observer's sex 1 4.267 .044* a. Male 33.80 b. Female 27.97 III. Simulation student's sex 1 .037 .999 a. Male 28.68 b. Female 32.81 IV. Two-way interactions 3 .551 .999 a. I x II 1 .015 .999 b. I x III 1 .490 .999 c. II x III 1 1.146 .292 V. Three-way interactions I x II x III 1 .332 .999 RESIDUALS 44 TOTAL 53 *Significant at alpha p = .05. 108 Table D4 Bivariate Correlations Between Observer Self-Reported Aptitude and Cognitive Performance Self- Dependent Reported . Knowledge Concepts Rules Aptitudes .Var1ables I. Math ability .2657 .3185 .4941*' 55 55 55 .023 .008 .001 II. Math interest .0944 .1458 .2642" 55 55 55 .242 .140 .024 III. Time spent on paper- .0645 .2403 .1643 pencil puzzles 55 55 55 .311 .035 .111 *Cells with values significant at p = .05 or better. Cell Values = (a Eb) degrees of freedom c ) one-tailed significances level ) correlation coefficient and its direction 109 Table 05 Zero-Order Correlations Between Observers' Standardized Scholastic Aptitude Scores and Cognitive Performance Standardized Scholastic Aptitude Measures Knowledge Concepts Rules I. MSU Reading .3855 .3926 * .3942 47 47 47 .003 .003 .003 * II. MSU Math .3789 .7002 .5108 52 52 52 .002 .001 .001 * III. GPA .4944 .5409 .4965 41 41 41 .001 .001 .001 * IV. SAT Verbal .4495 .4327 .4680 35 35 35 .003 .005 .004 * V. SAT Math .5632 .6950 .6477 35 35 35 .001 .001 .001 * VI. ACT English .3054 .5901 .0400 18 18 18 .109 .005 .437 ;‘ VII. ACT Math .0704 .4978 .5691 18 18 18 .391 .018 .006 *Cells with values significant at p = .05 or better. Cell Values = (a) correlation coefficient and its direction (b) degrees of freedom (c) one-tailed significances level 110 Table 06 Analysis of Variance for Affect Dependent Variable Pleasant-Exciting (A-l) Sources of Variation 73 df F Signgfigances Main effects 3 1.452 .238 1. Observation 1 4.045 .047* a. Live 2.24 b. Televised 2.64 II. Observer's sex 1 .425 .999 a. Male 2.38 b. Female 2.52 III. Simulation student's sex 1 .208 .999 a. Male 2.39 b. Female 2.49 IV. Two-way interactions 3 3.92 .999 a. I x II 1 .054 .999 b. I x III 1 .516 .999 c. II x III 1 .358 .999 V. Three-way interactions I x II x III 1 1.575 .213 RESIDUALS 49 TOTAL 56 aLower score indicates higher preference *Significant at alpha p = .05. 
111 Table Analysis of Variance for Affect Dependent Variable C1ear-Easy(A-2) D7 Sources of Variation X3 df F Signgfigances Main effects 3 .152 .999 I. Observation 1 .016 .999 a. Live 2.29 b. Televised 2.27 II. Observer's sex 1 .022 .999 a. Male 2.26 b. Female 2.30 III. Simulation student's sex 1 .434 .999 a. Male 2.19 b. Female 2.34 IV. Two-way interactions 3 .596 .999 a. I x II 1 .565 .999 b. I x III 1 .659 .999 c. II x III 1 .006 .999 V. Three-way interactions I x II x III 1 .007 .999 RESIDUALS 49 TOTAL 56 aLower score indicates higher preference. Factor Analysis of Affect Scales 112 Table 08 Factor 1 Factor 2 S73 pleasant .46128 -.ll370 S74 clear -.25215 .80008 575 easy -.04417 .19958 $76 exciting .40225 -.l3858 S77 efficient .17607 .04058 $78 all the time .14671 .04693 APPENDIX E DESCRIPTION OF EXPERIMENT AVAILABLE TO STUDENT AT THE TIME OF SIGN-UP 113 APPENDIX E DESCRIPTION OF EXPERIMENT AVAILABLE TO STUDENT AT THE TIME OF SIGN-UP Variables in Instructional Methods(VIM)_ This program will identify major variables affecting a variety of instructional models utilized in higher education and professional train ng. Students will be asked to take individual different test on learning preferences; to undergo a brief instructional period and then to take tests on the materials presented during the instruction. The information derived from this research program will assist educators in making instruction more interesting and effective for students. Students will be asked to participate in two one-hour sessions in E-2 East Fee Hall. They will be given a more detailed explanation of the study during the final session. Investigators: Jack L. Maatsch, Ph.D., Dennis Hoban, Ed.D., Dan Tortora, Ph.D., Tom Holmes, M.A. If questions, call Shirley Ballentine, secretary, Office of Medical Education Research and Development, 353-2037. 114 APPENDIX F PROCEDURAL DIRECTIONS GIVEN BY EXPERIMENTER-INSTRUCTOR 115 APPENDIX F PROCEDURAL DIRECTIONS GIVEN BY EXPERIMENTER-INSTRUCTOR Procedures for Instruction A. This experiment consists of two parts. During the first part, you will receive instruction on a mathematical task called Magic Squares. The second part of the experiment consists of an exam to measure how much you have learned. Before we begin, I would like you to fill out this one-page self- assessment form. While you are filling out the form, I will distribute playing cards that will be used to form two groups. Those of you who have black (red) cards come with me to another room (television-mediated group). (Instructions'UJtelevision-mediated group.) Please be seated-- you will receive the rest of your instruction over the t.v. monitor. (Instructions to both groups.) There are two student roles dur- ing this instructional period--the participant and the observer. The participant will actively participate and interact with me during instruction. The rest of you are Observers and I would like you to try to learn what I am teaching the participant. However, as observers I am asking you not to ask questions, take notes, or discuss thelearning task. At the end Of instruction, you will all take the same test. 116 II. 117 Instructions for Post-Test A. You have just been instructed on what a Magic Square is and how to construct it. You are now going to take a test that is designed to determine how effective the instructions have been in teaching you about Magic Squares. This is not a test of your mathematical ability nor of your intelligence. 
It is simply a test used to evaluate the instruc- tional method utilized. The results of this test are confiden- tial. DO not be discouraged by the difficulty of the first few ques- tions. 00 the best you can with them, and then continue on. The questions become less difficult. Please answer each question in the order given in the test. This is very important for the experiment. 1. Do not look through the test before beginning. 2. Once you have answered.all questions on a page, proceed to the next page and do not turn back to previous pages. When you have completed the test, then turn your test booklet over. The instructor will collect your booklet. Are there any questions? - ADMINISTER TEST - APPENDIX G GRAPHIC STIMULUS MATERIAL USED BY SIMULATION INSTRUCTOR 118 APPENDIX G GRAPHIC STIMULUS MATERIAL USED BY SIMULATION INSTRUCTOR FIRST NUMBER RULE NO YES SECOND NUMBER ASSIGNMENT NO YES I 1 8 2 2 THIRD NUMBER ASSIGNMENT NO YES 9 3 3 2 2 119 120 FOURTH NUMBER ASSIGNMENT NO YES 4 4 10 3 3 FIFTH NUMBER ASSIGNMENT NO YES 5 " 4 11 5 SIXTH NUMBER ASSIGNMENT NO YES 6 6 1 5 11 5 4 4 6 12 121 NINTH NUMBER ASSIGNMENT NO YES 8 8 9 9 9 TENTH NUMBER ASSIGNMENT NO .YES 10 IO 10 9 9 9 SIXTEENTH NUMBER ASSIGNMENT NO ~ YES 16 15 15 15 I6 I6 13 14 15 whN NOUI 10 l2 24 24 24 122 WORKSHEEI 2 12 IO 14 18 4 14 I2 13 15 32 32 31 “(901) “who 9069!» 14 34 12 16 34 2O 32' 4. 24 25 24 21 ,4 21 21 21 21. 21 21 36 36 36 20 33 36 36 36 33 123 3O 30 3O 3O 30 30 50 50 51 50 50 5O 50 5° 53 2O 2O 2O 41 41 41 \ 53 16 16 16 24 x 24 24 24 24 24 24 24 6O 60 60 6O 56 so 56 60 124 6- A. 2,3,4,s. .. a. 1, 2,4,7,11. . . c. 5,11,11,14 . . . o. 9,7,5,3 . .. 5. a,7,7,e,9,9... r. 17,21,25,2933,37... 4, 9, 14,19. . . 3,5,29 e e e 125 MAGIC SOUAR ES 51, NO res '6 3+1+3=15 9 2 7 9+2+1=13 a s- 7 3+5+1=15 4 . . 4+6+8=18 4 '° 2 4+1U+2 =13 5 1° 3 5+111+3=13 4 .8. 4+21+3=32 29121 29+1+21=51 . .... 5+11+14=31 . was 9+11+25=51 12 2 16 12. 2.15530 13 33 5 13.33.5=5] MAGIC SQUARES NO YES 1 IO 15 9 2 7 13 7 5 3 15 4 a 8 13 a 1 8 15 5 I0 3 13 14 13 15 13 13 13 12 2 I6 311 29 1 21 51 14 IO 8 311 9 17 25 5 18 7 311 13 33 5 51 31 311 23 51 51 51 126 , MAGIC SQUARES NO YES . - .. 25 . 2 7 13 3 Is 7 25 4 a 8 13 '4 2 2 25 s .a 2 13 45 25 25 25 25 13 18 18 18 13 26 2 12 40 29 1 21 5| 5 20 14 40 9 17 25 51 8 18 14 40 13 33 5 5| 1 40 40 4 511 51/ 51 51 51 1 NUMBER SERIES THAT CAN BE USED IN A MAGIC SOUARE NO YES A. 5, 4, 3,2,1... A. 1, 2, 3, 4, 5... B. 1, 3, 4, a, 7... B. 2, 4, a, 8, 10... Co .3, .2, .3, .4... C. 5: 97 '3! ‘77 D. '3,'1,1, 3... D. 4, 7; IO, 13, 16... 6 E. 20, 23, 26, 29... 127 MAGIC SQUARES NO YES 5 2 2 2 I 2 5 1 2 2 2 9 2 MAGIC SQUARES NO YES APPENDIX H NORKBOOK USED BY STUDENT PARTICIPATING IN A SIMULATION 128 APPENDIX H NORKBOOK USED BY STUDENT PARTICIPATING IN A SIMULATION WORKSHEEI I. 7 I 5 I6 2 12 4 4 6 6 I014 3 8 2 8 I8 4 2. 3. 114924 1481232 881024 8111332 129 (160960 “(a)“ “a“ 14 34 I2 16 34 2O 32 4. LJ_._J 24 5 24 5. 2, ,4 21 , 21 21 21 21 21 20 33 36 36 36 36 36 36 33 130 30 3O 3O 3030 30 5O 50 50 50 50 50 50 53 2O 2O 2O 41 41 41 \ 53 I6 16 16 24 / 24 24 24 24 24 24 24 6O 6O 6O 6O 58 58 58 60 131 6- A. 2,3,45. . . - B.1,2,4,7,11... c. 5,8,11,14 . . . a. 9,753 . .. s. 8,7,7,8,9,9... r. 17,21,25,29,33,37... ' 7. 4,9,14,19. . . 3,5,739 . . . BIBLIOGRAPHY 132 BIBLIOGRAPHY Abbott, Lawrence. Economics and the modern world (2nd ed.). New York: 'Harcourt, Brace & World, Inc., 1967. Allen, Dwight William, & Ryan, Kevin. Microteachigg, Reading, Mass.: Addison-Wesley Pub. Co., 1969. Anderson, Richard C. et al. (Eds.). 
Current research on instruction. Englewood Cliffs, New Jersey: Prentice-Hall, 1969. , & Faust, Gerald W. Educational psychology, New York: Dodd, Mead & Co., 1973. Anderson, Scarvin 8., Ball, Samuel, Murphy, Richard T., & associates. EncyclOpedia of educational evaluation. London: Jossey- Bass PubliShers, 1975. Armsey, James W., & Dahl, Norman C. An inquiry into the uses of instructional technology, New York: 1973. Bandura, A. Principles of behavior modification. New York: Holt, Rinehart 8 Winston, 1969. Baron, Stanely J., & Meyer, Timothy P. Initiation and identification: Two compatible approaches to social learning from electronic media. A-V Communications Review, 1974, gg_(2), 167-179. Berlinger, David C., 81 Gage, N. L. The psychology of teaching methods. In N. L. Gage (Ed.), The Psychology of Teaching Methods. Chicago: The National Society for the Study of ' Education, 1976 Bindra, Dalbin. Motivational view of learning performance, and beha- vior modification. Psychological Review, 1974, 81 (3), 199- 213. Bjerstedt, Ake. Educational technology, New York: Wiley-Interscience, 1972. Blalock, Hubert M. Theory construction. Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1969. 133 134 Briggs, Leslie J. Instruction media: A procedure for the design of multi-media instruction, a critical review of research, and suggestions for future research. Pittsburgh, Pennsylvania: American Institute for Research, 1967. Sequencinggof instruction in relation to hierarchies of competences. Pittsburgh, PennsyTvania: American Institute for Research, 1968. Bruner, Jerome S. The process of education. Cambridge, Mass.: 1 Harvard University Press, 1960. Caffarella, Edward P., Jr. The cost-effectiveness of instructional technology: A grapositicnal ifiVentory of the‘literature. Unpublished doctoraiidiSSertation, Michigan State University, 1973. The Carnegie Commission on Higher Education. The fourth revolution: Instructional technology in higher education. New York: McGraw-Hill,’1972. . The more effective use of resources. New York: McGraw- Hill Book Co., 1972. Carroll, John. The potentials and limitations of print as a medium of instruction. In David R. Olson (Ed.), Media and symbols: The forms of expression. Chicago, 111.: University of Chicago, 19741 Chalmers, Douglas K., 8 Rosenbaum, Milton E. Learning by observation versus learning by doing. Journal of Educational Psychology, 1974, 66, 216-224. Chu, G. C., & Schramm, W. Learning from television: What the research says. Washington, D.C.: National Association ofiBroadcasters, 1968. Cochran, W. G. Analysis of covariance--Its nature and uses. Biometrics, September 1957, 261-281. Cohn, Elchanan. The economics of education. Lexington, Mass.: 0. C. Heath & Co., 1972. Commission on Instructional Technology. To improve learning. Washing- ton, D.C.: U.S. Government Printing Office, 1970. Cooney, J. G. The first year of Sesame Street: A history and overview. Final Report. Vol. I} Neinork: Children‘s Television WorkShOp, 1970. Cornfield, J., & Tukey, J. W. Average values of mean squares in fac- torials. Annals of Mathematical-Statistics, 21, 907-949. 135 Cox, D. R. Planning of experiments. New York: John Wiley & Sons, Inc., 1958. Davis, Robert, Johnson, Craig, & Dietrich, John. Student attitudes, motivations shown to influence reception to televised lectures. College and University Business, May 1969. DeCecco, J. P. Class size and co-ordinated instruction. British Journal of Educational Psychology, 1964, 34, 65-74. The psychology of learning and instruction. 
Toronto: Prentice-Hall, nc., 1968. Dewey, John. Democracy and education. New York: Macmillan Co., 1916. Dubin, R., & Hedley, R. The megjummay be related to the message: College instruction by TV. .Eugene, Oregon: University of Oregon Press, 1969. Dubin, Robert, & Taveggia, Thomas C. The teachingelearniggparadox. Eugene, Oregon: UniVersity of Oregon Press, 1968. Dunkin, Michael J., & Biddle, Bruce J. The study of teaching. New York: Holt, Rinehart & Winston, Inc., 1974. Gage, Nathaniel Lees. Handbook of research on teaching. American Education Research‘Association. Chicago: Rand’McNally, 1963. Gagne, Robert M. The conditions of learning, New York: Holt, Rinehart and Winston, Inc., 1965. . Simulators. In R. Glaser (Ed.). Training research and education. AD-263 439. Washington, D.C.: Office of Naval Research, Psychological Services Division, Personnel and Training Branch, 1961. Gilbert, Thomas F. On the relevance of laboratory investigation on learning to self-instructional programming. In A. A. Lumsdaine and Robert Glaser (Eds.). Teaching machines and programmed learning: A source book. Washington, D.C.: Department of Audio-Visual Instruction, National Education Association, 1960. Glass, Gene V., & Stanley, Julian C. Statistical methods in education and psychology. Englewood Cliffs, New Jersey: Prentice-Hall, 1970. Good, Carter V. Dictionary of education. New York: McGraw-Hill, . 1973. 136 Greenblat, Cathy 5. Basic concepts and linkages. In Cathy S. Green- blat and Richard 0. Duke (Eds.). Gaming-simulation rationale, design, and application. New York: John Wiley & Sons, 1975. Greenwald, Douglas. The Mc-Graw-Hill dictionary of modern economics. New York: McGraw-Hill, 1965. Griffiths, Daniel E. Behavioral science and educational administration. Chicago: The National Society for the Study of Education, 1964. Gross, Bertram M. The managingyof organizations: The administrative struggle (2 vols.). London: Collier-Macmillan, 1964. Haggerty, Patrick E. R & D and prodgctivity in education. Paper pre- sented at the annual'meeting of the American Eaucational Research Association, Chicago, Illinois, April 18, 1974. Hansen, W. Lee (Ed.). Education income and human capital. New York: National Bureau of Economic Research, 1970. , & Weisbron, Burton A. Benefits, costs, and finance of public higher education. Chicago: Markham Pub. Co., l969. Harrison, Shelby A., and Stolurow, Lawrence M. (Eds.). Im rovin instructional productivity in higher education. Englewood Cliffs, New Jersey: Educational Technology Publications, 1975. Herbert, J. J., and Harsh, C. M. Observational learning by cat. Journal of Comparative Psychology, 1944, 31, 81-95. Hitchens, Howard B. The benefits of instructional technology. In Sidney Trickton (Ed.). To improve learning: An evaluation of instructional technology_(Vol. 1). New York: R. R. Bowker & Co., 19722 Hoban, Charles F. A current view of the future of theory and research in educational communication. Paper delivered to the Research and Theory Division, Association for Educational Communications & Technology at the AECT Convention, Las Vegas, Nevada, April 11, 1973. Jamison, Dean, Suppes, Patrick, & Wells, Stuart. The effectiveness of alternative instructional media: A survey. Review of educa- tional research, Winter 1974, 44 (1), 1-67. Jason, Hilliard. Educational uses of simulations; Attributes, assump- tions and applications. Keynote speech of Symposium on Simula- tion in Medicine. East Lansing, Mich.: Office of Merical Education Research and DevelOpment, 1973. 
137 Jones, Gardner, Johnson, Craig, & Dietrich, John. Unit costs provide basis for meaningful evaluation of efficiency of TV courses. College and University Business, May 1969. PP- 124-130- Kagan, Norman. Can technology help us toward reliability in influenc- ing human interaction? Educational Technology, February 1973, pp. 44-51. Kemp, Jerrold E. Planning andyproducing audiovisual materials (2nd ed.). San Francisco: Chandler Publishing Co.,ll968. Kerlinger, Fred N. Foundations of behavioral research. New York: Holt, Rinehart & Winston, Inc., 19642 . Review of research in education. Itasca, Illinois: F. E. Peacock, Inc., 1973. , & Pedhazur, Elazor J. Multiple regression in behavioral research.' New York: Holt, Rinehart & Wihston, 1973. Kirk, Roger. Experimental design procedures for the behavioral sciences. Belmont, Calif.: Brooks/Cole Pub. Co., 1968. Koran, Mary Lou, & Snow, Richard E. Teacher aptitude and Observational learning of a teaching skill. Journal of educationfil psyChology, 1971, 62 (3), 219-228. Krugman, H. E., & Hartley, E. L. Passive learning from television. Public Opinion Quarterly, 1970, 34 (2), 184-190. Kuhn, Thomas S. The structure of scientific revolutions. Chicago: The University of Chicago Press, 1970. Lessinger, Zeon. Engineering accountability for results in public education. Phi Delta Kappan, December 1970, pp. 217-225. Levin, Henry M. A cost-effectiveness analysis of teacher selection. In Daniel C. Rogers & Husch S. Ruchlin (Eds.). Economics and education: Principles and applications. New York: Free Press, 1971. Maatsch, Jack. A general theory of behavioral modification: Model 11. Unpublished manuscript, Michigan State University,l975. An introduction to patient games: Some fundamentals of clinicél instruction. East Lansing: Michigan State Univer- sity, 1974. ., et a1. Variables in instructional methods. Symposia presented at the Association of American Medical Colleges, Washington, D.C., November 1975. 138 MacDonald-Ross, Michael. Behavioural objectives--A critical review. Instructional Science, 1972, 2, 1-52. Maddox, H. University teaching methods: A review. University Quarterly, Spring 1970, pp. 157-165. Magnusson, David. Test theory. Reading, Massachusetts: Addison- Wesley Pub. Co., 1966. Markle, Susan M. They teach concepts, don't they. Educational Researcher, June 1975, pp. 3-9. McCarty, S. Viterbo. Differential V-O ability twenty years later. Review of Educational Research, 4§_(2), 263-282. McKeachie, W. J. The decline and fall of the laws of learning. Educational Researcher, March 1974, pp. 7-11. McLeish, John. The lecture method. Cambridge, England: Cambridge Institute of Education, 1968. Mecklenburger, James A. Recommendations regarding performance con- tracting. Educational Technology, June 1972, pp. 27-28. Miller, Gary G. Some considerations in the design and utilization Of simulations for technical training. AFHRL-TR-74-65. Colorado: TeChnical Training Division, Lowry Air Force Base, August 1974. 2 Miller, Neal E., & Dollard, John. Social learning and imitation.. New Haven, Conn.: Yale University Press, 1941. Minow, Newton. Cost of instructional technology. In Sidney Tickton (Ed.). To improve learning; An evaluation of instructional technology_(Vol. l). lNew York: R. R. Bowker Co., 1972. Nefzger, M. D., & Drasgron, J. The needless assumption of normality in Pearson's r. American Psychologist, 1957, 12, 623-625. O'Connor, Kathleen. Learning: An Introduction. ,Glenview, Illinois: Scott, Foresman & Co., 1968. O'Donoghue, Martin. 
Economic dimensions in education. New York: Aldine-Atherton, 1971. Olson, David R., & Bruner, Jerome S. Learning through experience and learning through media. In David R. Olson (Ed.), Media and symbols: The forms Of expression, communication, and educa- tion. Chicago, Illinois: University of Chicago Press, 1974. 139 Palmer, E. L. Research at the Children's Television WorkshOp. Educational Broadcasting Review, 1969, §_(5), 43-48. Platt, John R. Strong inference.. Science, 1963, 14§_(3642), 347-352. POpper, Karl. The logic of scientific discovery, New York: Harper & Row, 1959. Powell, J. P. Experimentation and teaching in higher education. Education Research, 1964, p, 179-191. Reeves, 8. F. The first year of Sesame Street: The formative research. Final Report (Vol. 2). New York: Children's Television Work- shop, December 1970. Reynolds, Paul Davidson. A primer in theory construction. New York: Bobbs-Merrill Co., 1971. Rogers, Carl R. Freedom to learn. Columbus, Ohio: Charles E. Merrill Publishing Co., 1969. Rogers, Daniel, & Jamison, Dean. Economics and educational technology in the United States. Paper presented at the annuallmeeting 6T2theTAmerican Educational Research Association, Chicago, Illinois, April 18, 1974. Rogers, Daniel C., & Ruchlin, Hirsch 5. Economics and education: Principles and applications. New York: Collier-Macmillan, 1971. Rosenfeld, Frank H. "The educational effectiveness of simulation games: A synthesis of recent findings. In Cathy S. Greenblat and Richard 0. Duke, (Eds.), Gaming-simulation: Rationale, design, and application._ New York: John Wiley & Sons, 1975. Rothkopf, Ernst L. Writing to teach and reading to learn: A perspec- tive on the psychology of written instruction. In N. L. Gage -(Ed.), The psychology of teaching methods. Chicago: The National Society for the Study of Education, 1976. Rutherford, Robert Bruce, Jr. The effects of a model videotape and feedback videotapes on the teaching styles of teachers in training. Journal of Experimental Education, 1973, 42_(1), 65-69. _ Saettler, Paul. A history of instructional technology. New York: McGraw-Hill, 1968. 140 Salomon, Gavriel. What is learned and how it is taught: The interaction between media, message, task and learners. In David R. Olson (Ed.), Media and symbols: The forms of expression, communication, ggg4education. Chicago, Illinois: University of Chicago Press, Samuelson, Paul A. Economics (8th ed.). New York: McGraw-Hill, 1970. Scanlon, Robert G. improving educational productivity through the use of technology, Paper presented at the annual meeting of the American Educational Research Association, Chicago, Illinois, April 18, 1974. Scriven, Michael. Problems and prospects for individualization. In Harriet Talmage (Ed.), Systems of individualized education. Berkeley, Calif.: McCutcheon Publishing Co., 1975. Shirts, Garry R. Notes on defining simulation. In Cathy S. Greenblat and Richard 0. Duke (Eds.), Gaming-simulation: Rationale, design and application. New York: John Wiley & Sons, 1975. Simon, Anita, & Boyer, E. Gil. Mirrors for Behavior (Vols l-l4). Philadelphia, Pa.: Research for Better Schools, Inc., 1969. Simon, Herbert A. The sciences of the artificial. Cambridge, Mass.: MIT Press, 1969. Simpson, Peter K., and Maltley, Gregory P. Teaching the large class at the undergraduate level. Eugene, Oregon: Lane Community College, 1972. (ERIC Document Reproduction Service No. ED 061 907 Sisson, Roger L. On making decisions about technology in elementary and secondgry schools. 
Paper presented at the annual meeting of the American Educational Research Association, Chicago, Illinois, April 18, 1974.

Skinner, B. F. The science of learning and the art of teaching. Harvard Educational Review, 1954, 24, 86-97.

Snider, Robert C. Instructional technology today. In Sidney G. Tickton (Ed.), To improve learning: An evaluation (Vol. 1). New York: R. R. Bowker Co., 1972.

Spangenberg, Ronald W. The motion variable in procedural learning. A-V Communications Review, 1973, 21, 419-437.

Stigum, Brent P., & Stigum, Marcia L. Economics (2nd ed.). Reading, Mass.: Addison-Wesley Publishing Co., 1972.

Suppes, Patrick. The place of theory in educational research. Educational Research, June 1974, pp. 3-9.

Thiagarajan, Sivasailam. The programing process: A practical guide. Worthington, Ohio: Charles A. Jones Publishing Co., 1971.

Thorndike, Edward L. The fundamentals of learning. New York: Teachers College, Columbia University, 1932.

Thorndike, Robert L. (Ed.). Educational measurement (2nd ed.). Washington, D.C.: American Council on Education, 1971.

Tickton, Sidney G. To improve learning: An evaluation of instructional technology (Vol. 1). New York: R. R. Bowker Co., 1972.

Tollett, Kenneth S. Higher education and the public sector. In Ben Lawrence (Ed.), Outputs of higher education: Their identification, measurement, and evaluation. Boulder, Colorado: Western Interstate Commission for Higher Education, 1970.

Travers, Robert M. W. Second handbook of research on teaching. Chicago: Rand McNally & Co., 1973.

Wilkinson, Gene L. Cost evaluation of instructional strategies. A-V Communication Review, 1973, 21 (1), 11-30.

Wood, David, et al. The role of tutoring in problem solving. Research conducted at the Center for Cognitive Studies at Harvard University. (Mimeographed)

Woodhall, Maureen, & Blaug, Mark. Productivity trends in British university education, 1938-62. In Daniel C. Rogers & Hirsch S. Rucklin (Eds.), Economics and education: Principles and applications. New York: Free Press, 1971.

Yelon, Steven (Assistant Director of Learning and Evaluation Service, Michigan State University). Personal communication. February 15, 1975.

Zimmerman, Barry J. Modification of young children's grouping strategies: The effects of modeling, verbalization, incentives, and age. Child Development, 1974, 45, 1032-1041.

Zimmerman, Barry J. Social learning research and television. Paper presented at the American Educational Research Association Convention, Washington, D.C., April 1975.

Zimmerman, Barry, & Ghozeil. Modeling as a teaching technique. Elementary School Journal, pp. 441-446.

Zuckerman, David, & Horn, Robert E. The guide to simulations/games for education and training. Lexington, Mass.: Information Resources, Inc., 1973.