ABSTRACT

A DESCRIPTIVE-ANALYTICAL STUDY OF THE BASIC PUBLIC SPEAKING COURSE AT MICHIGAN STATE UNIVERSITY

by William Bradshaw Lashbrook

This study was planned and executed to provide a description and critical analysis of the development and logistics of Speech 101 at Michigan State University. Special emphasis is given to the techniques for the evaluation of student performance employed in the course. The study contains no explicit hypothesis to be tested in some statistical manner, though statistics have been used as methodological tools of description and evaluation. Data for the study cover the entire period of Speech 101's existence.

The major objective of Speech 101 is to train students to be more proficient agents of change in public speaking situations. An examination of the developmental stages of the course shows the definite influence of a desire to handle, efficiently and effectively, a large enrollment on the means used to obtain this objective. The most dramatic representation of this influence may be found in the basic structure of Speech 101. By integrating the concept of "peer grouping" in the recitation sections with the use of televised common lectures, the course is able to process a large number of students with a minimum number of staff members. The course also makes use of modern computerized techniques of data processing for many of the tedious routines commonly associated with large enrollments.

Multiple-choice examinations are employed in Speech 101 as one of the techniques for the evaluation of student performance. Test items undergo a significant amount of scrutiny both before and after their use in a specific examination. Each question is subjected to computerized techniques of item analysis and then evaluated with respect to its difficulty, discriminating ability, and the relevance of its options. The study contains a detailed description of the processes of test item validation employed in Speech 101.

The study traces the development of and rationale for the rating scale used in Speech 101 to evaluate student oral performance. By use of the statistical technique of factor analysis, a multidimensional speech rating scale was developed for use in the course. Analysis of the results of student usage of the scale in both experimental and classroom situations shows a definite and consistent factor structure as a basis for speech evaluation.

A DESCRIPTIVE-ANALYTICAL STUDY OF THE BASIC PUBLIC SPEAKING COURSE AT MICHIGAN STATE UNIVERSITY

By William Bradshaw Lashbrook

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Speech

1965

Copyright by WILLIAM BRADSHAW LASHBROOK, 1966

ACKNOWLEDGMENTS

The writer wishes to acknowledge the assistance that he received from his guidance committee in the preparation of this manuscript. Special thanks go to Dr. David C. Ralph, who directed the study.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF APPENDICES

Chapter
  I.  INTRODUCTION
         The Purpose of the Study
         The Mode of Research for the Study
         Evaluative Techniques to be Studied
         The Significance of the Study
         The Limitations of the Study
         The Structure of the Study
 II.  THE DEVELOPMENT, OBJECTIVES AND LOGISTICS OF SPEECH 101
         The Development of the Course
         The Objectives of Speech 101
         The Logistics of Speech 101
         Summary of Chapter II
III.  THE EVALUATION OF STUDENT PERFORMANCE USING WRITTEN EXAMINATIONS
         The Objectives of the Testing Procedures
         Test Construction and Administration
         Results of the Examination Procedures
         Summary of Chapter III
 IV.  THE EVALUATION OF STUDENT PERFORMANCE USING THE SPEECH 101 RATING SCALE
         Rationale for the Use of Speech Rating Scales
         Review of Recent Literature with Respect to Speech Rating Scales
         The Development of the Speech 101 Rating Scale
         Summary of Chapter IV
  V.  SUMMARY, CONCLUSIONS AND RECOMMENDATIONS
         Summary of Material
         General Conclusions
         Recommendations

BIBLIOGRAPHY

LIST OF TABLES

Table
 1.  Speech 101 Enrollment for Three-Quarter Academic Years, 1960-61 to 1964-65
 2.  101 Enrollments by Percentage According to College, 1964-65
 3.  Item Analysis of Question
 4.  Item Analysis of Question
 5.  Item Analysis of Question
 6.  Item Analysis of Question
 7.  Item Analysis of Question
 8.  Item Analysis of Question
 9.  Item Analysis of Question
10.  Item Analysis of Question
11.  Item Analysis of Question
12.  Item Analysis of Question
13.  Student Factor Matrix for Twenty-Five Scale Items
14.  Faculty Factor Matrix for Twenty-Five Scale Items
15.  Student/Faculty Loading Agreement
16.  Student Factor Matrix for Twelve Scale Items
17.  Three-Factor Matrix, Research Day, Fall 1964
18.  Two-Factor Matrix, Research Day, Fall 1964
19.  Three-Factor Matrix for High Quality Films
20.  Two-Factor Matrix for High Quality Films
21.  Three-Factor Matrix for Middle Quality Films
22.  Three-Factor Matrix for Low Quality Films
23.  Two-Factor Matrix for Low Quality Films
24.  Three-Factor Summary of Results of Research Day, Fall 1964
25.  Two-Factor Summary of Results of Research Day, Fall 1964
26.  Percentage of Scale Variance Accounted for by Three Factors
27.  Percentage of Scale Variance Accounted for by Two Factors
28.  Median Reliability Estimates and Their Ranges per Scale Item
29.  Three-Factor Matrix, Peer Groups, Fall 1964
30.  Two-Factor Matrix, Peer Groups, Fall 1964
31.  Three-Factor Matrix, Research Day, Spring 1965
32.  Two-Factor Matrix, Research Day, Spring 1965

LIST OF APPENDICES

Appendix
A.  Speech 101 Course Syllabus
B.  Sample Class Schedule
C.  Student Instructions for Peer Group Operation
D.  Sample Speech Plan
E.  Student Information Form
F.  Speech 101 Instructor's Manual
G.  Original Speech 101 Evaluation Form
H.  Forty-Six Item Rating Scale
I.  Twenty-Five Item Rating Scale
J.  Twelve Item Rating Scale
K.  First Edition of the Speech 101 Rating Scale
L.  Second Edition of the Speech 101 Rating Scale
CHAPTER I

INTRODUCTION

The Purpose of the Study

The purpose of this study is three-fold: (1) to provide a detailed description of the development of Speech 101 at Michigan State University; (2) to examine the objectives and logistics of the evaluation of student performance in Speech 101; (3) to provide a critical analysis of the techniques of the evaluation of student performance, making use of data, statistical methodology, and results of the general research project conducted concurrently with this study.1

1 For the 1964-65 academic year, under the sponsorship of the Educational Development Program of Michigan State University, a research project aimed at expanding and improving the basic course in public speaking was conducted.

The Mode of Research for the Study

The mode of research for the study is descriptive-analytical. It is descriptive in that an attempt is made to explore and define some of the motivations for the development of Speech 101 as well as the evaluative techniques employed in the course.2 It is analytical in that the study attempts to examine critically the results of using these evaluative techniques within the context of Speech 101, as well as the procedures of interpretation and implementation of these results in terms of the objectives of the course.3 This study does not attempt to justify the techniques of evaluation nor the procedures of interpretation by direct comparison with other techniques and procedures not used in connection with Speech 101 or its associated research projects. The study contains no explicit hypothesis to be tested in some statistical manner, though statistics have been used as methodological tools of description and evaluation. In short, the study represents a bi-modal examination of specific aspects of a unique course in beginning public speaking.

2 The processes of exploration and definition to be used in the study fall under the label "descriptive" empirical research as discussed by Thomas L. Dahle and Alan H. Monroe, "The Empirical Approach," in An Introduction to Graduate Study in Speech and Theatre, ed. Clyde Dow (East Lansing: Michigan State University Press, 1961), Chapter 9, pp. 173-200.

3 The critical examination of procedures falls under the label "critical" (analytical) research as discussed by Elton S. Carter and Iline Fife, "The Critical Approach," in ibid., Chapter 5, pp. 81-103.

Evaluative Techniques to be Studied

The study describes and analyzes two evaluative techniques of student performance employed in Speech 101 at Michigan State University. The first of these techniques is that of the written examination. The study attempts to describe the processes of test construction and item analysis used in Speech 101, and to analyze the procedures of interpretation and implementation of test results in light of the general objectives of the academic course. The second technique examined is that of speech evaluation, both by students in and instructors of Speech 101. The study describes the development of the Speech 101 rating scale and provides a critical analysis of the scale's use in experimental and classroom situations.

The Significance of the Study

The significance of the study lies in three important areas. First, the study describes the development of a course in public speaking which attempts to meet the general problem of increased enrollment in institutions of higher learning. Second, the study provides a detailed description and critical analysis of a systematic attack on the problem of the evaluation of student performance within the context of a mass approach to speech education. Third, the study provides a descriptive and critical look at a set of evaluative techniques which are particularly and peculiarly adapted to computers.

The Limitations of the Study

The major limitation of the study stems from its mode of research. A descriptive-analytical study is not directed at the testing of theory which can be used to explain and/or predict the occurrence of specific events. This limitation of design can only be successfully overcome if, at the end of such a study, certain avenues of thought are open which were previously closed because of lack of information about the nature of general problems and the procedures by which those problems might be solved. This limitation seems inherent in modern education when pragmatic considerations (such things as increasing enrollments) have a greater impact on changing curricula than those of theoretical rationality. The writer would hope that this study provides a much needed bridge between theoretical and pragmatic approaches to the modern training of student speakers, in the belief that both considerations have great relevance not only to the field of speech, but to the whole issue of the betterment of society through education.

The Structure of the Study

The remaining aspects of the study are divided into four chapters. Chapter II is entitled "THE DEVELOPMENT, OBJECTIVES AND LOGISTICS OF SPEECH 101." This chapter discusses the development of the course, the initial definitions and eventual refinements in the course objectives, the rationale for and procedures of peer grouping, and the basis for student evaluations and grade determination. Chapter III is entitled "THE EVALUATION OF STUDENT PERFORMANCE USING WRITTEN EXAMINATIONS." This chapter deals with the objectives of testing procedures in Speech 101, the development and evaluation of examinations, and a critical analysis of the use of test results as measured against the objectives of the course. Chapter IV is entitled "THE EVALUATION OF STUDENT PERFORMANCE USING THE SPEECH 101 RATING SCALE." This chapter concerns itself with the development and evaluation of the Speech 101 rating scales, an examination of the use of these scales by students in and instructors of Speech 101, and a critical analysis of the results of using such scales as measured against the objectives of the course. Chapter V is entitled "SUMMARY, CONCLUSIONS AND RECOMMENDATIONS." Each chapter will be descriptive-analytical in mode and presentation.

At the outset of this study it should be noted that a large percentage of the material on which the investigation is based is the product of the efforts of persons other than this researcher. The reader will become aware that the following chapters, which deal with the development of Speech 101 and the development and analysis of evaluation techniques used in the course, are so structured as to provide both a report of the significant findings and statements which determined the direction that Speech 101 took and an interpretation of the results of certain observed changes in the format of the course. The writer makes no claim with respect to the originality of the material being reported. He accepts as his responsibility only the obligation to be as factual and objective as possible.
The interpretations of the results of course changes are primarily original with the writer. These interpretations are generally presented under the heading "results" in the chapters that follow. This method of distinguishing between the original and non-original aspects of this study, with respect to the contributions of the writer, seems justifiable in light of the descriptive-analytical mode of research for the study, and is somewhat akin to the approach taken with the historical/critical mode of research which is common to the field of speech.

CHAPTER II

THE DEVELOPMENT, OBJECTIVES AND LOGISTICS OF SPEECH 101

The Development of the Course

Speech 101 officially became part of the curriculum at Michigan State University with the beginning of the Fall academic term of 1960. Actually, the birth of the course came a few months previously, on February 24, when by action of the faculty of the Department of Speech, Speech 201, "Principles of Speaking," a five-credit course, was altered to:

Speech 101 - Public Speaking 3(4-0), Fall, Winter, Spring. Principles and practices of effective speaking in both informal and formal situations.1

Approved as a "minor change" by the University, this alteration signaled a significant reshaping of the structure and administration of the basic course in public speaking.2

1 Speech 101 Committee, "Committee Report on Speech 101: Public Speaking," Michigan State University, Department of Speech, May 18, 1960, p. 1. (Dittoed and in the files of the Department.)
2 Ibid.

The predecessor to Speech 101, Speech 201, combined the study of public speaking, group discussion, and argumentation into a single course offered for five credits under the direction of the course chairman, Dr. Jack Bain. In January of 1960 a committee composed of Drs. John Dietrich, Kenneth Hance, David Ralph, Murray Hewgill, and Fred Alexander, all members of the Department of Speech faculty, recommended that Speech 201 should be abolished and that its essential content should be divided between two new courses: Speech 101, Public Speaking; and Speech 116, Group Discussion. This committee went on to suggest that:

1. Speech 101 should be a 3-credit course.
2. Speech 101 should meet in mass lecture for one hour per week and in recitation for three hours per week, a total of four hours per week.
3. The co-ordinator and lecturer for Speech 101 should be Dr. David Ralph.
4. Speech 101 should be a departmental service course, not associated with a specific area of the department.
5. A working committee composed of David Ralph, Chairman, Fred Alexander, Murray Hewgill, and Gordon Thomas should present to the faculty of the Department of Speech, for their consideration, complete syllabi, procedures for operation, recommended textbooks, and all other material necessary for the proper functioning of the course.3

3 Speech 201 Committee, "Committee Report on Speech 201: Public Speaking," Michigan State University, Department of Speech, January 14, 1960, p. 1. (Dittoed and in the files of the Department.)

The recommendations of the Speech 201 committee were adopted in total, and the following course description appeared in the 1960-1961 Michigan State University Catalogue Issue:

101. Public Speaking (201) Fall, Winter, Spring 3(4-0)
Principles and practices of effective speaking in both informal and formal situations.4

There has been no change in this course description since that time.

4 Michigan State University, Michigan State University Catalogue Issue 1960-61 (Vol. 5, No. 13; East Lansing: Michigan State University Press, May 1960), p. 88.

Registration figures (Table 1) show a steady increase in the population of Speech 101 since its inception in the Fall of 1960. This growing enrollment has had a marked influence on both the administration and logistics of the course.

TABLE 1
SPEECH 101 ENROLLMENT FOR THREE-QUARTER ACADEMIC YEARS, 1960-61 TO 1964-65

Year       Fall   Winter   Spring   Total
1960-61     363      334      280     977
1961-62     525      367      320    1212
1962-63     562      417      323    1302
1963-64     642      513      437    1592
1964-65     742      770      530    2042

In terms of the administration of the course, Dr. David Ralph, from the outset, has been the chief administrator of Speech 101. He initially served both as lecturer for the course and co-ordinator for the various instructors. The primary responsibility of those instructors was to conduct the recitation sections in which students practiced the principles of effective public speaking as explained and amplified in the text and during the once-weekly lecture period. As enrollment grew in the course, it was necessary that this administrative organization be changed. Beginning with the Fall of 1961, Dr. Ralph became the chairman of the course and assumed full responsibility for determining its content as well as for the supervision and instruction of the Speech 101 teaching staff. It was felt at the time that such an administrative position was needed because the major burden for conducting recitation sections was falling on graduate students, some of whom had little teaching experience and most of whom had little prior knowledge of a mass approach to the teaching of public speaking.5 Dr. Ralph continued to be the course lecturer. For the Fall term of 1962, an administrative assistant was appointed by the department Chairman from among the teaching staff of the course. The name of the person for this position was recommended by Dr. Ralph. The primary motivation for the appointment of an administrative assistant was the decision to put the course lectures on video tape and the ensuing necessity to release Dr. Ralph from some of the routine work of the course in order that he could devote time to the preparation of the tapes. With the use of taped lectures came increasing administrative details. The position of the administrative assistant became a permanent one within the organization of Speech 101. Starting with the Fall of 1964, the position of research assistant was integrated into the administrative staff of Speech 101. This appointment was made necessary by the increased amount of research carried on in conjunction with the course.6

5 John Barson, "Phase I: Interview with Dr. David Ralph, Speech 101," Michigan State University, Department of Speech, May 13, 1964. (Verifaxed and in the files of the Department.)
6 The position of research assistant was supported by funds provided by the Michigan State University Educational Development Program. The person filling the position was a member of the Speech 101 staff but did not teach the course.

In terms of the logistics of the course, the previously mentioned problem of increased enrollment had a profound and dramatic effect. Early in the 1950's the University administration made the decision to do everything in its power to achieve the goal of providing a quality education to as many students as possible. It was realized that the meeting of this responsibility would involve a significant expansion of the facilities and staff of the University; but it was also felt that the University should examine its curriculum in terms of devising new methods and techniques for handling larger numbers of students while utilizing the existing facilities and staff. This latter challenge was presented in a speech addressed to the Academic Senate of Michigan State University by President John Hannah in the Fall of 1960.

At the suggestion of the then Chairman of the Department of Speech, Dr. John Dietrich, Dr. Ralph and the staff of Speech 101 began giving consideration to the problems of handling an increasing number of Speech 101 students within the context of the newly developed course. From the beginning, a syllabus was used to provide the structure for the recitation sections. This syllabus contained primarily oral performance and text assignments. For the first full year of the course the problem of scheduling activities for the recitation sections was left to the recitation instructor. The requirement of handling as many students as possible made necessary a greater uniformity between recitation sections in the day-by-day scheduling of activities. In the Summer of 1961, the initial syllabus was revised to meet the demands of the anticipated enrollment of Fall 1961, and the scheduling of recitation section activities became the responsibility of the course's administrative staff.7 It was at the time of the revision of the course syllabus that the idea of "peer grouping" the recitation sections came into focus.

7 Funds for these revisions came from a Michigan State University All-University Research Grant.

"Peer grouping" as developed at Michigan State University is primarily built around the idea of structuring the recitation portion of a public speaking class in such a manner as to allow the instructor to evaluate and critique half of each enrolled student's speeches, and for a group of the student's fellow enrollees to evaluate and critique the remaining oral performances required by the course. The revised syllabus for Speech 101 outlined in detail six oral assignments for each student enrolled in a recitation section. Under the peer group system, the recitation instructor hears and gives critiques on three of the six oral performances by each student. The remaining three oral performances are evaluated by a group of the student's peers without the presence of the recitation instructor. Under the peer group system, therefore, since the recitation instructor is required to be with the class only half the time, the size of the normal recitation section can be doubled. It was reasoned that if it could be demonstrated that peer grouping represented a valid approach to practice in public speaking within the context of Speech 101, it also could be used as a technique for handling larger numbers of students without an increase in the staff of the course.

During the first year of Speech 101, the recitation sections were designed for an enrollment of twenty-two students. The revised syllabus was geared to a maximum recitation enrollment of twenty-five students, though the idea of peer grouping was not instituted until a year later.
In addition to revising the course syllabus in the Summer of 1961, a day by day schedule for the recitation sections was developed. Both the revised syllabus and the daily scheduling plans became part of the structure of the course beginning with the Fall term of 1961. It was decided that the idea of peer grouping had merit but needed further development and experimentation before it could become an integral part of Speech 101. The first experiment with peer grouping came during the Winter term of 1962. The concentration of the initial ex- perimentation with peer grouping centered almost entirely on the logistics of handling fifty students in one recita- tion section under the direction of one instructor. Results of this experimentation were limited to the subjective judg- ments of the recitation section instructors involved. These judgments were: 1. The operation of the course is mechanically possible without undue strain on the in- structor or students. 2. The students' morale appears very high. Students apparently welcome the oppor- tunity to assume some of the responsibility for the operation of the course. 3. Attendance is as good, on a given day, in the section which is student operated as it is in the section which is instructor operated. 4. Assignments are carefully prepared and presented on schedule in the student Operated classroom. 15 5. Examination scores average as high for the ”peer grouped" sections as for the regular sections of the course. 6. The quality of student speaking seems as high as in the regular sections taught by this instructor last year. 7. The instructor and the student evaluators appear to agree in their abilities to discriminate between good, fair and poor Speakers. On the basis of these conclusions it was decided to try two additional peer group recitation sections for the Spring term of 1962.9 This time an attempt was made to compare the students in the peer group with their counter- parts in the regular 101 recitation sections. Two areas of student performance, the mid-term exami- nation and a particular oral assignment outlined in the course syllabus, were selected to reflect this comparison. Mid-term examination results showed no significant difference between students in the peer and non-peer recitation sec- tions.10 The comparisons on the oral assignments were handled in the following manner: 8David c. Ralph, "Memo to Dr. John Dietrich, Chairman, Department of Speech, Michigan State University," no date available. (Typewritten and in the files of the Depart- ment. 9Two members of the Speech 101 staff were involved with the peer group recitation sections for the Spring term of 1962: Dr; David Ralph and Mr. Jerry Anderson. 1°William B. Lashbrook, "Speech 101 Experiment: Pro- ject 1,” Michigan State University, Department of Speech, April 25, 1962. (Dittoed and in the files of the Depart- ment. 16 Two samples were randomly drawn. Sample #1 represented fourteen students from the "peer group" section of Speech 101. Sample #2 rep- resented seventeen students from a non-peer recitation section of Speech 101. Each student in the sample groups gave a speech as outlined in the Speech 101 syllabus under the heading Topic III. The speeches were evaluated by use of an interval scale by three instructors of the staff of Speech 101. These evaluations took the form of a rating of one to eleven on the general criterion of "effectiveness.” A mean rating score was computed for each student and recorded. 
Encouraged by the results of the first two terms of peer group teaching, and with no evidence to suggest that peer grouping in the Speech 101 recitation sections had a significant effect on student performances, a decision was made to convert the course to this new concept beginning with the Fall term of 1962.13 At the same time it was realized that the peer group concept needed more intense experimentation, both in terms of its pragmatic value for handling increased numbers of students and its educational value as a valid technique for use in a course in public speaking. Thus, the Fall term of 1962 represented not only the conversion of Speech 101 to peer grouping, but also the integration of conjunctive research as part of the course.

13 This decision was made by Dr. David Ralph, course chairman, and the Speech 101 Committee and was supported by the Chairman of the Department of Speech, Dr. John Dietrich.

The Fall term of 1962 brought a significant change in the lecture portion of the course. During the two previous years of Speech 101, the lectures were presented once a week to each student in a face-to-face presentation by the course lecturer.14 As mentioned previously, during the second year of the course, a decision was made to put the lectures on video tape. This project began during the Fall term of 1962. In order to become familiar with the techniques of television, the course lecturer, Dr. David Ralph, decided that for the Fall term of 1962 the first lecture on a given topic would be delivered face-to-face to approximately half of the Speech 101 students, and a second version of the same lecture would be given on live closed-circuit television to the remaining half of the students enrolled in the course. A comparison of mid-term and final test results, with particular attention given to those portions of each test coming directly from the lecture material, showed no significant difference between the test scores of students receiving the lectures on television and face-to-face.15

14 The same lecture was presented at two different times on Mondays so as to accommodate all the students enrolled in the course.
15 Robert Kinstle, "Student Attitude as a Function of the Mode of Presentation in a Lecture Segment of a Course in Basic Public Speaking," Michigan State University, Department of Speech, April 1963. (Dittoed and in the files of the Department.)

The first four lectures of Speech 101 were recorded on video tape during December of 1962 for showing during the Winter term of 1963. The remaining lectures were recorded on video tape during the first few weeks of the Winter term of 1963 and shown during the latter half of that term. An attempt was made to determine whether the use of video taped lectures in the course had a noticeable effect on either student lecture attendance or morale. No difference was found between the percentage of lecture attendance as a result of giving video taped lectures and the pre-television lecture attendance.16 Data provided from the use of an open-ended questionnaire were interpreted as showing no objection on the part of the students to receiving their lectures on television, whether live or on video tape.17

16 Ibid.
17 Ibid.

During the Spring term of 1963, an attempt was made to investigate the degree of agreement among student evaluators in the peer groups and the degree of correlation between grades assigned by recitation instructors and peer group evaluators. Using Kendall's Coefficient of Concordance on twenty-one panels of peer group evaluators, it was concluded that while the peer group evaluators tended to agree quite strongly most of the time, they occasionally were unable to agree on their discriminations of student speakers' abilities.18 The second study, of the correlation between instructors' grades and those assigned by peer group evaluators, was inconclusive.19

18 Robert Kinstle, "Analysis of Agreement Among Student-Assigned Evaluative Rankings: Project 002," Michigan State University, Department of Speech, May 1963. (Dittoed and in the files of the Department.)
19 Robert Kinstle, "Correlational Analysis of Speech Grades Assigned by Instructors and Peer Evaluators," Michigan State University, Department of Speech, June 1, 1963. (Dittoed and in the files of the Department.)
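The concordance statistic itself is straightforward to compute. The following sketch is offered only as an illustration of the calculation; the panel size, the ranks, and the helper function are hypothetical and do not reproduce Kinstle's data, and the sketch omits the tie correction that would be needed if evaluators assigned tied ranks.

```python
# A minimal sketch (assumed, not the 1963 analysis): Kendall's coefficient
# of concordance W for one panel of peer evaluators who each rank the same
# set of speakers.
import numpy as np

def kendalls_w(ranks):
    """ranks: m x n array, one row per evaluator, one column per speaker,
    each row holding the ranks 1..n assigned by that evaluator (no ties)."""
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape                           # m evaluators, n speakers
    rank_sums = ranks.sum(axis=0)                # column totals
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Five peer evaluators ranking six speakers (1 = best); values are illustrative.
panel = [
    [1, 2, 3, 4, 5, 6],
    [2, 1, 3, 5, 4, 6],
    [1, 3, 2, 4, 6, 5],
    [2, 1, 4, 3, 5, 6],
    [1, 2, 3, 5, 4, 6],
]
# W near 1 indicates strong agreement among the panel; W near 0, little agreement.
print(f"W = {kendalls_w(panel):.2f}")
```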
It should be stated that the research conducted in Speech 101 during the 1962-63 academic year suffered in both technique and rationale. The techniques employed were generally suited for small groups, but the meager control of the variables in the cited examples and the failure to replicate the experiments leave unresolved the issue of the power (the ability of the technique to recognize a false null hypothesis) of the statistics as used in the controlled situation. In the cases cited, there seems a desire to test a theoretical hypothesis of no significant difference between groups. One can legitimately question the rationality when such a hypothesis is tested by statistics which are themselves based on the assumption of no difference. In general, it can be said of the early research connected with Speech 101 that there was a noticeable willingness to commit Type II error, and that if such error were committed, it tended to reinforce the intuitive observations of instructors teaching the course, thus creating a bias in the direction of change.

Beginning with the Fall term of 1962, the textbook used in Speech 101 was shifted from Fundamentals of Public Speaking by Bryant and Wallace to Principles of Speaking by Hance, Ralph, and Wiksell. Two reasons suggested this change: (1) the need to update the examples of speeches and the terminology used in the course; (2) the need to use a text with assignments suitable to the concept of peer group recitation sections.20

20 Barson, "Interview with Dr. David C. Ralph," loc. cit.

A significant reorganization of the research of Speech 101 came with the Fall term of 1963. Dr. Craig Johnson joined the staff of the Speech Department and took over the direction of the major research related to the course. At that time a decision was made to concentrate on two areas of investigation: (1) the development of valid and discriminating items for the written examinations used in the course, and (2) the development of valid and discriminating criteria to be used in the evaluation of oral performances in the recitation sections. It was also decided that, beginning with the Fall term of 1963, a special period would be assigned for research within the schedule of activities for the recitation sections. Priority for the use of this period would be given to those projects related to the course, and if no such project was in progress, the period would be made available to other departmental projects requiring the use of large numbers of subjects. It will suffice for this section of the study, dealing with course development, merely to note the integration of the concept of research into the context of Speech 101. In later chapters related to the specific techniques of the evaluation of student performance, a detailed discussion will be made of the development of the written examinations and the speech evaluation forms used in the course.

As was previously mentioned, beginning with the Fall term of 1964, the position of research assistant was added to the administrative staff of Speech 101. At the same time Dr. Johnson moved from the Department of Speech to a position within the central administration of the University.21 The two major areas of investigation cited for the previous year remained the prime emphasis of the research conducted during the 1964-65 academic year. Despite his change of assignment, Dr. Johnson maintained a keen interest in the Speech 101 research projects and served as an advisor for the research assistant in the course.

21 Dr. Craig Johnson and the Chairman of the Department of Speech, Dr. John Dietrich, both left the Department of Speech during the 1963-64 year. Dr. Dietrich became Assistant Provost for the University in charge of the Educational Development Program, and Dr. Johnson went jointly with EDP and the Office of Institutional Research.

The Fall term of 1964 also signaled the beginning of a significant development of student evaluation within the context of the course. During the summer months of 1964, a decision was made to convert the procedures for grade determination in the course so as to be handled by the University's CDC 3600 digital computer. Two FORTRAN IV programs were written, one dealing with the mid-term grade estimates and the other with final grade determinations. Both programs became part of the course during the Fall term of 1964.22 A more detailed discussion of the procedures for grade determination, as well as the rationale for their use, will be provided in a later section of this chapter. It should also be noted that starting with the Fall term of 1964, procedures for the evaluation of items used on the written examinations also became computerized.23

22 The programs dealing with Speech 101 grade determination were authored by the research assistant for the course during the 1964-65 year.
23 The procedures were not changed from those used during the previous year, but the mechanics of doing them were programmed for the 3600 computer. This programming was done by the Office of Evaluation Services.

Summary of the Development of Speech 101. -- In a real sense, Speech 101 from its inception has represented a unique approach to the teaching of public speaking. From the beginning in the Fall of 1960, an overriding consideration for handling the problem of increased enrollments had a marked effect on the development of the course. Speech 101 represents a mass attack on the problems of speech education, and frequently the necessity to expand within the limits set on the staff and facilities has played the major part in the decisions to change the structure of the course. It is obvious that these decisions were made more intuitively than scientifically and that the evaluation of the course in terms of objectives relevant to public speaking has taken a back seat to the logistics of handling large numbers of students. It can be said of the development of Speech 101 that it was never restrained by the rigors of tradition and scientific methodology. But such freedom is not without cost, for it allows the circumlocution of the issues of educational value and reliable pedagogy. These issues have always been of concern to those directly involved with the course. When the structure of Speech 101 became formalized and incorporated organized research as a part of the course, it became possible for educational value and reliable pedagogy to be seriously considered.

The Objectives of Speech 101 24

24 The current version of Speech 101 will be taken as that of the 1964-65 academic year.

Speech 101 is viewed both by the University and the Department of Speech primarily as a service course. In general, a service course is one in which an attempt is made to develop and refine a specific skill or skills which are judged to be of value to all educated persons regardless of their specific academic disciplines. Although none of the colleges of Michigan State University actually requires Speech 101 as part of its curricula, there is some evidence to suggest that many subject areas do recognize the value of a proficiency in public speaking. The College of Education, for example, does require of its candidates for secondary teaching certificates a demonstrated ability in oral communication. Such a requirement can be satisfied in one of two ways: (1) successful completion of one of several speech courses, including Speech 101, or (2) the passing of a Speech Proficiency Test administered by the Department of Speech. Most students seem to prefer the former to the latter method of meeting this requirement, and Speech 101 is elected by more students than all of the other eligible speech courses combined. Other disciplines within the University appear, at least, to recommend Speech 101, for ninety-three per cent of the students enrolled in the course come from colleges other than the College of Communication Arts. The Department of Speech requires that its majors concentrating in public address take Speech 101. This accounts for about two to five per cent of the enrollment for any given term.

TABLE 2
SPEECH 101 ENROLLMENTS BY PERCENTAGE ACCORDING TO COLLEGE, 1964-65

College                              Fall (%)   Winter (%)   Spring (%)
College of Communication Arts
  (a) Advertising                        0           1            0
  (b) Journalism                          .5          .5           .5
  (c) Television & Radio                 3           1.5          1.5
  (d) Communication                       .5          .5          0
  (e) Speech                             5           4            1
  TOTAL                                  9           7.5          3
College of Agriculture                   5           9           10
College of Arts & Letters               11           9           12
College of Business                      4           6            6
College of Education                    32          29           20
College of Engineering                    .5          .5           1
College of Home Economics                4           5            7
College of Veterinary Medicine           0           0             .5
College of Natural Science               4           6            7
College of Social Science               18          16           18
University College                      12          12           14

From the beginning, the general goal of Speech 101 has been as follows:

To assist students, through knowledge of and experience in the principles and methods of speaking, to operate more effectively as agents of change in public speaking situations.25

25 Speech 101 Administrative Staff, "Syllabus for Speech 101: Public Speaking," Michigan State University, Department of Speech, May 18, 1960. (Mimeographed.)
It is significant to note that this general objective of the course has remained unaltered despite the many changes that Speech 101 has undergone since its creation. While it is difficult to deduce specific criteria for the evaluation of a course from such a general objective, it does appear that one rather important criterion is implied: that students be trained in such a way as to make them more effective public speakers as a result of taking Speech 101.

The original course syllabus listed the following specific goals for the course:

A. To help the student learn and put into practice the principles of good speaking: discovering or adapting the topic; finding, recording and interpreting materials of speaking; adapting to the audience; organizing and outlining the speech; developing and using language for speaking; and practicing and presenting the speech.
B. To help the student feel more comfortable in the speaking situation by assisting him in a personal adjustment to his role as speaker.
C. To help the student understand the role of speaking in our society.
D. To help the student understand and accept the responsibility of the speaker to his society.
E. To help the student develop the ability to analyze, criticize, and pass judgment on the speaking of others.26

26 Ibid.

Beginning with the Fall term of 1962, an additional goal was added to the course syllabus:

To help the student (you) understand and make effective use of the materials of speaking - materials of development, personal proof, and materials of experience.27

27 Ibid., ed. 1962.

It was felt that this change from the listing of goals in the original syllabus was made necessary because of the terminology of the new textbook used in the course.28 There was also a modification made in Goal A of the original syllabus. It was changed to:

To help put into practice the principles of good speaking - discovering or limiting the topic; adapting to the audience; organizing and outlining the speech; developing and using language for speaking; practicing and presenting the speech.29

28 Kenneth G. Hance, David C. Ralph, and Milton J. Wiksell, Principles of Speaking (Belmont, California: Wadsworth Publishing Co., 1962).
29 Speech 101 Administrative Staff, loc. cit.

This change, primarily in the area of evidence, was made in order that the goals of the course might reflect the method of organization of the new textbook adopted for the Fall term of 1962.

The rhetorical objectives of the course are best represented in that part of the syllabus which describes the purpose of each of the oral assignments contained therein:

1. To provide the student with experience in using and evaluating evidence in a speech; experience in analyzing a topic; experience before a classroom audience.
2. To provide the student with experience in analyzing and adapting to an audience and an occasion; experience in arresting and holding the attention of a group of listeners; experience in the use of motive appeals; experience in adapting logical materials to an audience.
3. To provide the student with the materials of speaking and methods for putting them together in a pattern which will produce an acceptable public speech.
4. To provide the student with experience in considering the language necessary to "put across" a speech employing the inductive pattern; experience in utilizing the principles of effective delivery in speaking.
5. To provide the student with experience in organizing, outlining and presenting an informative speech with the use of visual aids.
6. To provide the student with experience in the complete preparation and presentation of a speech of advocacy, including analysis of the audience, occasion, subject, and speaker; experience in the selection of the appropriate materials of speaking; experience in the organization of the speech in terms of the plan best suited to the situation (including the possibility of indirect approaches to the subject).30

30 Ibid.

These purposes seem to correspond closely to the traditional canons of rhetorical theory: invention, arrangement, style, and delivery.31

31 The approach to the so-called "lost" canon of "memory" used by Speech 101 is to put emphasis on the extemporaneous mode of speech delivery.

Summary of Objectives of Speech 101. -- The goals and objectives of Speech 101 as stated in the course syllabus are not so defined that it can be objectively determined whether they are achieved. If these objectives were phrased in such a manner as to suggest specific behavior patterns, the task of evaluation of the course could be scientifically approached. One comment may be made as a result of comparing the phraseology of the general goal of the course with its specific goals and purposes. It appears that the philosophy of the course assumes that the experience it provides for each student is rhetorically sound; furthermore, it assumes that the mere act of experience on the part of the student, as structured by the course syllabus, helps to make him more effective in public speaking situations. It is not said that the experience provided will make the student an effective agent of change, but that he will be more effective than before. In general, it can be said of Speech 101 that its objectives are so phrased that it would be difficult to avoid meeting them.

Two general statements need to be added with respect to the lack of behaviorally oriented goals for Speech 101. First, work is now being done within the framework of the course to phrase its objectives in terms of observable behavior. Second, the list of goals as it now stands is probably more complete and more integrated with the assignments than in most beginning speech courses.

The Logistics of Speech 101

The present assignments of Speech 101 can be divided into two classifications: written and oral. The present written assignments can be subdivided into two categories: those dealing with specific recitation assignments, and the written examinations. All assignments are outlined in the course syllabus.32

32 Speech 101 Administrative Staff, op. cit., ed. 1965.

There are two written examinations given during each term of Speech 101: a mid-term and a final. The mid-term examination is given at the fifth lecture period of the course and covers approximately 40% of the course material (lecture and textbook assignments). The mid-term contains fifty five-option multiple choice items. These items are written by the recitation instructors, and material for them is taken from the course lectures and textbook assignments. No letter grade is given for a score on the mid-term examination, and the student is informed only of his raw score. A frequency distribution is made of the raw scores; the mean and standard deviation are computed and are available for distribution to the students by the recitation instructors. The final examination contains one hundred five-option multiple choice items and covers all lecture and textbook assignments for the course.
About 40% of the final examination covers the same material as the mid-term; no letter grade is given for a raw score on the examination. The final examination is given according to the Michigan State University Final Examination Schedule, as contained in the "Schedule of Classes" for the particular term.33

33 Michigan State University, Time Schedule for Classes: Spring 1965 (Vol. 59, No. 9; East Lansing: Michigan State University Press, February 1965), p. 73.

The specific written assignments related to activities carried on within the recitation sections tend to vary from term to term and from one recitation section to another. The only uniform requirement of the course in this area is that a written speech plan is to be prepared by the student for each of his oral assignments. These speech plans are handed to the particular recitation instructor prior to the delivery of the speech and are evaluated by him. At the discretion of the instructor, the evaluation of the speech plan may modify the score given to the student on his oral presentation. An examination of the course syllabus (see Appendix A) reveals references to other written material in conjunction with given speech assignments. The completion of these assignments is optional with the recitation instructor and is, in part, dependent on the size of the recitation section.

Six oral assignments comprise the student's public speaking experience in Speech 101. All speeches are given in the recitation sections. They are so scheduled that the student speaks once every five meetings of the recitation section (on the Monday-Wednesday-Friday pattern). Each of the oral assignments has a specific purpose (these purposes have been stated as the rhetorical objectives of the course). All oral presentations in Speech 101 require the student to speak extemporaneously. Any other mode of delivery is discouraged and penalized at the discretion of the recitation instructor. The speeches are evaluated (either by "peers" or by the recitation instructor) via the use of a specialized "speech evaluation form." Each speech has a weighting factor as follows:34

Speech I     2 WF
Speech II    2 WF
Speech III   3 WF
Speech IV    3 WF
Speech V     5 WF
Speech VI    5 WF

34 Speech 101 Administrative Staff, "Speech 101 Instructor's Manual," Michigan State University, Department of Speech, 1962, p. 10. (Mimeographed.)

The major logistical problem that confronts Speech 101 is the scheduling of the written and oral assignments mentioned above within the context of the peer group recitation section.35 Recitation sections for Speech 101 are offered in three sequential patterns.36 Some of the sections follow a Monday-Wednesday-Friday or Tuesday-Wednesday-Thursday pattern, each class period being fifty minutes in length. The remaining sections follow a Tuesday-Thursday pattern, each period being eighty minutes in length.

35 Speech 101 was converted to peer group recitation sections in the Fall of 1962. This conversion affected the total procedures of the course. Non-peered recitation sections, when they were made necessary because of enrollment irregularity, involved a modification of these procedures.
36 Speech 101 lectures occur on Monday at three specific times in order to accommodate the various schedules of the students enrolled in the course.

For our purposes, the first two patterns can be combined, so that the discussion of the logistics of the recitation period involves sections that meet for three fifty-minute periods per week and sections that meet for two eighty-minute periods per week.

As was stated previously, the instructor of the peer group recitation section is responsible for fifty students. Two adjacent rooms are made available for each peer group recitation section. For the first two class periods the recitation instructor meets with all fifty students in the larger of the two classrooms assigned.37 During these periods the instructor explains the general procedures of the course, talks about the nature of peer grouping and the responsibilities that the student is expected to assume as a result of being a peer group member, assigns a recitation number to each student, hands out a schedule of recitation activities based on the assigned numbers, and makes an initial reading assignment from the course textbook. During the orientation periods a member of the administrative staff of Speech 101 assigns each student to a room in which he can view the course lectures on video tape. The student receives during the first period of the recitation section a copy of the course syllabus (Appendix A) and a schedule of events (Appendix B), a set of instructions explaining the procedures of peer grouping (Appendix C), a sample speech plan (Appendix D), and an information form to be filled out and returned at the next recitation section meeting (Appendix E).38 The operation of the peer group is also role-played under the direction of the recitation instructor during the orientation periods. A more detailed discussion of the operation of the orientation portion of the course can be found in the "Speech 101 Instructor's Manual" (Appendix F).39

37 Two class periods are provided for course orientation regardless of the daily sequence of the recitation periods.
38 Speech 101 Administrative Staff, loc. cit., pp. 11-15.
39 In many cases these room assignments correspond to the recitation sections in which the student originally enrolled. In the scheduling of recitation sections, the University does not make reference to the concept of peer grouping.

Once the orientation periods have been completed, actual peer grouping begins. The students with recitation numbers 1-25 meet in one of the assigned rooms and the students with numbers 26-50 in the other. These rooms are used for the activities of the recitation section throughout the term. For convenience, students in one room are referred to as Group A and the students in the other room as Group B.

The recitation instructor spends approximately half the term with Group A and half with Group B. In actuality, the instructor alternates between the groups according to the oral assignments outlined in the course syllabus. Thus, he is with Group A for Topics I, III, and VI and with Group B for Topics II, IV, and VI.40 The remaining oral assignments for each group are given before the class without the recitation instructor's presence.

40 For Group B, Topics V and VI are transposed so that all Topic VI speeches are heard by the recitation instructor.

The peer grouping procedure relies a great deal upon the student's assumption of some of the responsibilities for the course administration. When the instructor is present, one member of the group serves as timekeeper for student speakers and another member as chairman for the day. The chairman is responsible for the scheduling of the day's activities. The recitation section schedule (Appendix B) is so structured as to allow for a day-by-day change of the students who serve as timekeepers and chairmen. When the instructor is not present, the chairman also takes the roll for the day, distributes the necessary materials to the peer evaluators, and schedules the day's activities. In addition, five to eight members of the group serve as an evaluative panel for the day's speeches. They rate each speaker using a prepared speech evaluation form and give an oral critique of each speech. The chairman for the day gathers up the completed speech evaluation forms and the speech plans and returns them to the instructor at the end of the recitation period. The peer evaluators are changed daily. Furthermore, the schedule is so arranged that a speaker does not evaluate those who evaluate him on a given speech topic. When the recitation instructor is present he replaces the peer evaluators. He hears all the speeches for the topic in question, fills out a speech evaluation form, and gives an oral critique. Because of the tight scheduling, it sometimes becomes necessary for the timekeeper to regulate the oral critiques given by the recitation instructor.

Since a major portion of this study is concerned with the specific methods used in the evaluation of student performances, a detailed discussion of them will be provided in succeeding sections. However, since we are here concerned with the logistics of Speech 101, it seems fitting to give some consideration to the evaluation of student performance as it relates to grade determination.

Before the conversion of Speech 101 to peer grouping, a student's grade was determined by his accumulation of letter grades on each of six to eight speeches, on the mid-term and final examinations, and on additional written work required at the option of the recitation instructor. Weights were given to each of these listed segments in such a way that approximately 60% of the student's grade was determined by his oral performance and approximately 40% by the grade he received on the written work of the course.
Thus, since the Fall of 1962, a student's grade in Speech 101 has been a function of his score on three instructor evaluated speeches and his scores on the mid-term and final examinations.41 The policy of establishing a specific letter grade for any given item was abandoned in favor of points which could be weighted according to the value of the item. It was reasoned that a student's grade would then be a representation of his total accumulation of points for the course. For the first two years in which the course was peer grouped, a distribution of the total number of points for all students was made and intervals were established in order to make estimates of final grades based on this distribution. However, the responsibility for the final grade for a student rested with the recitation instructor.

41 The course syllabus states that a student must achieve a passing grade in both the oral and written aspects of the course in order to successfully complete the course. The degree to which this requirement has been enforced has varied from term to term. This was particularly true before peer grouping was begun. With the act of constructing a distribution of the total accumulation of points, the enforcement of this regulation became stronger, especially when the issue was the passing or failing of the written examination. The determination of what constituted a failing oral performance was left entirely in the hands of the recitation instructor until Fall, 1964.

When Dr. Craig Johnson took over the responsibility of the research projects directly related to Speech 101, he chose to concentrate on the determination of valid measuring instruments in the form of written examinations and speech evaluation forms. His initial research provided some interesting information with regard to the effects of particular items used to determine the final grades in Speech 101. Using the techniques of regression and factor analysis, he found that approximately 67% of a student's final grade in Speech 101 was determined by his scores on the written examinations and approximately 33% on the basis of the instructor evaluated speeches. Further investigations showed that the reason for this reversal of the policy stated in the course syllabus was the failure of the recitation instructors to utilize the whole range of numerical ratings on the oral assignments. These findings, together with the establishment of a "speech evaluation form" which showed a significant degree of reliability in the experimental situation, resulted in a major change in the logistics of grade determination for Speech 101.

The following procedure for grade determination was instituted in Speech 101 during the Fall term of 1964. Construction of the mid-term and final examinations became the responsibility of the research assistant in the course, though the policy of having the recitation instructors contribute the items to be used remained unchanged. The course chairman continues to scrutinize and approve all examinations prepared.42 The scores that the student receives on each test are standardized. These standard scores are then weighted so that the final examination standardized score has twice the weight of the mid-term examination standardized score. These weighted scores are then added together and the totals are standardized and recorded as the "total written score" for the student.

42 Instructions for writing multiple choice questions are included in the "Instructor's Manual" for the course (Appendix F).
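In present-day terms the computation just described can be sketched in a few lines. The following Python fragment is only an illustration, not the routine actually run for the course; the function names and the sample score lists are hypothetical, and simple z-score standardization is assumed.

```python
import statistics

def standardize(scores, mean=0.0, sd=1.0):
    """Re-express raw scores as standard scores with a chosen mean and
    standard deviation (plain z-scores by default)."""
    m = statistics.mean(scores)
    s = statistics.pstdev(scores)
    return [mean + sd * (x - m) / s for x in scores]

def weighted_total(score_lists, weights):
    """Standardize each list of scores, weight it, and sum across the
    lists for every student."""
    z_lists = [standardize(s) for s in score_lists]
    return [sum(w * z for w, z in zip(weights, row)) for row in zip(*z_lists)]

# Hypothetical raw scores for five students.
midterm = [31, 42, 27, 38, 45]
final = [70, 81, 55, 77, 90]

# Total written score: the final examination carries twice the weight of
# the mid-term; the weighted totals are then re-standardized (the scaling
# to a mean of 50 and a standard deviation of 10 is described below).
total_written = standardize(weighted_total([midterm, final], [1, 2]), mean=50, sd=10)

# The "total speech score" and the 60/40 combination described in the
# paragraphs that follow proceed in the same way, e.g.
# course_total = [0.60 * s + 0.40 * w for s, w in zip(total_speech, total_written)]
```

The same pattern, with weights of two, three, and five units, yields the total speech score discussed in the paragraphs that follow.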
After a distribution of scores on each of the three speeches for all students enrolled in the course is determined, a mean and standard deviation are computed and a standard score evolved for each student's speech. This represents a significant departure from previous years of Speech 101, when the recitation instructor determined the speech score that a student received on a given oral performance independently of the scores given by other recitation instructors. The standard speech score for a particular topic is weighted in such a way that the score on the first instructor evaluated speech is weighted as two units, the second instructor evaluated speech as three units, and the third instructor evaluated speech as five units.43 These weighted standard scores are then added together and their total is standardized and recorded as the "total speech score" for the students. Both the total written and speech scores are scaled so as to have a mean of 50 and a standard deviation of 10. These total scores are then multiplied by .40 and .60 respectively so as to meet the requirement of the 60/40 ratio of speech to written scores. These scaled totals are added together and form the basis on which a final grade is determined for each student. The mathematics of this grading procedure is performed by the CDC 3600 computer owned by Michigan State University. For the current version of Speech 101 the final decision for the relationship between a specific score and a letter grade is made by the administrative staff of the course.44

43 Peer group evaluations had been used only experimentally as a basis for grade determination. This fact does not express any lack of confidence in the evaluative ability of students, but only a reluctance to incorporate this feature into the grading procedures until that time when an adequate Speech 101 rating scale has been developed.

44 The mid-term grade estimates are determined in a similar fashion, but are based only on the first instructor graded speech and the student's score on the mid-term examination.

Summary of the logistics of Speech 101. -- The preceding discussion represents an attempt to outline some of the important procedures that have been evolved for Speech 101 and serve to structure its operation. This discussion is not complete in itself but should be supplemented by the cited material in the Appendices of this study.

Summary of Chapter II

Chapter II of this study has attempted to trace the development of Speech 101 at Michigan State University. In addition, the objectives and logistics of the current version of the course have been listed so as to provide the structure in which the evaluative techniques can be effectively described and analyzed. A major concern of this study is for the techniques of evaluating student performances employed in Speech 101. Chapter III will concentrate on the development and evaluation of the written examinations in the course, and Chapter IV will describe the development and evaluation of the "speech evaluation forms."

CHAPTER III

THE EVALUATION OF STUDENT PERFORMANCE USING WRITTEN EXAMINATIONS

The purpose of this chapter is to describe and analyze the use of written examinations in Speech 101 as a technique for the evaluation of student performance.
The presentation will be divided into three areas: (1) the objectives of the testing procedures employed in Speech 101; (2) the development of the test items and the administration of the examinations; (3) the analysis and interpretation of the test results.

The Objectives of the Testing Procedures

There are two basic objectives of the testing procedures used in Speech 101. The first objective points to a desire on the part of the administrative staff of the course to measure a student's knowledge of the principles and concepts of speech. This goal assumes a difference between knowledge of principles and concepts of speech and their application in the oral situation. As McBurney and Wrage point out:

The fact that knowledge about speech contributes to skill in speech is often taken to mean that skill implies knowledge and knowledge implies skill. This hasty conclusion is dangerous and a half-truth which often results in serious mis-evaluation. One person may give creditable speeches without any real understanding of principles. Does this mean that speech is an empiric knack, and that no relationship exists between understanding and skill? Not at all! It simply means that there are individual differences in speech competence related to aptitude and maturation. To make a person think he understands the principles of speech because he can give a passable speech or even a good speech is to run the risk of denying him opportunities for greater skill through better understanding; and, by the same token, to make another think he does not understand the principles because his speech leaves something to be desired may seriously mis-direct his efforts. One of the best ways to avoid these mis-evaluations is to test understanding of principles independently of skill.1

1 James H. McBurney and Ernest J. Wrage, The Art of Good Speech (New York: Prentice-Hall, Inc., 1953), pp. 54-55.

The second objective of the Speech 101 testing procedure is to evaluate examination questions. Major concern for the development of acceptable test items began with the Fall term of 1963. A goal was set to accumulate 1000 test items which could be used in different combinations and could be structured to fit the assignment schedules of the course.

The method used for the attainment of the two stated objectives of the Speech 101 written examinations has been to develop and administer a multiple-choice form of test. This type of examination format was chosen because, as Travers points out:

The multiple-choice type of problem presents a flexible kind of problem situation and, contrary to a common misconception, it can be used to appraise thinking skills as well as simple recognition skills. There is nothing unrealistic about the way in which students respond, though many critics feel that the free-answer test represents something much nearer to the situations that arise in daily life than does the multiple-choice type of test. However, there are two sides to this question. In most problems that are commonly encountered in life, the possible solutions are evident and the problem is largely that of selecting the right solution. Most of the mistakes that people make in life are not the result of failure to consider the correct solution to a problem as a possible one; they result more frequently from a failure to consider the correct solution as the best and the resulting choice of an inferior alternative.
Very few situations are ever encountered, outside of scientific work, in which the individual does not have to make a choice from the alternatives which present themselves. Consequently, the multiple-choice test problem is not so artificial as it may seem to be at first sight. In most multiple-choice problems in which the student has to weigh the relative merits of the various solutions, the tasks he performs are not very different from those he must undertake in daily life.2

2 Robert M. W. Travers, How to Make Achievement Tests (New York: Odyssey Press, 1950), pp. 62-63.

Two other advantages of the multiple-choice form of examination are its adaptability and its ease of scoring. The multiple-choice form is adaptable to a wide variety of item topics. It can be used to measure knowledge of facts, such as definitions and dates, where the student's response depends on recall and he either knows the answer or does not know it. The form can also be used to measure complex abilities and fundamental understandings, which "in most cases, require the student either to see new relationships between facts or apply principles to relatively novel situations."3

3 Ibid.

Multiple-choice items are well adapted to modern techniques of machine scoring.4 New electronic scoring machines are located in several test scoring centers (including the Office of Evaluation Services of Michigan State University). These techniques of machine scoring require the student to make an opaque mark in a required space on a specified answer sheet. At Michigan State University these answer sheets are read by an optical scanner which records on the answer sheet the total number of questions responded to as well as the number of correct choices made by the student. In addition, this machine will punch on IBM data cards how the student responded to each test item as well as the number of items he answered and his score based on his correct choices. This type of machine can score and punch information for approximately 1000 answer sheets per hour. The IBM cards can go directly to a computer for the determination of percentile and standard scores as well as for item analysis.

4 Kenneth F. McLaughlin, Interpretation of Test Results (Washington: United States Department of Health, Education and Welfare, 1964), pp. 15-16.

Thus, the versatility of measurement of a multiple-choice test and its ease of scoring make this form of examination particularly suitable for use in Speech 101.

Test Construction and Administration

All tests in Speech 101 are in the multiple-choice format. The examinations cover material presented in the weekly lecture periods and in the textbook for the course. There are two such tests: a mid-term, consisting of fifty items, and a final, consisting of 100 items. Individual questions are written by the recitation instructors of the course.

As was stated in the preceding section, the initial effort of the research in the area of Speech 101 written examinations has been to develop a sufficient number of acceptable test items. This objective has had a direct bearing on the procedures of test development used in the course. The course requires of the recitation instructor that he write five five-option multiple-choice items per week. At first, the instructors were given free range of the textbook and lecture material from which to write items. It soon became obvious that such freedom provided no guarantee of items of sufficient coverage of the course material.
Thus, it became the responsibility of the administrative staff of Speech 101 (primarily that of the research director) to assign specific material from which test items should be written. Such a procedure then allowed for the development of a reservoir of test items which covered the essential material of the course. As the number of available items increased, the specificity of sources for the items tended to relax.

The major problem discovered with allowing the recitation instructors to write the test items was the fact that most of them had little experience in the writing of multiple-choice questions. For the first two years of the course, no real instruction was given in how to write acceptable questions. Consequently, a great many items were rejected by the administrative staff of the course and were never used in tests. In the case of such rejection, the questions were destroyed and the recitation instructors told to construct new items covering the same material as the rejected ones. When Dr. Craig Johnson became research director for the course (Fall term 1963) a set of instructions for writing multiple-choice items was added to the "Speech 101 Instructor's Manual."5 Some time was also devoted in the Speech 101 staff meetings to the writing of test items.6 As a result of these instructions and discussions there appeared to be an increase in the number of items which on their initial presentation were accepted as usable on an examination.

5 F. Craig Johnson and George R. Klare, "Procedures for Item Writing," Michigan State University, Department of Speech, 1963. (Dittoed and contained in the "Speech 101 Instructor's Manual," Appendix F.)

6 Staff meetings in Speech 101 are held once a week during a given term and are attended by the recitation instructors and the administrative staff of the course.

The following represents the form in which the items written by the recitation instructors were handed to the administrative staff of the course:7

7 These forms are standard for multiple-choice questions and are provided by the Michigan State University Office of Evaluation Services.

Figure 1

TOPIC: Classifying the Audience          OBJECTIVE: Chapter 7 -- page 117

Establishing a basis for the hearers' confidence in the speaker is the first task when dealing with:
(1) an apathetic audience.
(2) a hostile audience.
(3) a friendly audience.
(4) a mixed audience.
(5) a neutral audience.

KEY: 1     DATE: 10/12/64     DIFFICULTY:          DISCRIMINATION:

As was stated previously, when the questions are received by the administrative staff of the course, they are screened in order to determine their usability. While this judgment is fairly subjective, two general criteria are kept in mind: (1) the appropriateness of the idea reflected in the question to the material stressed in the course; (2) the degree to which the question corresponds to the multiple-choice format. Items which appear to meet these
This selection is structured by the requirements that a given test should: (1) be based on items covering the course's lectures and textbook assignments which are pertinent to the amount of material that the student is supposed to have studied by the time of the examination; (2) reflect points of emphasis in the course. Questions once selected (this initial choosing of test items is done by the research assistant of the course) are scrutinized by the course chairman. This check accomplishes two things. First, it provides for a second application of the criteria for item selection. Second, it tends to sift questions so that they do not cover identical material. Questions which pass the above screening processes are then combined in order to make two forms of a particular test. These forms differ only with respect to the ordering of the items.8 Each form of the examination is then typed 8For the first two terms of 1963/64 two separate exam- inations were constructed. This procedure allowed for the evaluation of 300 items per term, but caused some degree of confusion when the two tests showed significantly different results. In an attempt to be fair to the student, in the Spring of 1964 the test construction policy shifted to the preparation of two forms of the same examination. 50 and mimeotraphed tOgether'with a cover'sheet and a set of instructions for taking the test.9 A specially prepared answer sheet provided by the Michigan State University Office of Evaluation Services is attached to each examina- tion. These answer sheets are adapted to rapid scoring by machines. The scheduling of Speech 101 is such that a given exam- ination must be administered three times in order to be taken by all the students enrolled in the course. Both forms of the constructed examination are administered during each of the three examination periods. The actual administration of a particular examination is conducted by the entire staff of Speech 101. The fact that, in number, this staff is significantly smaller (be- cause Of peer grouping in the recitation sections) than other courses involving 600 to 800 students per term, some- what justifies certain security procedures associated with the giving of an examination. The number of pe0ple avai- lable to administer*an examination varies, but the ratio has worked out to be two proctore per 100 students taking the examination.10 One supervisor (usually a representa- tive of the administrative staff of the course) gives in- 9For secutiry purposes each form of the examination is mimeographed in two colors. Thus, to the eye, there are four versions of each test. 10There are two examination periods, each of which in- volves approximately 40% of the course enrollment. The third examination period tends to cover the remaining 20% of the course enrollment. 51 structions to the students for filling out the answer sheets and for leaving the examination room. Once these instructions have been given, the students are told to begin the examination. When the student has finished the examination, he leaves the room as instructed. Students with one form leave by one door, students with the other by a second door. This procedure allows for a preliminary sorting of the exam- inations and answer sheets. Once all the students have taken the examination, the answer sheets are checked to make sure that they have been sorted correctly and are then put in.order’according to the recitation section number of the students. 
Answer sheets for both forms of the examination are given to the Office of Evaluation Services for scoring. In addition to scoring the answer sheets, the Office provides IBM punched cards for each answer sheet. These cards contain the total scores and the students' responses to each item on the examination. The answer sheets and the punched cards are returned to the administrative staff of Speech 101 for analysis and interpretation.

Results of the Examination Procedures

The most immediate use made of the examination results in Speech 101 is their integration into the grade determining procedures outlined in Chapter II of this study. At this time an assumption is made that the student's examination score is a reflection of his knowledge of the principles and concepts of speech covered by the test. The rationale for this assumption is based primarily upon the idea that the screening processes for test item selection are such as to certify a certain degree of face validity to an examination in total.11

11 Face validity is here used in its conventional sense, that is, the degree to which a test appears to measure what it is supposed to measure. In the case of Speech 101, this decision is made by those familiar with and responsible for the content of the course at the time of test item selection.

Once the process of grade determination has been completed, a statistical analysis is made of the items used on the examination. The major purpose of this item analysis is to determine whether or not the specific questions used on the examination meet certain requirements of "acceptability." Two criteria of acceptability are used in the evaluation of Speech 101 examination items: (1) their difficulty; (2) their discriminating ability.

The index of difficulty is defined as the percentage of the total group marking a wrong answer or omitting the item.12 Such a definition is consistent with the most frequently used and convenient method for computing and expressing item difficulty.13 For the purposes of Speech 101 each examination can be assumed to be a power test, since ample time is allowed for the student to consider every item on the examination. No correction for guessing is applied in the computation of item difficulty. There appear to be a great many divergent opinions on the matter of a correction for the guessing factor on multiple-choice examinations.14 Since this issue is unresolved in the minds of test constructors, and since Speech 101 examinations are assumed to be power tests, the lack of a correction factor in the determination of item difficulty is not viewed as a significant weakness in the process of item analysis used in the course.

12 Office of Evaluation Services, "Program F0 303 for IBM Digital Computer," Michigan State University, Office of Evaluation Services, 1965. (Magnetic Tape.)

13 William K. Price, "The University of Wisconsin Speech Attainment Test." (Unpublished Ph.D. dissertation, Department of Speech, University of Wisconsin, 1965), p. 74.

14 Frederick R. Davis, "Item Selection Techniques," in Educational Measurement, ed. E. F. Lindquist (Washington: American Council on Education, 1951), Chapter 9, p. 271.
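Stated as a computation, the index of difficulty is simple. The sketch below assumes exactly the definition just given, with wrong answers and omissions counted together and no guessing correction; the function name and the example figures are hypothetical.

```python
def difficulty_index(responses, key):
    """Percentage of the total group marking a wrong answer or omitting
    the item (omissions are represented here as None)."""
    missed = sum(1 for r in responses if r != key)
    return 100.0 * missed / len(responses)

# Hypothetical item: 300 examinees, 180 of whom chose the keyed option (3).
responses = [3] * 180 + [1] * 90 + [None] * 30
print(difficulty_index(responses, key=3))   # 40.0, i.e. a difficulty of 40%
```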
Wood sug- gests that: When the total score on a test is used as the criterion variable for judging the discri- minating power of each item in it, the resulting instances, reflect, among other influences, the 1“Frederick R. Davis, "Item Selection Techniques," in Educational Measurement, ed. E. F. Lindquist (Washington: American Council on Education, 1951), Chapter 9, p. 271. 15Officc of Evaluation Services, loc. cit. 54 extent to which the item measures the same mental functions as the total score. The fact that some items prove to have more discriminating ability than others means that for the group tested they are better'measures of whatever the whole test actually measures. 6 The discriminating power of a test item is an expres- sion of its ability to distinguish between good and poor students. If an item has high discrimination, it means that the tOp students on the examination get it correct while the low students fail it. The most convenient method17 for determining the discriminating power of a test item, and the one employed by the Michigan State University Office of Evaluation Services, is that develOped by Johnson and 18 called the ULI or "upper-lower index." The following represents the statistical notation of ULI: ULI== Ru - Re ___?___ Where: Ru, Re = the number of students giving the correct answer in the upper and lower groups respectively. “3 II the number of students in each group. The upper and lower groups used for Speech 101 exami— 16Dorothy Adkins Wood, Tgst Construction (Columbus, Ohio: Merrill Publishing Co., 1961), p. 43. 17Office of Evaluation Services, "Interpretation of the Index of Discrimination," Michigan State University, Office of Evaluation Services, 1965, pp. 1-3. (MimeOgraphed.) 18A. P. Johnson, "Notes on a Suggested Index of Item Validity: The UpperbLower Index," Journal of Educational Psychology, XLIII (1951), pp. 499-504. it: 55 nations are the 27% of the examinees who scored highest on a particular test and the 27% of the examinees who scored lowest on the same test. To avoid compounding the error factor associated with the ULI statistic with the variable of item-ordering, only one form of the examination is used for item analysis. In the process of evaluating a test that has been given, an Operational value is assigned to each of the cri- teria of acceptability. If, as a result of item analysis, an examination ques- tion has a difficulty within the range of 35% to 65% in- clusively, and it has sufficient discriminative ability, the item will be judged acceptable for future use in Speech 101. Price points out the advantage of setting up a range of values for the index of difficulty: The difficulty of the item is a critical factor in item selection in that it controls the shape of the distribution of the test scores, and the shape of the distribution controls the effi- ciency of the test. If the test is too hard, the distribution will be positively skewed and the test will discriminate well among only the good students. If the test is too easy, the distri- bution will be negatively skewed and the test will discriminate well among only the poor students. In order to achieve a symmetrical, normal distri- bution of test scores, the distribution of item 1 difficulties Should cluster around the 50% level. 9 In applying the criterion of discrimination to test- items of acceptable difficulty, it was decided that a ULI 19Price, op. cit., p. 78. 56 value of .20 or higher would be sufficient to meet the objectives of a standard Speech 101 examination. 
Such an operational value for the discrimination criterion is 2 suggested by Guilford in Psychometric Methods. 0 Thus, for each item to be judged "acceptable" it must have a difficulty within the range of 35% to 65% inclu- sively and an index of discrimination equal to or greater than .20. In addition to the application of the two criteria of acceptability, the options of the item are checked to see if they are plausible alternatives. An Option is desig- nated plausible if it is responded to by at least 3% of the examinees. Items which are acceptable but contain non- plausible Options are marked for revision. The following material illustrates the process by which examination items are evaluated after item analysis. For illustrative purpose questions from the Fall term 1964 final examination are represented. An asterisk signals the correct response to the item. Item 1 To find newspaper coverage of an event in the past, you would look in the: (1) Readers' Guide. (2) Poole's Index. ( ) Evergreen Review. *( ) New York Times Index, (5) National Editorial Review. 2OJ. P. Guilford, Psychometric Methods (2nd ed.; New York: McGraw Hill, 1954), p. 428. AV Yifi 57 TABLE 3 ITEM ANALYSIS OF QUESTION 1 Item Response Pattern l 2 3 4 5 Omit Error Total Upper 27% 9 14 0 58 3 0 0 84 11% 17% 0% 69% 4% 0% 0% Middle 46% 24 20 4 82 14 O O 144 17% 14% 3% 57% 10% 0% 0% Lower 27% 16 19 5 27 17 O O 84 19% 23% 6% 32% 20% 0% 0% Total 49 53 9 167 34 O 0 312 Difficulty 47: Discrimination .37 Question 1 was judged acceptable and filed for future use without revision. Item 2 Which of the following is not a characteristic of style? (1) Clarity *(2) Invention (3) Forcefulness (4) Vividness (5) Adaptability TABLE 4 ITEM ANALYSIS OF QUESTION 2 Item Response Pattern 1 2 3 4 5 Omit Error Total Upper 27% 1 75 0 0 8 O O 84 1% 89% 0% 0 10% 0% 0% Middle 46% 7 110 1 3 23 O O 144 5% 76% 1% 2% 16% 0% 0% Lower 27% 13 48 2 6 l5 0 0 84 15% 57% 2% 7% 18% 0% 0% Total 21 233 9 #6 312 3 O O 7% 75% 1% 3% 15% 0% 0% Difficulty 26: Discrimination .32 58 Question 2 was rejected on the grounds that its difficulty did not fall within the acceptable range. Item 3 The most important difference between oral and written styles is: *(1) instant intelligibility. (2) combination of words. ( vivid language. ( sentence length. ( mode of expression. Ui-L‘U) TABLE 5 ITEM ANALYSIS OF QUESTION 3 ___—.-__ Item Response Pattern 3 4 5 Omit Error Total 2 Upper 27% 70 l 0 3% 10 0 0 84 83% 1% 0% , 12% 0% 0% Middle 46% 86 2 4 45 144 1 7 O 0 60% % 3% 5% 31% 0% 0% Lower 27% 32 4 3 6 39 0 O 84 38% 5% 4% 7% 46% 0% 0% Total 188 7 7 16 94 O 312 O 60% 2% 2% 5% 30% 0% 0% Difficulty 39: Discrimination .45 Question 3 was judged acceptable but marked for re- vision of options 2 and 3. Item 4 The Speaker's stand is: (1) never to be used. ) to be gripped with the hands when tense and nervous. ) never to he leaned upon. ) to be used when appropriate rules are followed. ) to be dominated by the Speaker. 59 TABLE 6 ITEM ANALYSIS OF QUESTION 4 Item Response Pattern l 2 3 4 5 Omit Error Total Upper 27% O O l 24 59 O O 84 0% 0% 1% 29% 70% 0% 0% Middle 46% 0% 0 12 49 83 0 o 144 0% 0% 8% 34% 58% 0% 0% Lower 27% 0 1 5 7 31 0 O 84 0% 1% 6% 56% 37% 0% 0% Total 18 120 173 0 312 O 1 0 0% 0% 6% 38% 55% 0% 0% Difficulty 44: Discrimination .33 Question 4 was Judged acceptable but marked for re- vision of options 1 and 2. 1222.2 Advocacy, or persuasion, is: (1) absent in the kaffeeklatsch. *(2) at times a strengthening of an attitude already present. 
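The difficulty and discrimination figures reported beneath each table can be reproduced directly from the tabled frequencies. The following sketch is only an illustration (the function names are not taken from the course materials); it checks the values reported for Question 1 in Table 3 and applies the acceptability criteria just stated.

```python
def upper_lower_index(upper_correct, lower_correct, group_size):
    """ULI: difference between the proportions of the upper and lower
    27% groups marking the keyed answer."""
    return (upper_correct - lower_correct) / group_size

def acceptable(difficulty, uli):
    """Speech 101 criteria: difficulty between 35% and 65% inclusive and
    a ULI of .20 or greater."""
    return 35 <= difficulty <= 65 and uli >= 0.20

# Question 1 (Table 3): 58 of the 84 upper-group and 27 of the 84
# lower-group examinees chose the keyed option; 167 of all 312 did so.
uli = upper_lower_index(58, 27, 84)        # about .37
difficulty = 100 * (312 - 167) / 312       # about 46.5, reported as 47
print(acceptable(difficulty, uli))         # True -- filed for future use
```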
(3) more often a characteristic of informal speaking than it is of drama. (4) one of four major kinds of speaking. (5) of more relevance for the salesman than the clergyman. TABLE 7 ITEM ANALYSIS OF QUESTION 5 Item Response Pattern l 2 3 4 5 Omit Error Total Upper 27% 1 74 3% 5 l O 0 84 1% 88% 6% 1% 0% 0% Middle 46% 4 95 13 29 3 O O 144 3% 66% 9% 20% 2% 0% 0% Lower 27% 3 29 8 37 7 O O 84 4% 35% 10% 44% 8% 0% 0% Total 8 198 24 71 11 O O 312 3% 63% 8% 23% 4% 0% 0% Difficulty 38: Discrimination .53 60 Question 5 was Judged acceptable and filed for future use without revision. Item 6 It should be assumed that the entertainment talk: (1) is a legitimate and major speech type. (2) is an integral part of a lecture. *(3) gives greater understanding to the audience than to the speaker. (4) would be employed by the clergyman in a sermon. (5) is easily adapted to any subject. TABLE 8 ITEM ANALYSIS OF QUESTION 6 Item Response Pattern 1 2 3 4 5 Omit Error Total Upper 27% 0 9 19 l 55 O 0 84 0% 11% 23% 1% 65% 0% 0% Middle 46% 3 11 3g 5 9O 0 0 144 2% 8% 2 % 3% 63% 0% 0% Lower 27% 3 7 9 12 53 O O 4% 8% 11% 14% 63% 0% 0% Total 6 27 63 18 198 O 312 0 2% 9% 20% 6% 63% 0% 0% Difficulty 80: Discrimination .12 Question 6 was rejected on the grounds that neither its difficulty nor its discrimination fell within the acceptable range. Item 2 When using visual aids, the speaker should: (1) put the aids on an easel or blackboard. (2) put the aids in front of the speaker's stand. *(3) stand at the side of the material ) concentrate on the audience. (5) watch for audience feedback and misunderh standing. 3'? t... 61 TABLE 9 ITEM ANALYSIS OF QUESTION 7 Item Response Pattern 1 2 3 4 5 Omit Error Total Upper 27% O O 58 12 14 O 0 84 0% 0% 69% 14% 17% 0% 0% Middle 46% 1 1 88 20 34 0 0 144 1% 1% 61% 14% 24% 0% 0% Lower 27% 3 l 41 9 30 0 0 84 4% 1% 49% 11% 36% 0% 0% Total 4 2 187 41 78 0 312 0 1% 1% 60% 13% 25% 0% 0% Difficulty 40: Discrimination .20 Question 7 was judged acceptable but marked for re- vision of options 1 and 2. Item 8 Lord Chesterfield held of style that: 1) "le style est l'homme meme." style is a two-edged sword. the highest style comes from a pure soul. style is the dress of thought. it is a torrent of words drowning all thought. U1 4Tb.) N vvvv ( ( ( *( ( TABLE 10 ITEM ANALYSIS OF QUESTION 8 Item Response Pattern l 2 3 4 5 Omit Error Total Upper 27% 3 7 6 67 1 O O 84 4% 8% 7% 80% 1% 0% 0% Middle 46% 5 24 6 105 4 O O 144 3% 17% 4% 73% 3% 0% 0% Lower 27% l 16 11 52 4 O O 84 1% 19% 13% 52% 5% 0% 0% Total 9 47 23 224 9 O O 312 3% 15% 7% 72% 3% 0% 0% Difficulty 28: Discrimination .18 v..— 62 Question 8 was rejected on the grounds that neither its difficulty nor its discrimination fell within the acceptable range. .12-3.41.1 In the speech to entertain, a long narrative demands that the speaker: (1) 433 (5) hurry along. use more pauses than usual. read from notes. possess an excellent vocabulary. use no description. TABLE 11 ITEM ANALYSIS OF QUESTION 9 U 27 4 12 pp” % 5% 1:3 1% 79% 1% 0% 0% Middle 46% 10 Lower 27% Total 23 3 0 3% 33% 3% 64% 0% 3% 0% 11% 29% 2% 57% 1% 0% 0% 74 2 O 7% 24% 2% 66% 1% 0% 0% Item Response Pattern 2 3 4 5 Omit Error Total 1 66 1 O O 84 O 144 O 84 0 312 Difficulty 34: Discrimination .22 Question 9 was rejected on the grounds that its diffi- culty did not fall within the acceptable range. Item 10 The audio-visual aid has most value when used: (1) *(2) (3) (4) (5) dynamically. to supplement a normal communicating process. exclusively of all other'methods of communicating. in large rooms. 
when the subject matter of the speech is factual.

TABLE 12

ITEM ANALYSIS OF QUESTION 10

                         Item Response Pattern
              1      2      3      4      5   Omit  Error  Total
Upper 27%     5     71      0      0      8      0      0     84
             6%    85%     0%     0%    10%     0%     0%
Middle 46%    6    109      2      1     26      0      0    144
             4%    76%     1%     1%    18%     0%     0%
Lower 27%     5     50      4      5     20      0      0     84
             6%    60%     5%     6%    24%     0%     0%
Total        16    230      6      6     54      0      0    312
             5%    74%     2%     2%    17%     0%     0%

Difficulty 26: Discrimination .25

Question 10 was rejected on the grounds that its difficulty did not fall within the acceptable range.

As stated previously, items which fail to meet the criteria of acceptability are destroyed and those which meet the requirements are filed for future use.21 Items which are judged acceptable, but require some revision of their options, are returned to the recitation instructors for modification. There are no specific instructions provided for this modification, though the instructors are given the results of the item analysis of the particular question. Revised items are treated as new questions available for immediate use on a given examination.

21 As a point of interest, a periodic check is made of the content of those questions which item analysis has determined to be too easy and non-discriminating. If the content of these questions, in the mind of the course chairman, reflects material that has been emphasized in the lectures and textbook, they are retained in a special file. It is felt that this type of question might point to some attainable behavioral objective for the course.

Summary of Chapter III

There are two objectives for the use of written examinations as a technique for the evaluation of student performance in Speech 101: (1) to determine the degree to which the student has mastered certain principles and concepts of speaking; (2) to evaluate examination questions according to certain criteria with a view toward their eventual use on standardized Speech 101 tests.

It was observed that the validity of a given Speech 101 examination is based solely upon the judgment of the administrative staff of the course at the time of its construction. It was suggested that a set of behaviorally phrased course objectives would be a real asset to this type of judgment. This would be particularly true at the time when a decision is being made as to the relevance of the test item to the material stressed in the course. A thorough attempt to match examination questions to the behavior patterns of effective speakers would appear to add more weight to the assumption that a given Speech 101 examination score reflects the student's knowledge of the essential principles and concepts of public speaking.

As was noted at the outset of this chapter, a goal was set to develop 1000 acceptable test items which could then be used in different combinations to form the Speech 101 examinations. The dual purpose of the testing procedure is justified in terms of this goal. At the present time approximately 500 items have been judged acceptable (between 50% and 60% of the items on a given examination are so judged). The application of a set of behavioral objectives to item selection processes would probably reduce the number of available questions. Thus, it behooves those directly concerned with the course to give some serious consideration to how the processes of item selection can be made more efficient.
There are two suggestions that can be made in this area: (1) that once the behavioral objectives of the course have been determined they should be made available to the recitation instructors as aids in test item construction; (2) that more time be devoted, in the staff meetings of the course, to the issues of initial item writing, and particularly to how to revise questions that do not have plausible options.

The accumulation of a large number of acceptable test items will allow for the construction of similar examinations whose scores may validly be used as a basis for the comparison of students from one test to another. The present statistical standardization of total scores for such a purpose is not very meaningful when one considers that these scores are based on totals from examinations which are only partially successful at making distinctions between good and poor students.

It should be noted that the application of the statistically oriented indexes of difficulty and discrimination does not directly confront the issue of test validity. The meeting of these criteria by all the items on an examination assures only that a distribution of test scores will be such that a statistically significant distinction can be made between them. While it is true that this type of distinction is necessary when the tester is faced with the pragmatic need to give grades and be fair to students, it is not a substitute for the notion that a test should have a meaningful relationship to the principles and concepts being examined.22

22 For an excellent discussion of test validity see Lee J. Cronbach, Essentials of Psychological Testing (New York: Harpers, 1965), pp. 96-123.

CHAPTER IV

THE EVALUATION OF STUDENT PERFORMANCE USING THE SPEECH 101 RATING SCALE

The purposes of this chapter are to present the rationale for the use of a rating scale in the evaluation of student public speaking performances; to trace the development of the Speech 101 rating scale; and to analyze and evaluate the results of using the Speech 101 rating scale in both experimental and classroom situations.

Rationale for the Use of Speech Rating Scales

In principle, the objective of a rating scale is to render the variable of human judgment as accurate and objective as possible. In a very real sense, rating scales are attempts to standardize those evaluations which are dependent upon human judgment. It is generally the case in a basic course in public speaking that the evaluation of student oral performance is the product of just this type of judgment. It may be considered or spontaneous, advanced by expert or uninformed, but it is human, and the act of making some type of evaluation based on a performance is judgmental. The degree to which such judgments have been standardized by the field of speech is an open question.

In 1928, in the first article offering a rating scale to appear in the Quarterly Journal of Speech, Stevens remarked, "any rating scale, based as it must be on opinion, falls considerably short of the concreteness and accuracy of objective measurements."1 Fortunately, some significant advances have been made in the field of psychometric measurement since 1928 which have made "the concreteness and accuracy of objective measurement" more attainable in the construction of rating scales for the evaluation of public speaking.
Particularly, the technique of factor analysis has enabled researchers to make objective estimates of the components of judgment involved in the evaluation of public speaking either by expert or lay opinion. All this is offered in justification for the consideration of the use of rating scales in the evaluation of public speaking, and as a practical answer to those who feel now as Stevens did in 1928. In 1929, Knower listed twelve values for the use of rating scales in the evaluation of student oral performance. These values are still applicable to the field of speech education. They are: 1. The use of rating scales may lead among teachers of public speaking to less dog- matism and a more open-minded approach to our problems of content and methodology. 2. The use of rating scales may have an edu- cational value to students of public speaking in that they keep the elements of the situation constantly before them. 1Wilmer E. Stevens, ”A Rating Scale for Public Speakgrs,” Quarterly Journal of Speech, XIV (April, 1923). p. 22 . 3. 5. 9. 10. 11. 12. 69 The use of rating scales makes possible a fairly accurate weighing of the elements in the total situation. The use of rating scales makes possible a more scientific, objective, and verifiable basis of measuring Speech performance in terms of a specific performance. Rating scales are more likely to lead to accurate grading than when the instructor "gets an impression" or follows random clues. Rating scales may be devised to represent Speaking ability in terms of a raw score or'in a graphic manner. Rating scales may be used as a pedagogical device to increase the interest of students in class work. Rating scales furnish a record of speech which may be used to advantage in speech training. Rating scales furnish a measure by which ability to rate others may be determined. Rating scales enable the computation of a group judgment which may have more objective value than individual criticism. Rating scales may be used to compare the work of sections taught by the same instructor. Rating scales may be used to compare the work of sections taught by different instructors. While there seems a sufficient rationale for the use of rating scales in public speaking courses, the nature of a proper Scale, complete and with meaningful items, has 2 Franklin E. Knower, "A Suggestive Study of Public Speaking Rating Scale Valuesa" uarterly Journal;of Speech, 1. XV (February, 1929), pp. 30- 70 been a perplexing problem for the field of speech. Review of Recent Literature With Respect to Speech Rating Scales The major research studies in the field of Speech since 1930 dealing with rating scales as a basis of speech evaluation can be divided into three classifications: (1) those studies dealing with items that could or should be included on speech rating scales; (2) those studies dealing with the comparative results of different forms of Speech rating scales; (3) those studies dealing with the reliability of ratings on speeches. In 1934, Norwelle3 conducted a study in which he sur- veyed twelve of the then pOpular textbooks of basic public speaking for "the various elements that make up an effective speech.” He came up with twenty-four items. 
By having students and "prominent speakers" rank what they considered to be the eight most significant items, he was able to reduce the list to the following ten scale items: (1) se- lection and organization of ideas;.(2) use of language; (3) audibility; (4) directness; (5) agreeableness; (6) cm- phasis; (7) naturalness; (8) poise; (9) position; and (10) gestures. 3Lee Norvelle, "Development and Application of a Method for Measuring the Effectiveness of Instruction in a Basic Speech Course," Speech Monographs, I (September, 1934). pp. 41-63- .1:C~...& d 0...:me A nine 8 71 In 1936, Monroe, Remmers and Lyle“ published a rating scale based on the following items: (1) posture; (2) direct- ness; (3) enthusiasm; (4) bodily action; (5) voice; (6) attention; (7) objectivity; (8) concreteness; (9) motiva- tion; (10) organization; (11) general effectiveness. 5 In 1941, Bryan and Wilkie advanced for consideration a sixteen item rating scale including the following: (1) opening remarks; (2) personal appearance; (3) voice; (4) distinctiveness; (5) flow of words; (6) self-control; (7) degree of energy; (8) platform behavior; (9) personality; (10) sincerity; (11) command of language; (12) basis of thought; (13) interestingness; (l4) reasoning; (15) con- cluding remarks; (16) value of speech. In 1957, Brooks6 constructed a forced choice scale for measuring Speaking achievement in which he used descriptive phrases such as "speaks fluently, good sentence structure, excessive movement, and lack of information" as items. “A. H. Monroe, H. H. Remmers, and E. V. Lyle, Measur- ing the Effectiveness of Public Speaking in a Beginning Course ("Studies in Higher Education," No. 29; "Bulletin of Purdue University," SSSV, No. 1; Lafayette, Indiana, Pur- due University, 1936). 5Alice I. Bryan and Walter H. Wilkie, "A Technique for Rating Public Speeches," Journal of Consulting PS1- cholo , V (March-April, 1941), pp. 80-90. 6Keith Brooks, "The Construction and Testing of a Forced Choice Scale for Measuring Speaking Achievement," Speech Monographs, XXIV (March, 1957), pp. 65-73. 72 In 1962, Becker7 experimented with instructor ratings of 442 freshman Speeches on the basis of an eleven item rating scale including: (1) subject; (2) analysis; (3) material; (4) organization; (5) language; (6) adjustment; (7) bodily action; (8) voice; (9) articulation and pronun- ciation; (10) fluency; (11) general effectiveness. In the previously cited study, Price8 in 1964 was able to develOp a scale, via factor analysis, which was origi- nally based on thirty-five items (coming from an examination of currently available speech literature) and later reduced them to six items: "(1) does the speaker sound reasonable?; (2) is the Speaker intelligible?; (3) does the speaker communicate well through bodily action?; (4) is the Speaker socially acceptable?; (5) does the speaker use language vividly and imaginatively?; (6) does the speaker have a pleasing voice?" From an examination of the research in the field of speech which seems relevant to the choosing of items to be included on a rating scale, it seems fair to conclude that although there is little agreement among experts with re- spect to specific items, the examined scales consistently allude to the same broad areas of evaluation of speeches. In general, these broad areas of consideration correSpond 7Samuel L. Becker, "The Rating of Speeches: Scale Indepenfiznce," Speech MonOgraphs, XXIX (March, 1962), Ppe 38- e 8Price’ OEe Cite, ppe 218-239e 73 to the traditional canons of rhetoric. 
It is also true that the cited studies tend to depend upon the experts in the field in order to generate or select items to be in- cluded on scales. The second area of concern of Speech researchers with reSpect to rating scales deals with the various rating tech- niques, their comparative results and their applicability to the evaluation of speeches. This specific area of interest came into prominence in the 1940's and 1950's. 9 and the other Two studies by Thompson, one in 1943 in 194410 dealt with the determination of the relative accuracy of common rating techniques used to evaluate public speaking. Thompson experimented with two techniques for rating student speeches: (l) descriptive scaling and (2) the Bryan-Wilkie scale. He concluded that no one rating technique had a great advantage over any other. Some work has been done in the field of speech dealing with scaling techniques which are dependent upon rank-order. A study by Fotheringham11 dealt with the advantages of re- 9Wayne Thompson, "Is There a Yardstick for Measuring Speaking Skill?" Quarterly Journal of Speech, XXXIX (Feb- ruary. 1943). pp. 87-91. 10Wayne Thompson, "An Experimental Study of the Accu- racy of Typical Speech Rating Techniques," Speech Mono- graphs, XI (1944). pp. 67-79. llWallace C. Fotheringham, ”A Technique for Measuring Speech Effectiveness in Public Speaking Classes,” Speech Monographs, XXIII (March, 1956), pp. 31-37. 74 quiring judges to rank speeches rather than rating them. He maintained that such a method reduces the error caused by generosity and social pressure involved with rating scales. However, this research was done before the era of application of factor analysis to the development of rating scales. Brooks, in his study, found that the ratings on a forced choice scale were comparable with those ratings of the same Speaker using a numerical scale.12 From an examination of the research in the field of speech dealing with the issue of the superiority of one rating technique over another it continually appears that there is no evidence that one technique offers any great advantage over>any other develOped technique. The choice of rating method seems to have been consistently a function of the pragmatic needs of the researcher. The third area of concern for Speech researchers has been the problem of determining rating validity and relia- bility. This problem has two facets: (1) the nature of a meaningful and reliable item on a specific scale; (2) the nature of the relationship between the reliability of a given scale item and the number of raters. Little work has been done beyond the establishment of face validity for items used on the various rating scales 12Brooks, loc. cit. 75 develOped for the evaluation of public speaking. Most of the early studies dealing with the selection of scale items have depended upon so called expert judgment for the selec- tion of items. The lack of agreement among the various groups of experts (a group being those used for a parti- cular study) with respect to specific items points to the difficulty of relying upon the attainment of face validity. The factor analytic studies such as those of Becker13 and Pricelu go beyond the establishment of face validity when they apply the results of actual item usage to the reduction of the list of variables from Which scale items are even- tually chosen. 
However, it has never been precisely deter- mined how far these techniques go in the direction of standard validity measures.15 Several studies have demonstrated that an increase in the number of raters using a particular scale yields a significant increase in the reliability of the judgments offered.16 However, as Miller points out, this finding has little meaning unless the established reliability is somehow tied to a corresponding increment in consistency 13Becker, 10c. cit. 14Price, op. cit. 15Gu11f0rd, OEe Cite, ppe 354-570 16Keith Brooks, "Some Basic Considerations in Rating Scale Development," Central States Sppech Journal, IX (February, 1957). pp. 27-31. 76 resulting from each increase in the number of raters.17 At the conclusion of his article, with reSpect to the degree of agreement among raters of public speaking, Miller suggests that the problems of reliability and vali- dity are yet persistent: I believe that the reliability and validity of Speech ratings often suffer because of ambituity and uncertainty regarding what we are about. Here, the need for clear and precise specificatigns of behavioral objectives becomes important. The purpose of this section of this chapter dealing with a brief review of research from the field of speech with reSpect to rating scales is merely to establish the fact that the nature of a prOper scale, complete, and with meaningful items remains an Open question. The Development of the Speech 101 Rating Scale From its inception in the Fall of 1960, Speech 101 used some form of rating scale for the evaluation of student oral performance. Appendix G contains a cOpy of the first Speech 101 rating scale. The inclusion and selection of items for the first Speech 101 rating scale were based on the decisions of the teaching staff of the course with the approval of the course 17Gerald R. Miller, "Agreement and the Grounds for It: Persistent Problems in Speech Rating," The Speech Teacher, XIII (November, 1964), pp. 257-61. lsIbid., p. 261. 81 77 chairman. The value of this scale was fairly well limited to the convenience of recording comments for students under general concepts of rhetorical theory. No attempt was ever made to make a student's grade a function of this rating scale, though it is probably true that the listing of the included labels and the requirement that the instructor making the evaluation of a student speech classify his comments according to those items, did affect the grade a student received. It can be said of the first Speech 101 rating scale, that it was never intended to meet any of the objective standards for’the evaluation of rating scales, and that in terms of its convenience as a method of recording observations, it represented a pragmatic consideration within the early development of Speech 101. The working out of the mechanical aspects of peer grouping postponed a major concentration on the develOp- ment of a new Speech 101 rating scale until the Fall term of 1963. It was at this time that Dr. Craig Johnson became the research director of the course. 
Because peer grouping required student evaluation of student speeches without an instructor's presence (and consequently, interpretation of the basis of evaluation), because the initial Speech 101 rating scale had not been subject to any type of vali- dation that might answer criticisms of its use, and because there was general interest in the whole question of evaluation of student oral performances on the part of the 78 entire staff of the course, (an interest that extended beyond concepts of nominal scaling), a decision was made to develOp a new rating scale. Initially, the problem of developing a new Speech 101 rating scale was attacked from the point of view of student use, that is, from the basis of what students themselves considered to be important items in the evaluation of public speaking. During the Fall academic term of 1963, students enrolled in Speech 101 were asked to list those items which they felt described both good and bad public Speaking. As a result of this investigation, a large, somewhat redundant list of items was obtained. Two lists were gone over by the administrative staff of Speech 101 and were reduced to forty-six items. These forty-six items were then put into a scaling form similar to a semantic differential. (See Appendix H for this scaling form.) During the Winter term of 1964, Scales using the forty-six items were used by peer evaluators in selected sections of Speech 101. Results of this usage were ana- lyzed for the purpose of determining which of the items tended to reflect common factors of student judgment. By selecting those items which resulted in the highest factor loadings when used to evaluate peer speaking, the forty-Six items were reduced to twenty-five. They were as follows: (1) enthusiasm, (2) sincerity, (3) friendliness, (4) eye 79 contact, (5) physical appearance, (6) personality, (7) speaking voice, (8) preparation, (9) vocal inflection, (10) examples, (ll) variety, (12) facial expression, (13) calm, (14) attitude, (15) knew Speech, (16) poise, (17) humor, (18) organization, (19) diction, (20) total effect, (21) lOgic, (22) interest, (23) courteous, (24) vocal pauses, (25) evidence. It was then decided that it would be meaningful to subject these twenty-five items to a more rigorous inves- tigation. A series of filmed speeches was obtained from Ohio University. These films had been evaluated with re- spect to their overall quality in a previous investigation by Drs Johnson while he was a member of the faculty of Ohio University. It was reasoned that this knowledge would allow for the selection of films of different overall quality for use in an experiment dealing with the valida- tion of twenty-five Scales evolved during the Fall and Minter terms, 1963/64, of Speech 101. The rationale for the use of these films was based on the idea that a good scale item should be applicable to speeches of differing quality. That is, the scale item value (a rating from one to seven) should vary with the quality of the speech, rather than the relevance of the scale item. It also appeared to be more rigorous to use speeches whose quality had been independently determined rather than to depend upon the production of a variance in quality of Speeches on a given 80 day or in a given class of Speech 101. The use of films also made it possible for the same Speeches to be evalu- ated by the entire enrollment of Speech 101 and its staff. 
It was thus reasoned that films of public speaking for use in the research project had advantages which more than compensated for the fact that filmed performance is not the normal type of student performance evaluated in a basic course in public speaking. Four films were selected for showing over closed cir- cuit television to students enrolled in Speech 101 on re- search day during the Winter term of 1964. These films were established to be of different quality by various investigations at Ohio University. The students were told to evaluate each of the speeches according to the twenty- five scales. They were instructed to use all the scales in as objective manner as possible. (See Appendix I for the form used in this investigation.) In addition, the reci- tation section instructors were asked to view and evaluate the films in the same manner as the students. Results of these evaluations were then subjected to factor analysis in order to determine whether the twenty-five scaled items could be grouped around a smaller number of common factors relevant to the evaluation of public speaking. The use of the technique of factor analysis as a means of determining common factors of evaluation is a relatively recent develOpment in the field of speech, but has often 81 been used by psychologists in the area of measurement. Many psychologists have engaged in ex- tensive testing programs, employing factor analysis to determine a relatively small num- ber of tests to describe the human mind as completely as possible. The usual approach includes the factor analysis of a large battery of tests in order to identify a few common factors. Then the tests which best measure these factors, or, preferably, revised tests based upon these, may be selected as direct measures of factors of the mind. 9 Use of the techniques of factor analysis for the derivation of items on a speech rating scale is exempli- fied by the work of Price at the University of Wisconsin.20 In his study, he used factor analysis as a method for examining the intercorrelations among thirty-four items contained on an experimental rating scale. In Price's study the data used in the analysis came from instructor evaluation of student speeches. It will be recalled that the decision to develOp a Speech 101 rating scale was mo- tivated in part by a desire to attack the problem of speech evaluation from the point of view of student rather than instructor usage. The problem of the objective classification of scale items, however, appeared to be similar enough to the Price study to justify the use of factor analysis in the develOpment of this scale. Factor loadings given in Table 13 were obtained as a 1gliarry H. Harmon, Modern Factor Analysis (Chicago: University of Chicago Press, 1960), p. 6: 2oprlee, op. cit., pp. 218-309. 82 result of student usage of twenty-five scaled items. TABLE 13 STUDENT FACTOR MATRIX FOR TWENTY-FIVE SCALE ITEMS Factor Loadings Item I II III IV 1. Enthusiasm .5282 .2927 .0859 .0277 2. Personality .4269 .1438 .1197 .0372 3. Variety .5303 .1438 .1197 .0372 4. Poise .3148 .5385 .0759 .3411 5. LOglc .1174 .2872 .6012 .0410 6. Sincerity .2722 .5883 .1061 .1734 7. Speaking Voice .5849 .2176 .1772 .2269 8. Examples .1862 .1514 .7591 .0446 9. Facial Expression .6366 .1773 .0307 .1012 10. Humor .4994 .2120 .2778 .1694 11. Interest .6056 .0321 .2622 .0440 12. Friendliness .4966 .3087 .0637 .0696 13. Preparation .1556 .6391 .2114 .2395 14. Calm .0345 .6817 .1142 .1210 15. Organization .0331 .4757 .6191 .1070 16. 
Courteous .1472 .5275 .2786 .0171
17. Eye Contact .4142 .1452 .0314 .4671
18. Vocal Inflection .6228 .1295 .1854 .2015
19. Attitude .4334 .3836 .1850 .2867
20. Diction .2325 .4612 .2545 .1033
21. Vocal Pauses .0175 .1020 .1230 .5082
22. Physical Appearance .3972 .0264 .1973 .4334
23. Total Effect .6282 .2063 .2376 .1566
24. Knew Speech .3345 .5461 .0385 .2489
25. Evidence .2276 .1255 .7412 .0978

Factor loadings given in Table 14 were obtained as a result of instructor usage of twenty-five scaled items.

TABLE 14
FACULTY FACTOR MATRIX ON TWENTY-FIVE ITEMS

Factor Loadings
Item I II III IV
1. Enthusiasm .7960 .3956 .0941 .0556
2. Personality .0354 .2435 .0146 .8391
3. Variety .6676 .2463 .4216 .1516
4. Poise .5249 .6023 .3200 .0026
5. Logic .1735 .2787 .8633 .1567
6. Sincerity .3209 .3400 .0579 .5349
7. Speaking Voice .0198 .7034 .2230 .3825
8. Examples .0685 .1444 .8164 .0602
9. Facial Expression .6847 .5260 .1209 .0322
10. Humor .5663 .1829 .3759 .1480
11. Interest .2448 .2803 .8730 .0844
12. Friendliness .0225 .1560 .2058 .7106
13. Preparation .3684 .4973 .3982 .4459
14. Calm .0103 .8750 .2142 .1642
15. Organization .0465 .3762 .5786 .3631
16. Courteous .7197 .1835 .2107 .0770
17. Eye Contact .6656 .1069 .0640 .3000
18. Vocal Inflection .0182 .7757 .0295 .0928
19. Attitude .4078 .7519 .3525 .0701
20. Diction .5726 .2495 .0686 .5084
21. Vocal Pauses .0555 .0387 .0610 .5748
22. Physical Appearance .3400 .7165 .0861 .3112
23. Total Effect .4223 .6124 .6052 .0543
24. Knew Speech .0387 .7504 .4573 .0304
25. Evidence .1992 .5273 .6952 .0306

In instances where the objective of factor analysis is limited to the classification of variables according to common factors, it is not unusual to apply some type of objective criterion to be used in determining the number of significant factors that have been established by the analysis. For the purposes of the research dealing with the development of the Speech 101 rating scale, it was decided to apply a three-item criterion for the determination of the number of factors supported by the analysis.21 This criterion requires that for a factor to be established, it must contain the highest factor loadings for at least three items under investigation.22

A comparison of the results of Tables 13 and 14 shows that three factors emerge from the analysis of faculty and student use of the twenty-five items. A fourth factor was maintained by the faculty, but not by the students. It was decided as a result of the factor analysis of the twenty-five scaled items that additional research should be done with the three factors supported by both student and faculty usage. Re-examination of the cited tables shows that students and faculty maintained, consistently, the highest loading of four items on each of three factors. Table 15 shows this agreement.

21Jack O. Neuhaus and Charles Wrigley, "The Quartimax Methods: An Analytical Approach to Orthogonal Simple Structure," British Journal of Statistical Psychology, VII (1954), pp. 81-91.

22Recent developments dealing with the computerization of factor analysis, particularly the "principle axis" method, allow the researcher some greater degree of sophistication with respect to the number of items required to establish a factor.
The implementation of the ”principle axis" method of factor analysis on the CDC 3600 computer at Michigan State University allows for such SOphistica- tion in the application of the Kiel-Wrigley criterion for the determination of the number of factors to be rotated. An explanation of this criterion can be found in Factor Analysis Prggrams: FANOD 3 and FANIMp3, Technical Report 2 (Revised), Michigan State University Computer Institute for Social Science Research, September 22, 1964. 86 TABLE 15 STUDENT/FACULTY LOADING AGREEMENT Factors Item I II III Logical Reasoning x Organization x Evidence x Examples x Enthusiasm x Humor x Facial Expression x Variety x Calm 1 Preparation x Poise x Knows Speedh x Given the results shown in Table 15, it was decided that a broader experiment was needed before any firm con- clusions could be made as to what items should be included on a formalized Speech 101 rating scale. Particularly, the researchers were reluctant to commit Speech 101 to the use of a rating scale based on the evaluation of only four filmed presentations. During the Spring term of 1964, seventeen films were 87 selected to be shown on closed circuit television to the entire enrollment of Speech 101. These films had been evaluated as to overall quality at Ohio University. The quality of these films had a somewhat greater range than the four used in the research project of the previous term. The films were shown to and evaluated only by students using a rating form composed of the twelve items which are described in Table 15. Appendix J represents the rating scale used for the project. Results of the film evalua- tions were subjected to factor analysis. The criterion of at least three items per factor was used to determine the number of rotations for the analysis. The results of this analysis are given in Table 16 TABLE 16 STUDENT FACTOR MATRIX ON TWELVE ITEMS Factor Loadings Item I II III 1. Evidence .7914 .1077 .1151 2. Facial Expression .0828 .7973 .2380 3. Calm .1382 .0382 .7208 4. Legical Reasoning .7739 .1145 .2530 5. Preparation .3025 .1206 .6865 6. Examples .7826 .1774 .1389 7. Organization .6323 .1108 .3985 8. Enthusiasm .0594 .8282 .2654 Table 16 (cont'd) 88 Item I II III 9. Variety .2447 .7900 .1317 10. Knew Speech .2192 .2713 .7184 11. Humor .1361 .7030 .0403 10. Poise .1378 .3806 .6770 These results support clearly the existence of the three factors that were found in the analysis of twenty- five scales. Over the seventeen films, the three factors accounted for 63% of the variance for the twelve item scales. The citation of the percentage of accountable scale variance is to be viewed as an evaluation of the particular factor solution being imployed. It is conven- tional to label a particular factor solution meaningful if it accounts for at least 50% of the scale variance. This criterion is not to be confused with the one used to determine the number of established factors for a parti- cular>analysis. It is particularly useful in those situa- tions where the analysis is based on the reduction of the number of items representing a previously established common factor. During the summer of 1964, a review of all the ma- terials relating to the Speech 101 research on the devel- 2 3Harmon, op. cit., pp. 13-19. 89 opment of a rating Scale was made by the chairman and the research director of the course. 
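As background for the loading tables reported in this chapter, the general procedure can be outlined computationally: form the inter-item correlation matrix from the completed rating forms, extract the leading factors, and apply an orthogonal rotation so that each item loads strongly on as few factors as possible. The sketch below is a generic illustration of that pipeline using a varimax-style rotation on randomly generated ratings; it is not the FANOD and FANIM programs or the principal-axis and quartimax procedures actually run on the CDC 3600, and every name and figure in it is supplied for illustration only.

import numpy as np

def varimax(loadings, max_iter=100, tol=1e-8):
    """Orthogonal varimax rotation of an items-by-factors loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    criterion = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        # Gradient of the (raw) varimax criterion; Kaiser normalization omitted for brevity.
        grad = loadings.T @ (rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p)
        u, s, vt = np.linalg.svd(grad)
        rotation = u @ vt
        if s.sum() < criterion * (1 + tol):
            break
        criterion = s.sum()
    return loadings @ rotation

def extract_and_rotate(ratings, n_factors):
    """ratings: evaluations-by-items array of scale values (one row per completed form)."""
    corr = np.corrcoef(ratings, rowvar=False)            # inter-item correlation matrix
    roots, vectors = np.linalg.eigh(corr)                # eigenvalues in ascending order
    keep = np.argsort(roots)[::-1][:n_factors]           # retain the largest roots
    unrotated = vectors[:, keep] * np.sqrt(roots[keep])  # scale eigenvectors into loadings
    return varimax(unrotated)

# Purely hypothetical usage: 500 completed forms rating 12 items on a 1-7 scale.
rng = np.random.default_rng(0)
forms = rng.integers(1, 8, size=(500, 12)).astype(float)
print(np.round(extract_and_rotate(forms, n_factors=3), 2))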
It was decided at that time that a formal rating scale could be constructed from the evidence available, but that the issue of exactly what items needed to be stressed in the evaluation of student oral performance in Speech 101 was by no means closed. The formalization of a Speech 101 rating scale in- volved the selection of items to be contained on the scale and the selection of a format which would be called the rating Scale ppp pp. As a result of the previous research it was decided that the items selected to be used on the Speech 101 rating scale should reflect the three factors that seemed to prevail in the evaluation of filmed speeches, when that evaluation was done by students. The items: evidence, logical reasoning, and Opganization were selected to repre- sent what was labelled the "materials of development" fac- tor. It will be noted that other items, such as examples, could have been chosen to reflect the same factor; however, it was felt, at the time, that the terminology of the text- book and the course lectures should be given some conside- ration in terms of the selection of which items should be used as representative of the specific factor. The items: pye contact, enthusiasm, and facial expression were chosen to represent what was labelled the "materials of experience" factor. Finally, the items: poise, preparation, and sin- 90 cerity were chosen to represent a "personal proof" factor. The labels for these factors correSpond to major areas of discussion in the course textbook. A tenth item, £2331 effect, was added to the rating scale for three reasons: (1) a desire on the part of the course chairman to have a general item on the scale in order to allow the student evaluator some sense of an overview of the speech being evaluated; (2) a pragmatic consideration in that ten items would allow easy division in terms of an average rating on the scale as a measure of the central tendency of the indi- vidual items; and (3) the desire to include an item which might reflect that amount of variance not explained in terms of the three labelled factors. Pragmatic considerations played an important role in the determination of the format of the Speech 101 rating scale. Once the items to be included on the scale were determined, it was decided that each item could range in value from one to seven. This range is conventional for rating scales and parallels that of the semantic differen- tial. It was then decided that the items should be arranged vertically rather than in the customary horizontal pre- sentation of the semantic differential. It was felt that in order to make the pr0posed rating scale more meaningful as a critique of a particular speech in the classroom situation, that written comments by the evaluators should be encouraged. The vertical arrangement of the items on 91 the rating scale allows sufficient space for written com- ments. It was also decided that the format of the scale should be such that a cOpy of how it was used in the eval- uation of a speech could be made available to both the student speaker'and the recitation instructor. For this reason the scale was printed on NCR paper-sets. This pro- vided an original and a carbon cOpy of the assigned scale values as well as the associated written comments. Appen- dix K contains the first edition of the formalized Speech 101 rating scale. Results of the Use of the Speech 101 Rating Scale The formalized Speech 101 rating scale was first used during the Fall term of 1964. 
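The format decisions just described can be summarized, purely as an illustration, in a small data structure: ten items scored from one to seven, grouped under the three factor labels plus the general item, with room for a written comment against each item and the average of the ten values serving as the summary figure. The field names below are hypothetical.

from dataclasses import dataclass, field
from statistics import mean

# The ten items of the first edition of the formalized scale, grouped by factor label.
ITEMS = (
    "evidence", "logical_reasoning", "organization",      # materials of development
    "eye_contact", "enthusiasm", "facial_expression",     # materials of experience
    "poise", "preparation", "sincerity",                  # personal proof
    "total_effect",                                       # general item
)

@dataclass
class RatingForm:
    values: dict                                   # item name -> rating, 1 through 7
    comments: dict = field(default_factory=dict)   # item name -> written comment

    def average(self) -> float:
        """Average rating over the ten items, used as the summary of the form."""
        return mean(self.values[item] for item in ITEMS)

form = RatingForm(values={item: 5 for item in ITEMS}, comments={"evidence": "cite sources"})
print(form.average())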
The same form was used by the recitation instructors when evaluating student performance as well as by the student evaluators in the peer group, both with and without the recitation instructor's presence.

It was decided that for the period set aside for research during the course term a replication of the filmed speech evaluations should be done in order to determine if the ten-item formalized Speech 101 rating scale truly represented the same factor structure as that discovered when twelve-item scales were used. Thirty-four speeches were shown in groups of three and four per showing to the students enrolled in Speech 101 during the Fall term of 1964. These films were shown over closed circuit television with a unique grouping for each of the recitation hours of the course. The filmed speeches had been previously classified as to their general effectiveness as low, middle, or high. This classification was done previously at Ohio University. The films for groups of students were shown in random order with respect to their quality. Results of the student evaluation were subjected to factor analysis in order to determine the factor loadings compared with results obtained from research conducted during the previous year. Table 17 represents the results obtained from the analysis of the evaluation of the thirty-four filmed speeches during the Fall term of 1964.

TABLE 17
THREE-FACTOR MATRIX - RESEARCH DAY - FALL 1964

Factor Loadings
Item I II III
1. Total Effect .5262 .5788 .3687
2. Logical Reasoning .8286 .1781 .3288
3. Evidence .8688 .2227 .1089
4. Organization .6975 .1680 .5160
5. Preparation .3911 .2694 .7851
6. Poise .3137 .5053 .6131
7. Sincerity .2946 .7585 .2648
8. Facial Expression .2018 .8800 .1014
9. Enthusiasm .1698 .8921 .1458
10. Eye Contact .1045 .7273 .3749

It was discovered that the three-factor analysis shown in Table 17 accounted for 78% of the variance of the scales (Factor I: 26%; Factor II: 34%; Factor III: 18%). However, it was found that the item sincerity, which previously loaded highest on the "personal proof" factor (Factor III of Table 17), loaded highest on the "materials of experience" factor (Factor II of Table 17). It will also be noted from Table 17 that only two items, poise and preparation, had their highest loading on Factor III. Thus, using the criterion that each factor must be represented by the highest loadings of at least three items, the results of the student evaluation of the thirty-four films did not support a three-factor basis for judgment. Table 18 presents an analysis of the same data used for Table 17 but based on a two-factor solution only.

TABLE 18
TWO-FACTOR MATRIX - RESEARCH DAY - FALL 1964

Factor Loadings
Item I II
1. Total Effect .6071 .6137
2. Logical Reasoning .8748 .1899
3. Evidence .8001 .1909
4. Organization .8529 .2217
5. Preparation .7135 .3876
6. Poise .5516 .5916
7. Sincerity .3534 .7770
8. Facial Expression .1872 .8712
9. Enthusiasm .1798 .8930
10. Eye Contact .2431 .7771

The solution presented in Table 18 accounted for 73% of the variance of the scales (Factor I: 36%; Factor II: 37%) as used in the research project. It will be noted that the two items which loaded highest on the "personal proof" factor of Table 17 split with a two-factor solution (poise loading highest on the "materials of experience" factor and preparation loading highest on the "materials of development" factor).
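The two checks applied in the preceding paragraphs can be reproduced directly from the Table 17 loadings. The sketch below assumes the conventional computation for an orthogonal solution, in which a factor's share of the scale variance is its sum of squared loadings divided by the number of items; the dissertation does not spell out its exact arithmetic, so that formula is an assumption. Run on the Table 17 values it returns shares of approximately 26%, 34%, and 18% (78% in total) and shows only two items loading highest on Factor III.

import numpy as np

# Rotated loadings transcribed from Table 17 (ten items by three factors).
table_17 = np.array([
    [.5262, .5788, .3687],   # 1. total effect
    [.8286, .1781, .3288],   # 2. logical reasoning
    [.8688, .2227, .1089],   # 3. evidence
    [.6975, .1680, .5160],   # 4. organization
    [.3911, .2694, .7851],   # 5. preparation
    [.3137, .5053, .6131],   # 6. poise
    [.2946, .7585, .2648],   # 7. sincerity
    [.2018, .8800, .1014],   # 8. facial expression
    [.1698, .8921, .1458],   # 9. enthusiasm
    [.1045, .7273, .3749],   # 10. eye contact
])
n_items, n_factors = table_17.shape

# Share of scale variance per factor: sum of squared loadings over the number of items.
shares = (table_17 ** 2).sum(axis=0) / n_items
print("variance per factor (%):", np.round(shares * 100).astype(int), "total:", int(round(shares.sum() * 100)))

# Three-item criterion: a factor stands only if at least three items load highest on it.
highest = table_17.argmax(axis=1)
for factor in range(n_factors):
    count = int((highest == factor).sum())
    print(f"Factor {factor + 1}: {count} items load highest ->", "retained" if count >= 3 else "not retained")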
The fact that the student evaluation of the thirty- four speeches seemed to support only two factors led to an investigation to determine the consistency of this result when the quality of the film was held constant. Table 19 presents a factor analysis of the evaluation of those films (8) which were Judged-to be in the ”high" quality classification. 95 TABLE 19 THREE-FACTOR MATRIX FOR HIGH QUALITY FILMS Factor Loadings Item I II III 1. Total Effect .5610 .4782 .4226 2. LOgical Reasoning .8617 .1526 .1847 3. Evidence .8394 .1528 .0950 4. Organization .7623 .1461 .3654 5. Preparation .4696 .1467 .7667 6. Poise .2725 .5401 .6003 7. Sincerity .2490 .6796 .4481 8. Facial Expression .2035 .8907 .0549 9. Enthusiasm .1097 .8580 .2035 10. Eye Contact .0835 .6852 .4332 With the exception that the item total effect loaded highest with the "materials of development" factor, the analysis of the evaluation for the "high" quality films tended to support the findings of Table 17. This analysis accounted for 75% of the scale variance (Factor I: 28%; Factor II: 30%; Factor III: 17%). Again, as in the case of the analysis of the evaluations of the thirty-four speeches, a two-factor solution was obtained for the eval- uation of the "high" quality films. Table 20 presents the two-factor solution. 96 TABLE 20 TWO-FACTOR MATRIX FOR HIGH QUALITY FILMS Factor Loadings Item I II 1. Total Effect .6497 .5476 2. LOgical Reasoning .8542 .1477 3. Evidence .7991 .1226 4. Organization .8335 .2020 5. Preparation .7205 .3412 6. Poise .4497 ~6777 7. Sincerity .3601 .7662 8. Facial Expression .1092 .8187 9. Enthusiasm .1255 .8710 10. Eye Contact .2018 .7772 The two-factor solution presented in Table 20 ac- counted for 70% of the scale variance (Factor I: 34%; Factor II: 36%). The loadings of the items preparation and 22$§2 observed in the two-factor analysis of the evaluation of the thirty-four filmed speeches is maintained in Table 20. Table 21 presents a factor analysis of the evaluation of those films (18) which were Judged to be in the "middle" quality classification. 97 TABLE 21 THREE-FACTOR MATRIX FOR MIDDLE QUALITY FILMS Factor Loadings Item I II III 1. Total Effect .6613 .4910 .2251 2. LOgical Reasoning .8745 .1696 .1244 3. Evidence .8143 .1319 .0862 4. Organization .8507 .1404 .1867 5. Preparation .3880 .1717 .7100 6. Poise .5090 .2118 .6506 7. Sincerity .3333 .7917 .1106 8. Facial Expression .1081 .8096 .3394 9. Enthusiasm .1195 .8869 .1930 10. Eye Contact .1573 .4323 .7673 It is interesting to note that the results cited in Table 21 based on the evaluation of the "middle” quality films support a three-factor solution. Where it was ex- pected that the items preparation, oise, and sincerity would load highest on the "personal proof" factor; in the analysis of the "middle" quality films, the items prepara- tign, poise, and eye contact loaded highest on Factor III. In terms of the amount of variance accounted for by the three-factor solution represented by Table 21; 76% of the total scale variance is accounted for by the three-factor solution (Factor I: 35%; Factor II: 27%; Factor III: 14%). 98 Table 22 presents a factor analysis of the evaluation of those films (8) which were Judged to be in the "low" quality classification. TABLE 22 THREE-FACTOR.MATRIX FOR LOW QUALITY FILMS Factor Loadings Item I II III 1. Total Effect .4375 .5894 .3637 2. LOgical Reasoning .7880 .1279 .4224 3. Evidence ' .8721 .2686 .0800 4. Organization .6408 .1149 .5834 5. Preparation .2390 .2724 .8128 6. Poise .2539 .4285 .6733 7. 
Sincerity .2248 .7114 .3264 8. Facial Expression .1771 .8700 .1313 9. Enthusiasm .1402 .8780 .1606 10. Eye Contact .0942 .7600 .2530 It can be seen that the results cited in Table 22 are highly similar to the analysis of the evaluation of the thirty-four filmed speeches. In terms of the accounted for proportion of scale variance, a three-factor solution ac- counted for 75% (Factor I: 22%; Factor II: 33%; Factor III: 20%). It can be observed, however, that the results con- tained in Table 22 do not support an existence of three 99 factors according to the criterion of at least three items with their highest loading per factor. Table 23 presents an analysis of the results of the evaluation of the "low" quality films in terms of two factors. TABLE 23 TWO-FACTOR MATRIX FOR LOW QUALITY FILMS Factor Loadings Item I II 1. Total Effect .5433 .6100 2. Logical Reasoning .8810 .1418 3. Evidence .7451 .2201 4. Organization .8552 .1645 5. Preparation .6552 .3828 6. Poise .5201 .5832 7. Sincerity .3438 .7382 8. Facial Expression .1854 .8663 9. Enthusiasm .1720 .8812 10. Eye Contact .1930 .7828 The results of the loadings contained in Table 23 parallel closely the two-factor analysis of student eval- uation of the thirty-four filmed speeches (Table 18). Tables 24 and 25 present summaries of the results of the initial analysis of the evaluations given by students of the filmed speeches on the research day of Speech 101 100 during the Fall term of 1964. These tables show the fac- tors on which the ten items from the rating scale loaded highest and the percentage of the scale variance accounted for by the analysis. Table 24 presents a three-factor solution summary and Table 25 a two-factor solution summary. As was noted previously, Factor I was labelled "materials of development," Factor II "materials of experience" and Factor III "personal proof." TABLE 24 THREE-FACTOR SUMMARY OF RESULTS OF RESEARCH DAY FALL 1964 Highest Factor Loadings Item Quality Group High Middle Low TOTAL 1. Total Effect I I II II 2. LOgical Reasoning I I I I 3. Evidence I I I I 4. Organization I I I I 5. Preparation III III III III 6. Poise III III III III 7. Sincerity II II II II 8. Facial Expression II II II II 9. Enthusiasm II II II II 10. Eye Contact II III II II Percentage of Variance 75 76 75 78 101 TABLE 25 TWO-FACTOR SUMMARY OF RESULTS OF RESEARCH DAY FALL 1964 Highest Factor Loadings Item Quality Group High Middle Low TOTAL 1. Total Effect I I II II 2 . Logical Reasoning I I I I 3. Evidence I I I I 4. Organization I I I I 5. Preparation I I I I 6. Poise II II II II 7. Sincerity II II II II 8. Facial Expression II II II II 9. Enthusiasm II II II II 10. Eye Contact II II II II Percentage of Variance 70 '70 69 73 Tables 26 and 27 present summaries of the percentage of scale variance accounted for by the analyses. Table 26 presents a three-factor solution summary and Table 27 a two-factor solution summary. 102 TABLE 26 PERCENTAGE OF SCALE VARIANCE ACCOUNTED FOR BY THREE FACTORS Percentage Factor High Middle Low TOTAL I 28 35 22 26 II 31 27 33 34 III l7 14 20 18 TABLE 27 PERCENTAGE OF SCALE VARIANCE ACCOUNTED FOR BY TWO FACTORS Percentage Factor High Middle Low TOTAL I 34 38 ' 33 36 II 36 32 36 37 Results of all the factor analyses of the thirty-four filmed speeches shown on research day, Fall term 1964, seem to indicate the following: (1) The students' evaluations consistently grouped around two factors, "materials of develOpment" and "materials of experience." 
(2) The two-factor grouping of the student eval- uations accounted for approximately 73% of the scale variance. (3) The factor labelled "materials of experience" accounted for a greater percentage of the variance in both the two and three-factor solutions (the exception to this was the analysis of the "middle" quality films). 103 (4) The three—factor solution accounted for a slightly higher percentage of the total scale variance than a two-factor solution. (5) The factor “personal proof" contained highest loadings on the two items poise and preparation for all quality groupings of the films, but this factor did not meet the established criteria of at least three high item factor loadings. (6) The item sincerity, which previous research indicated would load highest with the "personal proof" factor, consistently loaded highest with the "materials of eXperience" factor. Interpretations of the results of student usage of the developed rating scale are subJect to certain reservations. The first such reservation is with reSpect to the influence of the course material on the discovered factor structure. It was generally the case that the research periods devoted to the development of the Speech 101 rating scale came approximately five weeks into the course. By that time the student had been exposed to four lectures, had been assigned to read ten chapters in the course textbook, and had taken the mid-term examination. An investigation of the contents of those lectures and text assignments shows that the students involved in the cited research proJects had suf- ficient Opportunity to come into contact with the factor labels "materials of development," "materials of experience," and "personal proof.” The mid-term examination covering these lectures and textbook assignments would tend to rein- force any felt need on the part of the student to use the 104 terminolOgy and syntax of the course as a basis for speech evaluation. All of this indicates that the course material may have had an effect on the way in which the specific items on the rating scale grouped around the discovered factOI‘S. A second reservation is with respect to the influence of the researcher's meaning for the reference variables on the discovered factor structure. It is true of the findings associated with the student evaluation of the thirty-four films shown on research day Fall 1964, that there was some expectation that the results could be interpreted with reSpect to "materials of development," "materials of ex- perience," and "personal proof." In a sense this expecta- tion was structured by the selection of items that appeared on the rating scale. This might tend to result in a certain amount of interdependence between the expectation of re- sults and their consequent interpretation. Guilford makes specific reference to this point in his discussion of the interpretation of factors: There may be some who prefer not to attempt to give psychological meaning, or any other’kind of meaning, to factors even when rotated. One could of course, merely designate factors by letter or by number and define each one by the fact that it characteristically shows loadings of such and such in tests of such and such. One would also probably have to specify a population with certain properties. This approach to the handling of factors seems to forgo the important possibility of relating factors in a conceptual system and relating the system to other facts and principles of science. 
A philosophy that has resistance against naming may be commendable from some points of view. The chief fear seems to be of giving a name to something that, after all is said and done, actually does not exist. There can be no doubt of the utility of discovering unities when they do exist. Neither should there be much doubt of the utility of developing new concepts which serve us with tools with which to lay hold of events and which serve as media of communication. Especially is this true when the concepts can be supported by reference to operations by which they were derived as in factor analysis. If concepts are faulty, this will be discovered in time and changes can always be made.24

24Guilford, op. cit., p. 522.

The above stated reservations with respect to uncontrolled influences, as they may or may not affect factor analytic results and their interpretation, are not viewed as a serious limitation to the use of the developed rating scale in Speech 101. This may appear to be a rather pragmatic conclusion, but in truth, there exists no standard statistical procedure for testing such effects outside of the replication of the experimental situations giving rise to the results. It is worth stating, however, that before any claim can be made about the applicability of the developed rating scale to all situations involving the evaluation of public speaking, these reservations will have to be dismissed by additional research.

In addition to the factor analysis of the evaluation of the thirty-four filmed presentations on research day during the Fall term of 1964, an effort was made to establish the reliability of student raters with respect to each of the items contained on the Speech 101 rating scale. The statistical procedure used in the determination of this reliability was Ebel's Intraclass Correlation of Reliability.25 Guilford provides the rationale for the use of this particular approach to the reliability of ratings:

There are some who prefer to estimate reliability of ratings by use of rerating data, and certainly this is a meaningful type of reliability. Except for the trouble of a replication, it is an easy procedure to employ. There are serious dangers of correlation of what should be errors, however, due to the memory of the raters. Most investigators seem to prefer the operation of correlating ratings obtained from different raters as the approach to reliability of ratings. There may be common biases among raters, but this source of error correlation is probably smaller than reratings. One has to assume that raters involved in the reliability study are interchangeable. Since raters with similar types of information are generally utilized for this purpose, this assumption is not unreasonable.26

25R. L. Ebel, "Estimation of the Reliability of Ratings," Psychometrika, XVI (1951), pp. 407-424.

26Guilford, op. cit., p. 395.

Since the material was available, a decision was made first to determine the reliability of those raters whose evaluations were used in the above cited factor analysis, and then to see the degree to which the reliability of a scale item was a function of the number of raters. In the factor analysis, evaluation by twenty raters per film was used. Table 28 presents the median reliability estimates for the ten scale items for two to ten, fifteen, and twenty raters per film. The films were shown in groupings of three and four each (with each group representing the three judged quality classifications).
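To make the intraclass approach concrete, the sketch below treats one scale item's ratings as a speeches-by-raters table and estimates, from a two-way analysis of variance, both the reliability of a single rating and the reliability of the average of k ratings. These are the standard intraclass formulas usually associated with Ebel's procedure; the data are hypothetical, and the exact computational routine used in the Speech 101 analysis is not reproduced here. The average-of-k form also shows, in miniature, why pooled ratings become more reliable as the number of raters grows.

import numpy as np

def intraclass_reliability(table):
    """table: speeches-by-raters array of ratings on a single scale item.
    Returns (reliability of one rating, reliability of the average of k ratings)."""
    n, k = table.shape
    grand = table.mean()
    ss_total = ((table - grand) ** 2).sum()
    ss_speeches = k * ((table.mean(axis=1) - grand) ** 2).sum()
    ss_raters = n * ((table.mean(axis=0) - grand) ** 2).sum()
    ss_error = ss_total - ss_speeches - ss_raters
    ms_speeches = ss_speeches / (n - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    r_single = (ms_speeches - ms_error) / (ms_speeches + (k - 1) * ms_error)
    r_average = (ms_speeches - ms_error) / ms_speeches
    return r_single, r_average

# Hypothetical ratings of four filmed speeches by five raters on one item (1-7 scale).
item_ratings = np.array([
    [2, 3, 2, 3, 2],
    [4, 4, 5, 4, 3],
    [5, 6, 5, 6, 6],
    [6, 7, 6, 7, 6],
], dtype=float)

one_rater, five_raters_pooled = intraclass_reliability(item_ratings)
print("single rating:", round(one_rater, 2), "average of five ratings:", round(five_raters_pooled, 2))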
The median was chosen as representative of the reliability. Since reliability is a correlation based on order rather than score value per se, the use of this median rather than the mean seemed justifiable.

[Table 28, Median Reliability Estimates and Their Ranges per Scale Item: for each of the ten scale items, the median reliability estimate and its range are listed for two through ten, fifteen, and twenty raters per film.]

Table 28 clearly shows a high degree of reliability among the twenty raters whose evaluations of the thirty-four filmed speeches, shown on research day of the Fall term 1964, were subjected to factor analysis. It also shows that the median reliability for a given item, when used as a basis for evaluation by students, increases as the number of raters increases. These results confirmed the finding of Price when he studied the reliability of judging public speaking by pooling or averaging a series of judgments.27

The results of the use of the Speech 101 rating scale on research day stimulated an interest in the comparison of these results with those of student usage of the same scale in the peered recitation section. It will be remembered that it was for such usage that the scale was developed. For this comparison, 347 student peer group evaluations were chosen at random. These evaluations were made on all six of the speaking assignments of the course. They were subjected to the same type of factor analysis used to determine the results of research day, Fall, 1964. Table 29 presents a three-factor analysis of peer group usage of the Speech 101 rating scale during the Fall term of 1964.

27Price, op. cit., p. 237.

TABLE 29
THREE-FACTOR MATRIX - PEER GROUPS - FALL 1964

Factor Loadings
Item I II III
1. Total Effect .4978 .6589 .3178
2. Logical Reasoning .7705 .3516 .2998
3. Evidence .8491 .1469 .3206
4. Organization .8151 .2297 .3431
5. Preparation .3553 .2762 .7918
6. Poise .1011 .4394 .7486
7. Sincerity .2156 .7224 .3287
8. Facial Expression .1686 .8511 .1099
9. Enthusiasm .2626 .8523 .0396
10. Eye Contact .1660 .7218 .2262

The three-factor solution presented by Table 29 accounted for 78% of the scale variance (Factor I: 20%; Factor II: 38%; Factor III: 20%).
The peer evaluations point to the same factor structure as that discovered for the research period. As in the case of the factor analysis of the evaluation of the filmed speeches (Table 17), the results do not support the existence of three factors of evaluation according to the criterion of at least three items with their highest loadings per factor. Table 30 presents a two-factor solution of the peer group evaluations.

TABLE 30
TWO-FACTOR MATRIX - PEER GROUPS - FALL 1964

Factor Loadings
Item I II
1. Total Effect .5615 .6720
2. Logical Reasoning .7534 .3665
3. Evidence .8298 .1635
4. Organization .8063 .2559
5. Preparation .7977 .3019
6. Poise .3580 .7617
7. Sincerity .3643 .7333
8. Facial Expression .1753 .8554
9. Enthusiasm .1943 .8553
10. Eye Contact .2575 .7294

The two-factor solution accounted for 72% of the scale variance (Factor I: 33%; Factor II: 39%). The item associations per factor were exactly those found in the two-factor solution of the evaluations of the thirty-four filmed speeches (Table 18) and accounted for approximately the same amount of scale variance, though a greater percentage of that variance was attributed to the second factor, "materials of experience."

The results of the usage of the Speech 101 rating scale in the Fall term of 1964 in both experimental and classroom situations indicated the following:

1. Students tended to make their evaluative judgments primarily on items which represented a factor labelled "materials of development" and a factor labelled "materials of experience."

2. The scale items poise and preparation could be used to support a factor labelled "personal proof" which would account for between 18% and 20% of the scale variance, but at least one more scale item would need to be added to this third factor in order to firmly establish its existence.

3. A high degree of reliability with respect to the individual items contained on the scale can be obtained by pooling and averaging student judgments.

4. Results of the student evaluations of the filmed speeches indicated that a high degree of reliability with respect to the individual items contained on the scale could be obtained by pooling or averaging four to six student judgments. This range is that represented by a student evaluation panel in the peered group recitation sections.

It was decided that a slight modification in the Speech 101 rating scale might make it possible to establish a three-factor basis of student evaluations. It will be recalled that when the ten items were selected for the formalized version of the scale, each of the then established factors had additional items which could have been so selected.

The results of the use of the rating scale during the Fall term of 1964 indicated that the item sincerity consistently did not load highest on the factor "personal proof" as the research which led to the development of the scale had indicated. For this reason, it was decided that sincerity should be changed to some other item which (1) evidence suggested might load highest on a "personal proof" factor, and (2) could be tied in with the textbook discussion of personal proof. The item attitude seemed closest to meeting these conditions. No time was available during the Winter term of 1965 to design any type of research similar to that conducted in the Fall term. The demands of the Winter term on the number of Speech 101 rating scales were such as to use up the forms representing the first edition.
It was decided that the item attitude would be substituted for the item sincerity for the second edition of the form. In addition to this substitution, a slight modification of the format of the form was made for the second printing. This change involved the bracketing of the items according to the labelled factor names ("materials of development," "materials of experience," and "personal proof"). Appendix L represents the second edition of the Speech 101 rating scale.

During the Spring term of 1965, an attempt was made to determine if the changes represented in the second edition of the Speech 101 rating scale were such as to produce a three-factor basis for student evaluation. This attempt was somewhat hindered by the lack of available time in the scheduling of activities for the peer group recitation sections. Only the Tuesday-Thursday sequenced sections had a period available for research. This prevented a large-scale research project similar to that conducted during the Fall term of 1964. A small project involving student evaluation of filmed speeches using the revised Speech 101 rating scale was attempted, and the results of that project are reported here as indicative of the on-going nature of the investigation of the use of rating scales in the evaluation of student performance.

For the project conducted during the Spring of 1965, three films were shown to approximately 130 students enrolled in Speech 101. The films were shown on closed circuit television during the recitation section meetings of the course. The films represented the three quality groups previously discussed and were selected from the sample of thirty-four filmed speeches shown on research day during the Fall term of 1964. Table 31 presents a three-factor analysis of the student usage of the revised Speech 101 rating scale for the research conducted during the Spring term of 1965.

TABLE 31
THREE-FACTOR MATRIX - RESEARCH DAY - SPRING 1965

Factor Loadings
Item I II III
1. Total Effect .4022 .6962 .4110
2. Logical Reasoning .8612 .3120 .1244
3. Evidence .7675 .3467 .2472
4. Organization .7052 .2502 .5203
5. Preparation .2844 .3955 .8081
6. Poise .2813 .5936 .6151
7. Attitude .3413 .7953 .3170
8. Facial Expression .3247 .8342 .2456
9. Enthusiasm .3149 .8312 .2182
10. Eye Contact .1877 .7926 .3360

The analysis presented as Table 31 accounts for approximately 83% of the scale variance (Factor I: 25%; Factor II: 39%; Factor III: 19%). However, it can be seen that the rationale for the substitution of the item attitude for the item sincerity, that is, to have three items load highest on the "personal proof" factor, was not upheld by the results cited in Table 31. The items preparation and poise continued to load highest on a third factor, with that factor accounting for approximately the same percentage of the scale variance as had been previously attributed to a third factor. However, a three-factor solution to the evaluative data does not seem justified in light of the criterion of at least three items per factor. Table 32 presents a two-factor solution of the student evaluations of the filmed speeches shown during the Spring term of 1965.

TABLE 32
TWO-FACTOR MATRIX - RESEARCH DAY - SPRING 1965

Factor Loadings
Item I II
1. Total Effect .4673 .7731
2. Logical Reasoning .8488 .2803
3. Evidence .7900 .3607
4. Organization .8074 .3827
5. Preparation .4734 .6514
6. Poise .4101 .7625
7. Attitude .3793 .8327
8. Facial Expression .3425 .8428
9. Enthusiasm .3258 .8303
10.
Eye Contact .2366 .8470 The two-factor solution presented as Table 32 accounted for 78% of the scale variance (Factor I: 30%; Factor II; 48%). It will be noted, however, that the item preparation, Nhich with previous two-factor'analysis loaded highest on the "materials of development" factor, tended to group with 116 those items representing the “materials of experience" factor. The fact that this loading is not clear cut allows one to suspect that the result might be a function of the particular films observed. This tentative hypothesis is also supported by the fact that all students involved in the research saw the same films Whereas the previous re- search proJects had involved students evaluating a signi- ficantly wider range of filmed speeches.28 In any respect, the results continue to support the conclusion of student evaluations based on "materials of develOpment" and "mate- rials of experience." Summary of Chapter IV Chapter IV of this study has attempted to trace the development of the Speech 101 rating scale as well as to analyze that scale as a technique for the evaluation of student performance. The Speech lOl rating scale was develOped primarily for student use in those situations where student evaluation was the sole basis of Judgment. The develOpment of the scale clearly shows that the items it contains are student generated, and the evaluation of those items were based on 28Guilford, o . cit., pp. 532-33. "Although there are no known ways of estimating sampling fluctuation in rotatee factor loadings, it is obvious that we should be concerned with the reliability of the correlation coefficients with which we start an analysis. Errors in correlation coef- ficients will be reflected in errors in factor loadings." 117 student usage only. In the actual course, the instructors use the same form when they are responsible for the evalua- tion of a speech. Instructor usage of the scale has yet to be investigated to see if similar or different results are obtained. Reaction to the scale by the instructors of the course appears mixed, though they have all appeared willing to use it, and in terms of total scale scores, seem to make a sufficient discrimination between various quali- ties of speeches. The research connected with the develOpment and use of the Speech 101 rating scale represents little more than an attempt to identify and classify the components of the factors that students in the course brought to bear in the evaluation of specific public speeches. The statistical techniques of factor analysis were used because they lend themselves to the processes of identification and classi- fication. The results cited in this chapter clearly point to the need for additional research into the area of student evaluation of speeches. It is also clear that before any meaningful research can be done based upon the developed Speech 101 rating scale, some decisions are going to have to be made with respect to the following questions: (1) Is the proper measure of the validity of a Speech 101 rating scale represented by its correspondence to the content of the course lectures and textbook? (2) If the techniques of factor analysis are to be applied to issues beyond the identifi- 118 cation and classification of the components of student Judgment, approximately how much of the variance in scale usage should be accounted for by the common factors of student evaluation? 
(3) What is the prOper relationship between the principles and concepts of public speaking and the number of factors that students should bring to bear in the evaluation of speeches? (4) Should the course make any attempt to increase the number of factors that stu- dents use in the evaluation of public speaking? (5) Is it Justifiable to base a student's speech grade entirely upon his recitation instructor's use of a rating scale devel- oped for student evaluation? Ideally, the answers to each of the above questions should be stated as hypotheses and then tested in as rigorous a manner as possible. If it is true, as some suSpect, that this type of research is not practical within the confines of a continually changing course, then the answers to the questions must be empirically derived from a set of well defined obJectives for Speech 101. The fact of the high degree of reliability established by pooling the ratings for items can be misleading. High reliability appears to be attainable independent of the quality of the items used for evaluation. In other words, when one talks about the reliability of a scale, is he talking about raters or scale items? Results of the Speech 101 research tend to indicate that reliability is a func- tion of the number of raters. There are procedures avail- 119 able which can establish scale reliability in the same manner as that used to determine the reliability of written examinations.29 These procedures might offer some valuable information relating to the issue of the validation of scale items. It should be remembered that the scope of this dis- sertation does not include terminal research in the area of the evaluation of student performance by techniques of written examinations or use of rating scales. The study, in general, describes and analyzes the methods used in Speech 101, and is Justified only on these grounds. 29Theodore Clevenger, "Retest Reliability of Judgments of General Effectiveness in Public Speaking,” Western 8 eech, XXVII (Fall, 1962), pp. 216-21. CHAPTER V SUMMARY, CONCLUSIONS AND RECOMMENDATIONS The stated purpose of this study was threefold: (l) to provide a detailed description of the develOpment of Speech 101 at Michigan State University; (2) to examine the obJectives and logistics of the evaluation of student performance in Speech 101; (3) to provide a critical analy- sis of the techniques of the evaluation of student perb formance, making use of data, statistical methodology, and results of the general research proJect conducted concur- rently with the study. The purposes of this chapter are to provide a summary of the material covered in the previous sections of this study; to make some general conclusions based on that material; and to make some Specific recommendations with reSpect to possible changes that might be made to improve upon Speech 101 as it now exists at Michigan State Univer- sity. Summary of the Material The maJor obJective of Speech 101 is to train students to be more proficient agents of change in public speaking situations. An examination of the developmental stages of the course show the definite influence of a desire to handle, efficiently and effectively, a large enrollment on the means used to obtain this obJective. 
120 121 The efficiency of the course can be seen in terms of such things as its highly structured approach to the scheduling of activities, its centralized administration and the use of modern computerized techniques of data processing for'many of the tedious routines commonly associated with large enrollments. Such efficiency explains, in part, the rapidity with which significant changes in the course structure were made. The conversion to televised lectures was greatly aided by the fact that the course was initially built around the concept of a common mass lecture for all enrolled students. The changeover to peer grouping the recitation sections was made easy because, from the out- set, the course followed a syllabus and early in its develOpment evolved a pattern for the uniform scheduling of classroom activities. The use of a computer in grade processing and test item analysis was successful, in many respects, because the responsibility for these actions rested with a well-organized administrative unit. The effectiveness of the course is difficult to summarize. Most of this difficulty stems from the fact that, while it is true that Speech 101 does process a large volume of enrollees, the effectiveness of this processing has never been subJected to direct rigorous investigation. Occasionally, during the development of the course, small experiments were conducted attempting to 122 compare the effectiveness of some proposed change in struc- ture against the status quo. When such a change appeared reasonable, and when the experimentation yielded no evi- dence to suggest that the effectiveness of the course was impaired by the change, it was made. This type of research does not represent a direct attempt to evaluate the quality of the course with respect to the training of public speakers; instead it characterizes a pragmatic approach to specific 10gistical problems of handling a large number of students. There is evidence of a general feeling, on the part of the administrative staff of Speech 101, that the course does meet its primary obJective of "assisting stu- dents to operate more effectively as agents of change in public speaking situations," but this feeling is based almost entirely upon subJective impressions and the fact that the course runs smoothly. The use of multiple-choice examinations for the evaluation of student performance in Speech 101 is based on the assumption that a knowledge of certain principles and concepts of public speaking is important to the student and cannot be learned completely through practice in.making speeches. A great deal of effort, on the part of the staff of Speech 101, has gone into the construction and valida- tion of test items. The items themselves are written by the teaching staff of the course, and each is reviewed and evaluated by members of the administrative staff both be- 123 fore and after being used in a particular test in order to determine its relationship to the concepts and principles of public speaking being stressed in Speech 101. Each examination question is subJected to computerized techniques of item analysis and then evaluated with respect to its difficulty, discriminating ability and the relevance of its options. 
It is true that the validation of test items is carried on simultaneously with the evaluation of students taking written examinations, and that this fact raises some issues with respect to the meaningfulness of a given exami- nation score; but the eventual goal of standardized exami- nations composed of acceptable items is sound pedagogy. A Speech 101 rating scale is used in the evaluation of student oral performance. The scale, and the research proJects associated with its evolution, are significant in that they attempt to meet the general issues of student evaluation of speeches within the framework of an ongoing course in public speaking. The pragmatic need to give untrained students a basis for their Judgment of the quality of speaking done by their peers resulted in the application of modern computerized techniques of identifi- cation and classification to determine the factors that students use when evaluating public speaking. The use of factor analytic techniques in this area would probably have been impractical had not a computer been available for the Speech 101 research proJects. Not that these techniques 124 are new, or unknown to the field of speech, but that the time and difficulty involved in their computation would have certainly deterred their use without the Michigan State University computer and its factor analysis programs. The factor analytic techniques employed at all stages of the development of the Speech 101 rating scale yielded immediate and meaningful results with respect to the general factors that students employed in the evaluation of public speaking. The identification and classification of student generated scale items according to the common factors that they represent ("materials of development," "materials of experience," and "personal proof") allowed the isolation and selection of variables in speech eval- uation relevant to the concepts and principles discussed in the course lectures and textbook. It is true that the scale which was evolved does not achieve all that was hoped for it, but the techniques used in its development do represent a methodology for determin- ing a sound basis for speech evaluation by untrained as well as trained raters. General Conclusions In the section labelled "Limitations of the Study" of Chapter I, it was stated that the weakness of a descriptive- analytical design could only be overcome if, at the end of the study, certain avenues of thought were cpen which pre- 125 viously were closed because of a lack of information about the nature of general problems, and of procedures by which these problems might be solved. The writer feels that this study does open up additional lines of investigation with respect to speech education and does advance procedures whereby persistent speech training problems might be solved. This study provides some information concerning the kind of relationship that needs to be established between the obJectives of a course and its logistical structure. It is apparent that the obJectives of a course should stand as the criteria by which it can be evaluated. When this type of association is not evidenced, other considerations tend to have a disprOportionate influence on the definition of a problem and its possible solutions. In the case of Speech 101, the 10gistics of handling a large number of students have been the prime considerations, serving as the motivation for many of the structural changes made in the course. 
There has been a definite lag in defining the course goals as specific behavioral obJectives relevant to making the student "a more effective agent of change in the public speaking situation." It is paradoxical that, had not the decision been made to gear the course toward the handling of an increased enrollment with a minimum number of staff members, the need to obJectify the goals of Speech 101 would not be so readi- ly apparent. Certainly the expenditure of university funds 126 for the research that pointed to the need for behavioral obJectives would have been less, had not the course proved so successful in processing a large volume of students. Yet, it is this very research that can be criticized on the grounds that it is not directly aimed at measuring the ef- fectiveness of the course. The writer hopes that this study will serve as an initiator for additional university sup— ported research into the effectiveness of Speech 101. In a sense, it was the desire to meet the problems of an increasing enrollment that compelled attention to new techniques of peer grouping, centralized grade pro- cessing, computerized test construction and evaluation, and rating scale development. Traditional methods were found either to be not relevant or too cumbersome to meet the pressing requirements of an increasing demand for service. The use of these new techniques to solve per- sistent problems in speech education has been described in detail in this study. The use of a computer and now exis- ting computer prOgrams have been illustrated and cited. This information is generalizable beyond the confines of a basic course in public speaking. It is obvious that this study does not solve any problems. Such was not its intent. The study does point to new complexities and redefines others, but solutions remain for'the theorist to devise and the scientist to test. 127 Recommendations This study examined the ongoing development of Speech 101 and its associated research proJects. This realization would seem to merit any recommendation that might be made concerning modifications aimed at improving the course. The writer would like to make five specific suggestions designed to help remedy some of the course weaknesses cited previously. These recommendations are limited to those which the writer feels could be accomplished without much difficulty within the near future. The first recommendation is that the goals of Speech 101 be expressed in terms of measurable behavioral obJec- tives. An examination should be made of the course lec- tures, syllabus, written handouts, textbook and file of acceptable examination questions with the view of listing, in terms of observable characteristics, those behavior patterns which can be Judged to be associated with effective student speaking. These characteristics can then be used as criteria for determining which concepts and principles of public speaking need to be stressed in the course lec- tures, oral assignments and critiques, and written exami- nations. The degree to which students adopt these charac- teristics can be viewed as a measure of the effectiveness 1 of Speech 101. 1Work on expressing the obJectives of Speech 101 in terms of measurable behavioral obJectives began in the Spring term of 1965 and is expected to be completed by the Fall of 1965. 
The second recommendation is that the procedures used for item analysis and grade determination be combined in such a way that a student's test score would reflect his responses only to those questions which meet the criteria of difficulty and discrimination. This policy should be followed until the course examinations have become standardized. This suggested modification seems entirely feasible, since separate computer programs now exist for both item analysis and grade processing; a schematic sketch of such a combined scoring pass is given at the end of this chapter.

The third recommendation is that more consideration be given to the writing and revising of examination questions. A series of staff meetings could be devoted to a review of the objectives of the course testing procedures, a clear explanation of the statistical criteria applied in item analysis, and discussion and illustration of good and bad test items by someone competent in the area of examination construction. Such a series should make the process of question writing more efficient.

The fourth recommendation is that more attempts be made to analyze student usage of the Speech 101 rating scale in the actual peer group situation, with a view toward integrating these evaluations into the grading procedures of the course.

The fifth recommendation is that some consideration be given to the development of a rating scale, to be used in the evaluation of student speaking, based on instructor-generated scale items. There is good evidence to suggest that teachers of public speaking are able to isolate more factors of speech evaluation than those supported by students in Speech 101 to date.2 If it can be demonstrated that faculty use additional factors of evaluation which have relevance to the principles and concepts of effective speaking, then one of the objectives of the course might be to move student evaluators in the direction of properly using an "instructor's" rating scale.

2Price, op. cit., p. 273.
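The combined pass proposed in the second recommendation can be sketched in outline. The programs actually involved were FORTRAN IV and Office of Evaluation Services routines run on University equipment; the fragment below is written in modern Python purely for illustration, and the names and thresholds in it (ItemStats, acceptable, score, the 0.30-0.90 difficulty range, the 0.20 discrimination floor) are hypothetical rather than the criteria adopted in Speech 101.

    from dataclasses import dataclass

    @dataclass
    class ItemStats:
        """Item-analysis results for one multiple-choice question (values are hypothetical)."""
        difficulty: float        # proportion of students answering the item correctly
        discrimination: float    # e.g., an upper-group minus lower-group index

    def acceptable(stats: ItemStats,
                   difficulty_range=(0.30, 0.90),
                   min_discrimination=0.20) -> bool:
        """Retain an item only if it meets both the difficulty and the discrimination criteria."""
        low, high = difficulty_range
        return low <= stats.difficulty <= high and stats.discrimination >= min_discrimination

    def score(responses: list[bool], item_stats: list[ItemStats]) -> float:
        """Score a student only on the items that survive the item analysis.

        responses[i] is True if the student answered item i correctly; the result
        is the percent correct computed over the retained items alone.
        """
        kept = [right for right, stats in zip(responses, item_stats) if acceptable(stats)]
        return 100.0 * sum(kept) / len(kept) if kept else 0.0

    # Three hypothetical items: the second is too easy and too weak a discriminator, so it is dropped.
    stats = [ItemStats(0.55, 0.35), ItemStats(0.97, 0.10), ItemStats(0.40, 0.25)]
    print(score([True, True, False], stats))   # scored on items 1 and 3 only -> 50.0

Under such a rule a student is neither rewarded nor penalized for questions that the item analysis later rejects, which is the point of combining the two procedures until the course examinations become standardized.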
BIBLIOGRAPHY

A. BOOKS

Berlo, David K. The Process of Communication. New York: Holt, Rinehart and Winston, Inc., 1960.

Bryant, Donald C., and Wallace, Karl R. Fundamentals of Public Speaking. 3rd ed. New York: Appleton-Century-Crofts, Inc., 1960.

Control Data Corporation. 3600 Computer System FORTRAN Reference Manual (Preliminary). Palo Alto, California: Control Data Corporation, 1964.

________. 3600 Computer System Reference Manual. Palo Alto, California: Control Data Corporation, 1964.

Cronbach, Lee J. Essentials of Psychological Testing. New York: Harper Bros., 1959.

Dixon, Wilfrid J., and Massey, Frank J. Introduction to Statistical Analysis. New York: McGraw-Hill Co., 1957.

Dow, Clyde (ed.). An Introduction to Graduate Study in Speech and Theatre. East Lansing: Michigan State University Press, 1961.

Gage, N. L. (ed.). Handbook of Research on Teaching. Chicago: Rand McNally and Co., 1963.

Guilford, J. P. Psychometric Methods. 2nd ed. New York: McGraw-Hill, 1954.

Hance, K. G., Ralph, D. C., and Wiksell, M. J. Principles of Speaking. Belmont, California: Wadsworth Publishing Co., 1962.

Harman, Harry H. Modern Factor Analysis. Chicago: University of Chicago Press, 1960.

Lindquist, E. F. (ed.). Educational Measurement. Washington: American Council on Education, 1951.

McBurney, James H., and Wrage, Ernest J. The Art of Good Speech. New York: Prentice-Hall, Inc., 1953.

McCracken, Daniel D., and Dorn, William S. Numerical Methods and Fortran Programming. New York: John Wiley and Sons, Inc., 1964.

McLaughlin, Kenneth F. Interpretation of Test Results. Washington: United States Department of Health, Education and Welfare, 1964.

Michigan State University. Michigan State University Catalog Issue 1960-61. (Vol. 54, No. 13, May, 1960.) East Lansing: Michigan State University Press, 1960.

________. Time Schedule for Classes: Spring 1965. (Vol. 59, No. 9, February, 1965.) East Lansing: Michigan State University Press, 1965.

Rao, C. Radhakrishna. Advanced Statistical Methods in Biometric Research. New York: John Wiley and Sons, Inc., 1952.

Rummel, J. Francis. An Introduction to Research Procedures in Education. 2nd ed. New York: Harper and Row, 1964.

Travers, Robert M. W. How to Make Achievement Tests. New York: Odyssey Press, 1950.

Wood, Dorothy Adkins. Test Construction. Columbus, Ohio: Merrill Publishing Co., 1961.

B. ARTICLES AND PERIODICALS

Becker, Samuel L. "The Rating of Speeches: Scale Independence," Speech Monographs, XXIX (March, 1962), pp. 38- .

Brooks, Keith. "Some Basic Considerations in Rating Scale Development," Central States Speech Journal, IX (February, 1957), pp. 27-31.

________. "The Construction and Testing of a Forced Choice Scale for Measuring Speaking Achievement," Speech Monographs, XXIV (March, 1957), pp. 65-73.

Bryan, Alice I., and Wilkie, Walter H. "A Technique for Rating Public Speeches," Journal of Consulting Psychology, V (March-April, 1941), pp. 80-90.

Carter, Elton S., and Fife, Iline. "The Critical Approach," in An Introduction to Graduate Study in Speech and Theatre (East Lansing: Michigan State University Press, 1961), Chapter 5, pp. 81-103.

Clevenger, Theodore. "Retest Reliability of Judgments of General Effectiveness in Public Speaking," Western Speech, XXVIII (Fall, 1962), pp. 216-21.

Dahle, Thomas L., and Monroe, Alan H. "The Empirical Approach," in An Introduction to Graduate Study in Speech and Theatre (East Lansing: Michigan State University Press, 1961), Chapter 9, pp. 173-200.

Davis, Frederick E. "Item Selection Techniques," in Educational Measurement (Washington: American Council on Education, 1951), Chapter 9, pp. 266-328.

Ebel, Robert L. "Estimation of the Reliability of Ratings," Psychometrika, XVI (1951), pp. 407-424.

Fotheringham, Wallace C. "A Technique for Measuring Speech Effectiveness in Public Speaking Classes," Speech Monographs, XXIII (March, 1956), pp. 31-37.

Johnson, A. P. "Notes on a Suggested Index of Item Validity," Journal of Educational Psychology, XLIII (1951), pp. 499-504.

Knower, Franklin H. "A Suggestive Study of Public Speaking Rating Scale Values," Quarterly Journal of Speech, XV (February, 1929), pp. 30-41.

Miller, Gerald R. "Agreement and the Grounds for It: Persistent Problems in Speech Rating," The Speech Teacher, XIII (November, 1964), pp. 257-61.

Monroe, A. H., Remmers, H. H., and Lyle, E. V. Measuring the Effectiveness of Public Speaking in a Beginning Course ("Studies in Higher Education," No. 29; Bulletin of Purdue University, XXXV, No. 1). Lafayette, Indiana: Purdue University Press, 1936.

Neuhaus, Jack O., and Wrigley, Charles. "The Quartimax Method: An Analytical Approach to Orthogonal Simple Structure," British Journal of Statistical Psychology, VII (1954), pp. 81-91.

Norvelle, Lee. "Development and Application of a Method for Measuring the Effectiveness of Instruction in a Basic Speech Course," Speech Monographs, I (September, 1934), pp. 41-63.

Stevens, Wilmer E. "A Rating Scale for Public Speakers," Quarterly Journal of Speech, XIV (April, 1928), pp. 223-232.
"An Experimental Study of the Accuracy of Typical Speech Rating Techniques," Speech Monographs, XI (1944). Pp. 67-79- . "Is There a Yardstick for Measuring Speaking Skill?” QuarterlyJournal of Speech, XXIX (February, 1943)! ppo 87’910 C. REPORTS Michigan State University Computer Institute for Social Science Research. Factor'Analysis Programs: FANOD 3 and FANIM 3. Technical Report 2 (Revised). East Lansing: Computer Institute for Social Science Research, September 22, 1964. D. UNPUBLISHED MATERIAL Johnson, F. Craig, and Klare, George R. "Procedures for Item Writing." Contained in "Speech 101 Instructor's Manual." 2nd ed. Michigan State University, Depart- ment of Speech, 1963. (Dittoed.) Kinstle, Robert. "Analysis of Agreement Among Student- Assigned Evaluative Rankings: ProJect 002.) Michigan State University, Department of Speech, May 1963. (Dittoed.) "Correlation Analysis of Speech Grades Assigned By Instructors and Peer Evaluators." Michigan State University, Department of Speech, June 1, 1963. (Dittoed. ) . "Student Attitude as a Function of the Mode of Presentation in a Lecture Segment of a Course in Basic Public Speaking." Michigan State University, Depart- ment of Speech, April 1963. (Dittoed.) Lashbrook, William B. "Ebel's Reliability Correlation Coefficient: FORTRAN IV Computer Program." Michigan State University, Department of Speech, March 1965. (Punched Cards.) . "Final Grade Program for Speech 101: FORTRAN IV Computer Program." Michigan State University, Depart- ment of Speech, September 1964. (Punched Cards.) "Mid~term.Grade Estimations: FORTRAN IV Com- puter Program." Michigan State University, Depart- ment of Speech, August 1964. (Punched Cards.) 134 . "Speech 101 Experiment: Project 1." Michigan State University, Department of Speech, April 1962. (Dittoed.) Office of Evaluation Services. "Interpretation of Index of Discrimination." Michigan State University, Office of Evaluation Services, 1965. (MimeOgraphed.) . "Program F0 303 for IBM 1401 Digital Computer." Michigan State University, Office of Evaluation Ser- vices, 1964. (Magnetic Tape.) Price, William K. "The University of Wisconsin Speech Attainment Test." Unpublished Ph.D. dissertation, University of Wisconsin, 1965. Speech 101 Administrative Staff. "Speech 101 Instructor's Manual." Michigan State University, Department of Speech, 1962. (MimeOgraphed.) "Syllabus for Speech 101: Public Speaking." Michigan State University, Department of Speech, January 1, 1965. (MimeOgraphed. Speech 101 Committee. "Committee Report on Speech 101: Public Speaking." Michigan State University, Depart- ment of Speech, May 18, 1960. (Dittoed.) Speech 201 Committee. "Committee Report on Speech 201: Public Speaking.” Michigan State University, Depart- ment of Speech, January 14, 1960. (Dittoed.) E. OTHER SOURCES Barson, John. "Phase I: Interview with Dr. David Ralph, Speech 101." March 13, 1964. (Varifaxed.) Ralph, David C. "Memo to Dr. Dietrich, Chairman, Michigan State University, Department of Speech." 1962. (Typewritten.) APPENDIX A SPEECH 101 COURSE SYLLABUS Your Your Your Your qichigan State University Department of Speech Syllabus for Speech 101 Public Speaking Lecture: Section Recitation: Room Seat Section Number Recitation Instructbr: Speaking Number: Room A-2 INSTRUCTIONS AND INFORMATION This Syllabus for Public Speaking 101 has been prepared for you in order that you may learn at the outset what you need to know about the Operation of the course. Please read pp care- fully and ipnediately. 
IT IS ASSUMED THAT YOU HAVE READ AND UNDERSTAND THE MATERIAL IN THIS SYLLABUS.

The Distinction Between Your Recitation Instructor and the Lecturer:
The Lecturer and course chairman for Speech 101 is Dr. David Ralph. The name of your particular recitation instructor will depend upon the meeting time of your recitation section. Occasionally during the term you will be asked to name your Speech 101 instructor (at examination time, for example). Your response should be the name of your recitation instructor.
Your Recitation Instructor: __________

The General Goal of Public Speaking 101:
To assist students, through knowledge of and experience in the principles and methods of speaking, to operate more effectively as agents of change in speaking situations.

Specific Goals of Public Speaking 101:
a. To help you understand and make effective use of the materials of speaking--materials of development, personal proof, and materials of experience.
b. To help you put into practice the principles of good speaking--discovering or limiting the topic; adapting to the audience; organizing and outlining the speech; developing and using language for speaking; practicing and presenting the speech.
c. To help you feel more secure in the speaking situation by assisting you in a personal adjustment to your role as a speaker.
d. To help you understand and accept the responsibility of the speaker to society.
e. To help you understand the role of speaking in our society.
f. To help you develop the ability to analyze, criticize, and pass judgment on the speaking of others.

Teaching Methods of Public Speaking 101:
a. Study of the principles of speaking through careful reading of the text.
b. Presentation of additional information through lectures.
c. Preparation of written assignments to aid you in increasing your ability to select and adapt topics, discover and interpret evidence, use reasoning, organize and outline speeches, adapt to your audience and speaking occasion, and employ effective language.
d. Investigation of specific subjects of value and interest to you and your classmates for development into worthwhile speeches.
e. Preparation and delivery of various types of speeches in which you demonstrate your grasp of the principles of speaking.
f. Criticism and evaluation of your speeches by section instructors and your classmates.
g. Experience in evaluation and criticism of the speaking of others.
h. Examinations on principles of speaking.

Organization of the Course:
Each student is required to enroll in and attend one of two lecture section meetings held at 10:00 a.m. and 2:00 p.m. on each Monday of the term. Each student is also required to enroll in and attend a recitation section. The recitation sections are scheduled so as not to conflict with an available lecture period. All recitation sections follow either a Monday-Wednesday-Friday or a Tuesday-Thursday meeting pattern. (Students in Case-Wonders-Wilson sections follow a special lecture-recitation pattern. See the time schedule of courses for the particular term in which you are enrolled.)

Attendance:
The official University policy with respect to absences is that "the student is expected to attend all class periods." This policy is strictly enforced by the staff of Speech 101. Any absence, no matter what the cause, will, of course, work against you. If you are absent from your recitation section for an acceptable reason, you may be allowed to make up work you have missed.
The decision as to what constitutes an "acceptable reason" for an absence is left to the judgment of your recitation instructor. THERE ARE NO EXCUSED ABSENCES IN SPEECH 101; there are only accepted reasons for allowing you to make up work you have missed. With respect to absences because of illness the policy is rigid. Illness will constitute an acceptable reason for allowing you to make up work you have missed ONLY if you present to your recitation instructor a written note from the STUDENT HEALTH CENTER. If you are absent and do not have an acceptable reason, you will receive an "F" grade for all work missed. It is obviously impossible to make up work missed at the lecture sessions. QUESTIONS WITH RESPECT TO ATTENDANCE IN BOTH LECTURE AND RECITATION SHOULD BE DIRECTED TO YOUR RECITATION INSTRUCTOR.

Work Schedule:
All assignments--reading, oral, written--are listed under the appropriate topic. In order to keep up with the work of the course, it will be necessary for you to study these assignments in advance of the time when the topic is under consideration. You will want to read ahead in your textbook and work ahead on oral and written assignments.

Textbook:
The textbook for Speech 101 is PRINCIPLES OF SPEAKING, by Kenneth G. Hance, David C. Ralph, and Milton J. Wiksell, published in 1962 by Wadsworth. You are requested to purchase a copy of the text. The textbook provides the major statements of theory in the course and is to be thoroughly mastered.

Lectures:
While the textbook presents the basic theory of Speech 101, the lecturer will present material which is both supplementary and complementary to that suggested by Hance, Ralph, and Wiksell. The lectures will often present a different approach to many of the problems of public speaking. As a student of Speech 101 you are held responsible for the materials presented by the textbook and by your lecturer. Because of the large lecture enrollments, Speech 101 lectures are presented via closed-circuit television in smaller viewing rooms.

Speeches:
a. Philosophy - This course is based upon the philosophy that public speaking includes not only "stand up" speaking with a formal audience but remarks in reply to speeches of others, committee reports, short statements, and all the many informal public speaking situations that daily confront us. Some opportunity will be given to you, therefore, to speak informally as well as formally in the classroom. Every student should take the utmost advantage of all the opportunities to speak which Speech 101 will offer.

b. Choice of Subjects - At times your syllabus will limit your choice of subjects for a speech; at other times the choice will be yours. In every case you should treat your subject so that it is worthy of your audience's attention. A SIMPLE RE-HASH OF A SINGLE MAGAZINE ARTICLE IS NOT ACCEPTABLE, NOR IS AN OLD SPEECH FROM YOUR HIGH SCHOOL DAYS.

c. Mode of Delivery - Most of the speaking situations in which you will find yourself throughout your life, when you have been given some time to prepare, will demand an EXTEMPORANEOUS mode of delivery. Occasionally you will want to read from a manuscript, and at some point in your career you may even memorize a speech or two. Occasions which do not permit preparation will force you to speak impromptu. But, when you are given time to prepare, you will use the extemporaneous mode most often, and EVERY SPEECH ASSIGNMENT IN THIS SYLLABUS carries with it the requirement that you speak extemporaneously.
(We use the term "extemporaneous" to mean that you will select or limit your topic, do research to equip yourself with the necessary knowledge, carefully outline and organize your thoughts, and memorize the pattern of thought, but select the wording of the ideas at the moment you face your audience.) The above statement should constitute a sufficient warning to those students who feel they must read or memorize their speeches. At no time will the requirements outlined in this syllabus be satisfied by either of these two modes of delivery.

d. Evaluation - One of the most important teaching devices in any public speaking course is the experience of listening to the speeches of others, evaluating them, hearing the instructor's evaluation, and then profiting from what you have learned. This is one of the major reasons for the rigid requirement of attendance in Speech 101. Your own speeches, too, will be evaluated, orally and in writing, by your recitation instructor. This is your opportunity to receive expert advice concerning your speaking at a relatively small cost. Learn everything you can from your instructor, the key figure in this course.

Time Limits:
You will note that each speech assignment carries with it an established time limit. Although these limits may be increased or decreased by your instructor, depending upon the enrollment in your particular recitation section, once they are definitely set they must be rigidly adhered to. Speaking overtime steals time from another student; speaking undertime cheats yourself.

Your Responsibility as a Speaker:
In the time schedule portion of this syllabus you will find a blank space in which you should indicate the dates on which you will speak. As soon as your recitation instructor has set up his schedule of speaking for the term, write your speaking dates in the appropriate blanks for all six of your speaking appearances. To expedite scheduling, your instructor will assign a recitation number to you which will be yours throughout the term. He will indicate those students who are to speak on a given day by number. You and you alone are responsible for seeing to it that you are properly assigned and for being present and prepared to speak at the proper time. For classes with maximum or near-maximum enrollments no time is available for make-up speeches. Unless you can satisfy your instructor with an "acceptable" reason for allowing you to make up work missed, your grade for that work will be 0. If your reason is "acceptable," you will simply miss the speech and no grade will be recorded. (Note that this applies only in those classes where it is impossible to make up work missed.) If the instructor and the class members permit, a special make-up period may be arranged for those who have missed a speech date for reasons which are "acceptable" to the instructor. Generally speaking, only the hospital-confined illness of yourself or a close relative will constitute an "acceptable" reason for allowing you to make up work missed.

Written Assignments:
Written assignments are an integral part of the course. They should be the best work of which you are capable and must be submitted when due. Late papers will be penalized and may be refused by your instructor.

Additional Assignments:
Additional assignments, reading, oral, or written, may be made at the discretion of your instructor.

Examinations:
There will be two major examinations in Speech 101:
The mid-term examination is scheduled for the fifth lecture period of the term and will be taken at your particular lecture meeting place. The schedule Of the final examination may be found in the TIME SCHEDULE FOR CLASSES. You will take the Speech 101 final examination ACCORDING TO THE MEETING TIME AND PLACE OF YOUR LECTURE. In addition to the mid-term and final examination, you may be given unscheduled quizzes. These QUIZZES will be given during lecture periods. The mid-term examination will cover text and lecture assign- ments through topic III. The final examination covers the entire course, with emphasis on tOpics IV through VI. Notebooks: You are requested to maintain a standard size notebook in which you are to keep the following material: a. This SYLLABUS with notes as to the dates on which you are to speak. b. Lecture notes. These notes will be more useful to you if you take them in outline form and then type them. c. Any notes you take while reading the textbook or other material. d. Speech outlines which have been graded and returned to you. e. Your instructor's evaluation of your speaking. f. Your evaluations of your own and your classmates' speaking. g. Your classmates' evaluation Of your speaking. h. Your written assignments which you have and those which have been graded and returned to you. YOUR INSTRUCTOR MAY ASK YOU TO LAND IN YOUR NOTEBOOK AT ANY TIII'IE DURING THE TERI‘II . Conferences: Your instructor is available by appointment to aid in the solution of any problems which may arise. In addition, most instructors are available for a few minutes before and after the class hour. If you have difficulties, your instructor is available and willing. 16. 170 A-9 Grades: Speeches, including outlines and other written requirements associated with the preparation and delivery of speeches, will count approximately sixty percent of your total grade. Examinations, other written assignments, attendance, and your general classroom attitude will count approximately forty percent. You must achieve a passing grade in both the speech work and examinations in order to pass the course. Your recitation instructor may penalize you for failure—to submot any required work. You will note that as the term progresses you will receive number—scores rather than letter—grades for the completion Of your assignments. This scoring system makes it difficult for your recitation instructor to give you a specific letter grade at any given moment. Your final grade in Speech 101 will be determined on the basis of the cumulative number of points you receive for all assignments and examinations, and will not be determined until all information is available (this means until after your instructor has received your score on the final examination). At no time in the course should your recitation instructor be asked to commit himself to a letter grade based on incomplete information. Students are warned not to make the transposition of number-score to letter-grade themselves since such action would be little better than a guess and could lead to much disappointment. eech Proficiency Evaluation for Students Desiring a condary School Teaching Certificate: t Sp Se Each student seeking certification for teaching in a secon— dary school will be required to present evidence of his speech proficiency. 
"Proficiency” may be defined as: (1) creative and coherent develOpment of thought (analysis, selection, and organization of speech materials); (2) oral language skills (pronunciation, grammar, style, physical activity, vocal intelligibility and variability, self- assurance); and (3) general effectiveness. General Procedures for Speech Certification: 1. With the adviser's assistance the student will select and enroll in a speech course (usually Speech 101,108, or 401 or when appropriate, 116, 243, 260, 305, or 309). 2. At the beginning of the term, the course instructor will try to identify those students who desire Speech certification. If the student does not notify the instructor within two weeks after the beginning of the quarter, certification cannot be granted in the course. 3. Before the final examination period, the instructor will A-lO submit a rating card for each candidate to the A11- University Speech Evaluation Committee showing whether or not the student has demonstrated speech proficiency. u. If certification is recommended, the student becomes eligible to student teach. 5. If the recommendation is that certification be withheld, the Secretary of the All-University Speech Evaluation Committee will propose procedures to make up the defi- ciency. This recommendation may include additional course work or consultation with the University Speech and Hearing Clinic. O. The Speech Evaluation rating will be not necessarily related to the student's grade in the course. It is possible for a student to receive a high grade in the course and not be certified. Conversely, it is possible for a student to receive a low grade in the course and be certified. 7. A report of each case will be made by the All-University Speech Evaluation Committee to the College of Education, with copies to the student and his adviser. Advisers are asked to urge the student to follow the recommendations of the Committee at the earliest possible date. 8. The student should fulfill the requirement as early as possible in his academic career. The requirement must be filled prior to his student teaching. 9. Transfer students and students seeking secondary certi- fication after graduation will be held to this requirement. The following course outline is divided into six topics, each topic representing a major content area in public speaking. The text chapters indicated should be read in ADVANCE of preparing the oral assignments. chedule Of Lecture TOpics: l. ”Caterialsof‘DevelOpment“ 2. “Arresting and Holding the Audience's Attention” 3. “Motivation and Motive Appeals“ 4. "Organizing the Speech" 5. Mid—term Examination 0\ 0 “Evaluation in Speaking“ E"Sugg'gestion“ ;The Domain of Public “Ethics and the Speake 1 “Speech and Society: Speaking“ '8 Responsibility“ An Overview” A-lZ Topic I THE MKTERIALS OF SPEAKING The Speaker's Personal Proof The Materials of Development* iaterials common to all speaking avidence Reasoning Materials of Experience* Common forms Motive appeals Attention Suggestion Pd 3 Assignments: Reading: Chapter 3, ='The Speaker as a Person“ Chapter 4, ‘Materials of Development“ Chapter 5, “Materials of Experience“ Oral: Each student will prepare and deliver a four- minute speech offering direct support for a single point. The first thing for you to do is to decide definately on the point you want to prove or explain. Synthesize your idea to a single decla- rative sentence. (Purpose sentence.) 
State it simply; for example, "minor league baseball is going out of business." After stating your point, stay with it--try not to go off on a tangent. Now gather and organize supporting evidence and round out the development of your point in the manner best suited to your purpose. In summary, what you are to do is to state your point; then you should develop it with "fact" and "opinion" evidence--such as examples, narratives, statistics, quotations, etc. In your conclusion you should restate the original point. This speech is a simple three-point process: (1) you state your point (purpose sentence); (2) you support and clarify your point with evidence; and (3) you restate the point and conclude. Be careful in the selection of your topic; make sure it is a SINGLE POINT, worthy of talking about, and capable of expansion and clarification through the use of evidence.

Time Limits: 4 minutes per speech, 3 minutes per evaluation.

Purpose: Experience in using and evaluating evidence in a speech; experience in analyzing a topic; experience before your classroom audience.

Written: (1) Each student will submit to his instructor at the time he is scheduled to speak (a) an outline of his speech, carefully following the instructions in Chapter 2 and the sample outline distributed earlier; (b) a list of the sources of his evidence (bibliography--see page 155 of the text for instructions); (c) identification of the types of evidence he has used, according to the information provided in Chapter 4 (these identifications should be made in the outline at the points where the evidence occurs); (d) identification of the types of reasoning used, following the instructions in (c) above. On your outline identify the types of evidence used, according to the information in Chapter 4. Chapters 8, 9, and 10 may prove helpful in preparing this assignment.

(2) At the first recitation meeting after the completion of Topic I, each student will submit to his recitation instructor a short paper (not to exceed 250 words) on what he observed among the speeches presented as a part of this topic. A mere listing does not meet this assignment. Use the criteria in Chapter 2, as well as those things mentioned by the recitation instructor, for evaluation.

Time Schedule: My speech is to be presented on: __________

ADAPTING TO THE OCCASION AND THE AUDIENCE

The Setting of the Speech
The Listeners
    Analyzing the Listeners
    Types of Audiences
    Adapting to the Listeners
(Materials of Personal Proof and Materials of Experience will be emphasized in this speech.)

Assignments:

Reading: Chapter 6, "Understanding and Adapting to the Occasion"; Chapter 7, "Understanding and Adapting to the Audience"; Chapter 8, "Discussion or Conference"

Oral: Each student will choose a subject in which he strongly believes. He must consider the attitudes of his listeners toward his belief, as well as the problems contained in the classroom setting of his speech. He is to see how many attention-arresting devices he can work into his speech, from beginning to end, yet he must not lose sight of the message of the speech. He will employ motive appeals, along with his reasoning and evidence, in an effort to convince his audience.

Time Limits: 4 minutes per speech, 3 minutes per evaluation.
Purpose: Experience in analyzing and adapting to an audience and an occasion; experience in arresting and holding the attention of a group of listeners; experience in the use of motive appeals; experience in adapting logical materials to an audience.

Written: 1. Each student will submit to his recitation instructor on the day he speaks an outline of his speech, a list of his attention-arresting devices, and a list of the motive appeals he intends to employ. His outline will be based upon the instructions in Chapter 2 and the sample outline given him. However, he should begin reading Chapters 8, 9, and 10 in order to provide himself with knowledge by which to improve the composition of his speech. (Note: no more one-point speeches!)

2. At the first recitation meeting after the completion of Topic II each student will submit to his recitation instructor a paper in which he lists each member of his class audience and makes a short statement about each member. The purpose is to detail what he knows about the composition of his audience. Use the materials in Chapters 5, 6, and 7 as criteria for these evaluations.

Time Schedule: My speech is being presented on: __________

TOPIC IV: PRESENTING THE SPEECH

Assignments:

Reading: Chapter 11, "Style in Speaking"; Chapter 12, "Delivery of Speaking"

Oral: Each student will carefully choose and limit a topic, according to the principles and instructions in Chapter 8. He will collect his materials, recording them according to the instructions in Chapter 9. Then, employing the inductive speaking plan described in pages 238-239 of his text (with such modifications as the student and his instructor may agree upon), he will outline and organize his speech for presentation. The principles of outlining in Chapter 10 must be adhered to, insofar as this is possible in the inductive plan. Additional instructions may be given the student by his section instructor or the course lecturer. The instructor will carefully evaluate the student's choice and limitation of subject, his choice and development of the inductive speaking plan, and the manner in which the student presents his speech.

Time Limits: 5 minutes per speech, 3 minutes per evaluation.

Purpose: Experience in taking the materials of speaking and putting them together in a pattern which will produce an acceptable public speech; experience in considering the language necessary to "put across" a speech employing the inductive pattern; experience in utilizing the principles of effective delivery in speaking.

Written: 1. Each student will submit to his recitation instructor on the day he speaks a full outline of his speech, prepared according to the instructions in Chapter 10, using the assigned speech plan or a variation of it. He must use complete sentences and include an introduction and conclusion in his outline. At the top of the outline, immediately below the title, he should indicate any special variation of the inductive speaking plan he intends to use. He should also submit at that time a set of cards containing the materials of development he is using.

2. At the first recitation meeting following Lecture 9, "Ethics and the Speaker's Responsibility," each student will submit to his recitation instructor a short paper (not to exceed 500 words) reacting to the lecturer's point of view on ethics in speech-making. The student may agree or disagree with the lecturer's position.
Time Schedule: My speech is to be presented on: __________

TOPIC V: SPEAKING AND INFORMING

Assignments:

Reading: Chapter 13, "Speaking to Inform"; Chapter 16, "Special Types of Speaking"; Chapter 17, "Audio-Visual Aids in Speaking"

Oral: Each student is to report a process--how something is made, how something operates, how something is marketed, how a product is used, how an idea has developed, etc. In general, it is desired that the student take a fairly elaborate idea and reduce it to a short speech which can be understood by an audience which is not experienced in the matter under discussion. So far as it is possible, he is to reduce the process to a series of steps, employing one of the speech plans discussed in Chapter 13, organizing and outlining the speech according to the principles and methods he has studied in this course. Each of the main points is to be amplified with specific, concrete materials. The report must be interesting as well as informative. To assist in accomplishing these goals, the student must make use of visual or auditory aids. (See Chapter 17.) A complete reliance upon the blackboard will not constitute an adequate use of visual aids.

Time Limits: 6 minutes per speech; 2 minutes per evaluation.

Purpose: Experience in organizing, outlining, and presenting an informative speech with the use of audio-visual aids.

Written: 1. Each student will submit to his recitation instructor on the day he speaks a full outline of his speech, using one of the speech plans developed in Chapter 13 or a variation of one of these. In addition, he will submit a list of the visual or auditory aids he intends to employ.

2. At the first recitation meeting after the completion of Topic V, each student will submit to his recitation instructor a short paper (not to exceed 500 words) analyzing the delivery of a speech which he has heard in person outside of public speaking class or via television.

Time Schedule: My speech is to be presented on: __________

TOPIC VI: SPEAKING AND ADVOCATING

Assignments:

Reading: Chapter 14, "Speaking to Advocate"; Chapter 15, "Speaking to Entertain"

Oral: Each student will prepare a speech of advocacy in support of or against a current policy of the national, state, or local government, or a principle, custom, or tradition of our society. The student must make an honest effort to analyze his subject, his audience members, the occasion, and his own prejudice in order to determine the relative amounts of the kinds of materials of speaking he wants to bring to bear on his speech. He should review the entire textbook, selecting and adapting those ideas which he believes will best aid him in this task. Materials of development, personal proof, and materials of experience will all form a necessary part of this speech. The speaker should have a specific reaction in mind which he wishes his audience to make to his speech. Depending upon his analysis of the situation, however, he may be more or less direct in his efforts to secure this reaction. One of the speech plans discussed in Chapter 14 will be selected by the student for his use.

Time Limits: 7 minutes per speech, 2 minutes per evaluation.
Purpose: Experience in the complete preparation and presentation of a speech of advocacy, including analysis of the audience, occasion, subject, and speaker; selection of the appropriate materials of speaking; organization of the speech in terms of the plan best suited to the situation (including the possibility of indirect approaches to the subject); presentation of the speech.

Written: Each student will submit to his recitation instructor on the day he speaks a full outline of his speech, prepared according to the instructions in Chapter 10, using one of the speech plans developed in Chapter 14 or a variation of one of these. In addition, he will submit a list of the materials of development which he intends to employ, along with a statement to his instructor of the rationale upon which he is operating in the FORMULATION AND PRESENTATION OF HIS SPEECH.

Time Schedule: My speech is to be presented on: __________

APPENDIX B

SAMPLE CLASS SCHEDULES

MWF Pattern, Peer Grouping, Group A
Schedule for Speech 101, Spring 1965

Date   Period   Activity   Speakers   Evaluators   Chairman   Timekeeper

*Note that Topic VI comes before Topic V for this group.

Mid-Term Examination:
Section 901: Mon., May 3, 10-11 a.m.
Section 902: Mon., May 3, 2-3 p.m.
Section 903: Mon., May 3, 1:55-2:45 p.m.

Final Examination:
Section 901: Sat., June 12, 8-10 a.m.
Section 902: Tue., June 8, 3:45-5:45 p.m.
Section 903: Tue., June 8, 3:45-5:45 p.m.

MWF Pattern, Peer Grouping, Group B
Schedule for Speech 101, Spring 1965

Date   Period   Activity   Speakers   Evaluators   Chairman   Timekeeper

Mid-Term Examination:
Section 901: Mon., May 3, 10-11 a.m.
Section 902: Mon., May 3, 2-3 p.m.
Section 903: Mon., May 3, 1:55-2:45 p.m.

Final Examination:
Section 901: Sat., June 12, 8-10 a.m.
Section 902: Tue., June 8, 3:45-5:45 p.m.
Section 903: Tue., June 8, 3:45-5:45 p.m.

APPENDIX C

STUDENT INSTRUCTIONS FOR PEER GROUP OPERATION

INSTRUCTIONS FOR THE ADMINISTRATION OF YOUR COURSE

When you are the Chairman:
a. List the speaking order on the board by name and number.
b. Instruct the timekeeper as to time limits.
c. Check the roll with the form provided.
d. Provide the assigned evaluators with their critique forms and carbons.
e. Collect the critiques and retain the originals, giving the carbons to the speakers.
f. Collect the outlines, evidence cards, etc., that are due.
g. Return graded outlines from the previous day's speaking.
h. Collect and group all critique sheets for each student with the speaker's outline, in an orderly manner, to be turned in to the instructor at the conclusion of the period.
i. Designate, from the assigned evaluators, one person to make an oral evaluation of each speech when the instructor is not present.
j. Be responsible for beginning the class on time and completing all assigned work for the day. (The instructor will deliver all materials to the Chairman at the beginning of the period.)

When you are the Timekeeper:
a. Enforce the time limits for the speaker and oral evaluator as prescribed by each assignment.

When you are the Speaker:
a. Be responsible for fulfilling the assignment and making a worthwhile contribution to your audience on the day you are assigned to speak.

When you are the Evaluator:
Each day 4-6 evaluators will be assigned to evaluate speeches when the instructor is not present. (When he is present, he will be the evaluator.) Each evaluator will make a written critique for each student using the form handed to him at the beginning of the period. The evaluator will make out each form in duplicate.
The original will be given to the student chairman, who will pass it along to the instructor, along with the outlines from each speaker. One evaluator will make an oral evaluation of each student speech, assignments to be made by the student chairman.

METHOD OF EVALUATION: You are asked to make written comments on each speaker, indicating the areas where you feel the speaker was strong and the areas where he should improve. You will be provided with a critique pad for this purpose. After you have completed writing your comments, grade the speaker from 0 to 7, with 7 being a high grade. Put the grade you give the speaker on the original copy of the criticism sheets only, and sign this copy. At the end of the class meeting the chairman will collect all criticism forms and give the carbon copies to the speakers.

APPENDIX D

SAMPLE SPEECH PLAN

RED CHINA IN THE U.N.

Introduction
A. "If the United Nations is to be a true amity of nations, it cannot close its doors to a quarter of the inhabitants of the globe" (President Abboud of Sudan).
B. Membership in the United Nations is intended to be universal.
   1. Its strength resides in its representation.
   2. Its influence was to reflect its catholicity.
   3. The value of the United Nations lies in the meeting of different ethnic groups and forum discussions by potential enemies and active opponents.

Purpose Sentence: The United States should support the seating of Communist China in the Security Council of the United Nations.

Body
I. Communist China would play a significant role in the United Nations.
   A. Until Communist China is admitted to the United Nations, there can be no realistic discussion of vital world issues.
      1. Red China is directly involved in such issues as:
         a. The unification of Korea and Viet Nam.
         b. The Laotian problem.
         c. The problem of the Taiwan Straits.
      2. Indispensable Communist Chinese representatives were invited to the Geneva Conference on the Laotian problem.
         a. This bypass weakened the prestige and authority of the United Nations.
         b. If Communist China is opposed in the U.N., her participation in other conferences should be opposed.
   B. Communist China should not be denounced for breaking the code and rules of a club she does not belong to.
      1. U.N. members are subject to accepted constitutional, communal, and social discipline.
      2. U.N. membership incurs responsibilities and duties.
      3. Chou En-lai tempered himself to suit the Bandung Conference.
II. Public opinion favors the admission of Communist China.
   A. Public opinion in 21 countries, 75% American allies, favored the admission of Communist China. (Appleton, Sheldon, "Red China and the United Nations," Current History, September, 1961, pp. 141-145.)
In 1961 the United States could not find a country willing to lead the fight to bar Com- munist China. e Communist Chines intervention in Korea is no 3 for refusal to admit her into the United Na ations. That is the advantage of perpetuating moral censure over Korea? The exclusion of an undisnayed aggres sor bring 3 us nearer another Korean War. Conclusion The overriding responsibility of the United Nations is to ensure that there are no more wars. 1. The China question must come to a hes d. 2. Peiping cannot be ignored. Uncle Sam will have to face reality: ”is I was going up the stair, I met a man who wasnit there. "ire wasn't there 8.333111 1:0de I W135, I WiSha he'd Stay away.“ (anonymous) APPENDIX 3 STUDENT INFOBHATIOW FORK 3P3: W 101 II “"WMATIO" W‘OYI'I Complete his form and turn it in to your recitation instructor by the e111 of the first week of classes. 1. Full name: 2. Name by which you wish to be called in class: 3. Your lecture Section Uumber: heets in Room: J U. necitation section number: Meets in Room: O 6. Your ‘order of speakin3= number ass13ned to you: \J 7. Your year in school: F esh. Sozah. Jr. Senior Grad. o C. Your major or pr er ence: 9. Your Adviser: peech xperience: (Describe briefly) U) 10. Hish 3ohool 11. Previous Colle3e Course. in Speecn: (Describe or give numbers) 12. Vocational Ambitions: 13. Oojectives in taki 3 3peech lOl: lb. I-Iajorr1 difficulties as a speaker, based on your analys s and c01:;1 s of others: 15. Are y01dintercsted in extracurricular speecha activities such 33 33b-3 t3, Or1t03y, extein speaking, oral interpretation, or discussion If so, indicate experience and/or lutwmwnt 1191‘”: o .3171’737T'3IX 33‘ H '1“ '1’ 11:131th Fri-’1‘? 9" "’1"”fi. .31)-: J .31. 10]. J... .J 3..-.L- \J_L\J..- .) 1.. n..*. v..-. a ”*‘I’T‘ r .J 111.: v .3. SP1: H 101 L) H d LU 3.L ?OLICI33 A73 IQCC13 Dc.1) The policies ani irocedures noted below Ire the result f several yetrs' eXperience in teaching Speech lOl. Coveree below are such items as: (1) general university procedures affecting Speech 101, (2) general policies and T0313 of the course as laid d un by the original planning c1initt12, (3) policies establishei by t1e Speech 101 staff, and (3) opera- tional p ocedures which have become regularized after several years' erperie11ce. Some of the items covered below may be subject to criticism end review. All are subject to expla- netions1nl clarificat101 at staff meeting. 1. 3TZTT I13TI" 3 33 ulr,1r neetin;s of the current teaching stsff and the course che 1rns1 are 1131:. each week, usually 011-.Iei1esday I'orm‘ at 10:06 3.n. Attendance at these Hect11us is expected,3 aqi those stsff nowhere who also enroll for class es are askei to keep this hour free. 2. TCTUZE 301-33L: All Speech lOl lecturers :re scheeulei 07.1 Ton3ays at 10:00 a.m. and 2:00 p.m. Since the teachin3 staff supervises these ectures, es well as the mid-tern exen, eseh :e1ber of the sxaff shgulj hold open 01 his scbeitle either 10:00 qrv 2:"CC on 2752113337" 3. IZOCICZTfi C? TTLTVI3CD L3CTUZT3 All speech lCl lecturers are televise in rooms. If atte:1dsnce is to be checked, 3 1 11. W111 be asheg to proctor one or more lecture roons, sn' neck attenisnce. The general proceiure is for the proctor i t in the room on the day of the first 3 then turned into the lectire chief, p . k all nan s of oreinst the master lecture roll. '1' v a ‘ fi4~ “,'\ 3. “o -. _ ll- 1'," ‘fi' . 
The general procedure is for the proctor to pass a sign-up sheet in the room on the day of the first lecture. The sheet is then turned in to the lecture chief, who will check all names against the master lecture roll and return the sign-up sheet to the proctor, who will use this sheet on subsequent lecture days and compare it with the master list. All students who have not been checked off the lecture chief's master list are reported to the staff as not having attended any lectures.

4. MID-TERM GRADES
Mid-term grades will not be issued for every student; in general, only certain groups of students, such as transfers and students on scholarship probation, receive mid-term grades. These grades will be processed for you by the computer and will be based on the first instructor-graded speech and the mid-term examination. They are intended as estimates only and do not become a part of the student's permanent record.

GRADES AND GRADING PROCEDURES
Speech 101 is rapidly moving toward a computerized method of recording grades, and a brief description will serve to introduce you to our grading methods. Always use the evaluation sheets provided you when grading speeches; the speech is graded from 0 to 7 on each evaluation sheet. Inform your students that this score represents the speech AS YOU HAVE RATED IT. The grade also covers the written material prepared in connection with the speech: when you have read the outline and other required materials, record your final grade for the speech on the original evaluation sheet and in your master grade book. The procedure for computing and recording your grades is similar to that mentioned above; the separate totals for each assignment are added, and taken together they indicate improvements or failures in performance over the term. Neither the instructor nor the students should attempt to translate number-scores into letter grades before all information is available.

EXAMINATION GRADES
Examinations are made up by the staff as a group, administered by the staff, and the final grade for examinations is determined by the staff. Examination results will be returned as raw scores reported to you, along with relevant information intended to help the student determine where he stands.

REGULATIONS CONCERNING GRADES
Retain all material pertaining to grading instructions that you receive, or see that it is filed with the course chairman, since questions about these grades may arise after you have left M.S.U. Speech and examination grades will be totaled, converted (0-7), recorded, and reported. If a given grade does not reflect what you believe the student should receive (for example, because of absences, failure to complete an assignment, peer grades, etc.), the grade will be adjusted in consultation with the course chairman; do not make adjustments to final grades without this consultation. Two further regulations are important: (1) as noted earlier, the grade a student receives for a speech is based not only on his oral performance but also on the quality of his preparation and any other factors which the instructor may deem pertinent or the syllabus may require; (2) think in terms of this point scale, not in terms of letter grades, and record a grade of 0 for a speech not delivered.
[The next several pages of the manual, which deal with the recitation and lecture schedule, the written assignments, and the master grade sheet, are largely illegible in the source copy. The legible text resumes with the instructions for the student orientation sessions.]

The student orientation sessions are to be completed before the first regular recitation meetings. In them the students go over the course requirements with the instructor, along with the criteria for the evaluations. Note that the students themselves will be responsible for conducting the class in each peer-group meeting. In each class the students will be responsible for: (1) listing the speakers for the day on the board, with their recitation numbers; (2) introducing the speakers to the class; (3) checking the roll and recording it, with the students' names and their recitation numbers; (4) providing the evaluators with their critique sheets; and (5) conducting the critiques and ratings.
. . . on file. Evaluators will also assign a number grade from 0 to 7, following the instructions in 12 below. One evaluator will make an oral evaluation of each student's speech, and if time permits, the whole group of evaluators may discuss the speech or the day's speeches.

12. The instructor should average the student-assigned grades, adjust this grade if necessary because of outline quality, and record it on the student's outline, which will be returned to him at the next meeting. THIS IS IMPORTANT -- do not read this to the students! Record the number grade (after you have averaged them, etc.) and accept it as being as valid as the ones you give. You may or may not include this grade in compiling the course grade. This will be decided later (as a suggestion, it is a good idea to record the student-evaluated speech grades in red, yours in blue).
13. DO NOT TELL THE STUDENT YOU MAY OR MAY NOT COUNT HIS GRADING AS EQUAL TO YOURS, BUT INFORM THE STUDENT THAT HIS GRADE IS IMPORTANT ONLY AS A CONFIRMATION OF YOURS. In this way the student will be encouraged to give more realistic grades. Inform the students to remember this in their evaluation: (a) their grades will be recorded; (b) the actual grade is not so significant, but the instructor wants to learn if the relative rating given by students and instructor match; (c) thus, they should not be afraid to assign 0 or 1 for a poor speech, or 6 or 7 for an excellent speech; (d) they are not flunking a student or giving him an A, but merely assisting the instructor and also their fellow students in indicating the need for improvement.

14. Remind the students that in order for you to hear the final speeches of all students, one group will present Topic VI before V. Students will have to adjust their readings accordingly. The group delivering Topic VI before Topic V will be indicated on their group master sheet schedule which you will hand out.

15. In summary of the operation, then, the classes work as follows: you will be with Group A, say, for Topics I, III, and VI; with Group B for Topics II, IV, and VI. For other Topics students will be on their own. Record your grades in blue, student grades in red, on your master grade sheet.

THIS MUST BE DONE BEFORE YOU ADJOURN THE ORIENTATION SESSIONS:

A. Go over Topic I in the syllabus, which will begin next class meeting. (The related readings in the text will make it much clearer for them.)

B. Select several students and have a "dummy" class meeting, letting the students run the class as if in the peer-grouping situation. Make sure they understand the operational details, since half of the class will be on its own for the next week or so.

Make sure that there are no unanswered questions as to course operation or requirements. Be sure that all students have copies of the schedule, the syllabus, and other hand-out materials, and that they understand that it is their responsibility to know what they are to be doing, and when.

THAT'S ALL THERE IS TO IT!

Speech 101 Procedures for Item Writing

F. Craig Johnson
George R. Klare

You have been requested to turn in five test items for each week you teach Speech 101. Presently you should use the assigned chapters in the text as your source material. Eventually we hope to have specific objectives for all the text and lecture materials. (You will note references to objectives in these materials and a place for them on the item form, but do not let this concern you for the present.) The procedures described here will help you construct items in the desired fashion. But remember -- originality of ideas is more important than highly polished form.

I. Suggestions for item writing -- general.

A. The introductory part of a multiple-choice or short-answer item is called the "stem." It is this part to which the correct choice or short answer must be added to form a true statement.

B. The stem may be of either the incomplete or complete statement form. The following examples present the two forms.

1. Incomplete: The difference between "cop" and "cap" is called ________.

2. Complete: What is the difference between "cop" and "cap" called? ________

The incomplete form is preferable because it is generally more efficient; however, either form will be acceptable here.

C. The stem should contain a central problem or theme, and should be related to one of the objectives of instruction. The theme should be as clearly stated as possible.
A quick test for clarity is whether or not a person must read the choices in a multiple-choice item before he can understand the stem; if so, the stem can often be improved.

D. Simplicity of statement should be a major consideration in item writing.

1. Use as simple language (vocabulary) as possible.

2. Be brief, but be as specific as possible. If the stem must be long, use several short sentences rather than one long one.

3. In multiple-choice items, include as much as possible of the problem in the stem itself in order to avoid repetition in the choices. Try to put the verb and article in the stem, for example.

4. Use strong rather than weak sentence structure. For example, a statement beginning "The best measure of ..." is preferable to one beginning "It is best to use the ..."

5. Whenever possible, use the positive rather than the negative form of statement. That is, avoid saying something is not characteristic of an object or situation when it is possible to say something is characteristic. If you use a negative form, emphasize this in some way (e.g., by underlining) so the subject does not miss it for the wrong reasons. (The word "subject" refers to the person taking the test.)

Wording of items should not be exactly the same as that of the objectives. The vocabulary used, however, should be no more difficult than that of the objectives.

II. Suggestions for item writing -- specific.

A. Most of the information specific to multiple-choice items is related to the choices or alternatives to be used. There are two kinds of choices:

1. Correct answers.

2. Incorrect answers, usually referred to as "distracters" or "foils."

B. In composing choices (as well as items themselves), try first to draw as much as possible from the objectives themselves. Next, assume that you are going to deliver a lecture based on these objectives. Write further items based on the content you have been given.

C. For each item, be sure to include five choices. All choices should, where possible, seem reasonable from a logical and grammatical point of view.

1. Distracters should attract and appeal to people who do not know the correct answer.

2. All distracters should appear equally possible as correct answers (i.e., all distracters should "work").

3. Questions should not involve compound responses (e.g., both a "what" and a "why" part). Revise this type of item by dividing it into two.

D. Avoid irrelevant or extraneous clues that lead a subject to choose a particular answer. In other words, do not let some sign that is unrelated to actual knowledge of the subject matter cause the subject to select a particular choice as correct or incorrect. Such a sign is called a "specific determiner"; some common ones to avoid are the following:

1. Lack of parallel grammatical structure in stem and some choices (e.g., failure of subject and predicate to agree).

2. Use of particular words -- for example, "always" or "never." These words tend to appear in false or incorrect statements much more than in correct ones.

3. Use of a consistently larger or smaller number of words in the correct alternative.

4. Use of excessive specification or caution in correct alternatives as opposed to incorrect ones.

5. Repetition of some terms of the stem in one of the choices. This often is a clue to the correctness of the choice.

6. Opposites. These may serve as specific determiners if they clearly allow the student to narrow his choices to two:

The Washington Monument is
(1) lower than the Eiffel Tower.
(2) higher than the Eiffel Tower.
(3) located in New York City.
(4) made of copper plate.
(5) a tribute to Booker T. Washington.

The correct answer is clearly one of the first two choices, so the item is really only a two-choice item.

7. Highly technical terms. They may serve as specific determiners when they seem obviously out-of-place (e.g., when the item-writer obviously had to "reach" for an additional distracter). Take the following example:

Politicians are most often known for their
(1) sincerity.
(2) patriotism.
(3) honesty.
(4) size.
(5) transcendentalism.

You might select choice (1), (2), (3), or (4), depending upon your feelings about politicians; you would be unlikely, however, to select choice (5), since it seems out of place.

8. Overlapping of two alternatives. This may often permit the subject to eliminate these alternatives as the correct choice, since either could be correct. Consider the following example:

The rainfall at the North Pole, as compared to the Equator, is
(1) much larger.
(2) larger.
(3) a little larger.
(4) about the same.
(5) smaller.

If the correct choice were (1) or (3), the student could logically argue that (2) should also be correct. He could, therefore, quickly jump to (5) as the correct choice.

9. Synonymous choices. These can be eliminated in much the same way, since either could be correct.

10. Use of "none of the above" or "all of the above." "All of the above," particularly, may serve as a specific determiner, since a subject may be able to select it when he knows only two of the choices to be true. "None of the above" is a problem because it is hard to construct items in which all other choices are absolutely wrong; it is easier to construct items in which some choices are more correct than others, but in this case "none of the above" is not acceptable.

11. Unreasonable numerical answers. Subjects can often arrive at a correct answer they do not "know" simply by eliminating unreasonable alternatives.

E. Placement of the correct alternative should usually be random, or at least approximately equal numbers of each position (1, 2, 3, 4, and 5) should be used as correct. (An exception to random assignment is the case where the answers can be placed into numerical order of magnitude or some other logical order.)

APPENDIX G

ORIGINAL SPEECH 101 EVALUATION FORM

Speech Evaluation

I. Topic
II. Materials of Speaking
III. Organization
IV. Adaptation to Audience
V. Language and Style
VI. Presentation

APPENDIX H

FORTY-EIGHT ITEM RATING SCALE

INSTRUCTIONS

We are interested in your judgment of the speech you have just heard. On the next pages are a series of statements on which you are asked to judge this speech. These statements look like this:

Speech was good  __:__:__:__:__:__:__  Speech was bad
                 +3  +2  +1   0  -1  -2  -3

If you felt that this speech was extremely good, you would place a check mark in the space which is indicated by +3 above; if quite good (but not extremely good), you would mark in the space indicated by +2; if slightly good, in space +1; if quite bad, in -2; and if extremely bad, in -3.

Be sure to put a check mark somewhere along each scale. Put your check within the spaces, not on the dots separating the spaces. Put one and only one check on each scale. DO NOT OMIT SCALES.

Please make each item a separate and independent judgment. We want "first impressions" so go through the scales fairly rapidly. Thank you for your help.
Poor improvement __:__:__:__:__:__:__ Good improvement
I disagree with the speaker __:__:__:__:__:__:__ I agree with the speaker
Speaker was calm __:__:__:__:__:__:__ Was not calm
Speaker knew material __:__:__:__:__:__:__ Did not know material
Speaker was not confident __:__:__:__:__:__:__ Speaker was confident
Good use of speaker's stand __:__:__:__:__:__:__ Poor use of speaker's stand
Poor motive appeals __:__:__:__:__:__:__ Good motive appeals
Poor speaking voice __:__:__:__:__:__:__ Good speaking voice
Speaker's personality good __:__:__:__:__:__:__ Personality not good
Lack of warmth __:__:__:__:__:__:__ Warmth
Speaker did not know speech well __:__:__:__:__:__:__ Speaker knew speech well
Speech was smooth __:__:__:__:__:__:__ Was not smooth
Topic interesting __:__:__:__:__:__:__ Not interesting
Good speaker's attitude toward his topic __:__:__:__:__:__:__ Poor speaker's attitude toward topic
Good attention level __:__:__:__:__:__:__ Poor attention level
Speaker well prepared __:__:__:__:__:__:__ Not well prepared
Poor choice of topic __:__:__:__:__:__:__ Good choice of topic
Poor physical appearance __:__:__:__:__:__:__ Good physical appearance
Poor vocal inflection __:__:__:__:__:__:__ Good vocal inflection
Speaker was not poised __:__:__:__:__:__:__ Speaker was poised
Good use of materials of development __:__:__:__:__:__:__ Poor use of materials of development
Speaker was not enthusiastic __:__:__:__:__:__:__ Speaker enthusiastic
Purpose clear __:__:__:__:__:__:__ Purpose not clear
Poor use of examples __:__:__:__:__:__:__ Good use of examples
Good diction __:__:__:__:__:__:__ Poor diction
Speaker was not sincere __:__:__:__:__:__:__ Speaker was sincere
Speech met assignment __:__:__:__:__:__:__ Did not meet assignment
Variety __:__:__:__:__:__:__ No variety
Good use of notes __:__:__:__:__:__:__ Poor use of notes
Speech was not ethical __:__:__:__:__:__:__ Was ethical
Speech was not original __:__:__:__:__:__:__ Was original
Speaker was not friendly __:__:__:__:__:__:__ Was friendly
Good logical reasoning __:__:__:__:__:__:__ Poor logical reasoning
Speaker was not courteous __:__:__:__:__:__:__ Was courteous
Good use of humor __:__:__:__:__:__:__ Poor use of humor
Speech difficult to follow __:__:__:__:__:__:__ Easy to follow
Topic appropriate __:__:__:__:__:__:__ Not appropriate
Good facial expression __:__:__:__:__:__:__ Poor facial expression
Poor total effect __:__:__:__:__:__:__ Good total effect
Good eye contact __:__:__:__:__:__:__ Poor eye contact
Good use of materials of experience __:__:__:__:__:__:__ Poor use of materials of experience
Poor choice of words __:__:__:__:__:__:__ Good choice of words
Few vocalized pauses __:__:__:__:__:__:__ Many vocalized pauses
Favorable class reaction __:__:__:__:__:__:__ Unfavorable class reaction
Poor organization __:__:__:__:__:__:__ Good organization
Speaker was not pleasant __:__:__:__:__:__:__ Was pleasant
Poor citation of sources __:__:__:__:__:__:__ Good citation of sources
Good use of evidence __:__:__:__:__:__:__ Poor use of evidence

APPENDIX I

TWENTY-FIVE ITEM RATING SCALE

INSTRUCTIONS

We are interested in your judgment of the speech you have just heard. On the next pages are a series of statements on which you are asked to judge this speech. These statements look like this:

Speech was good  __:__:__:__:__:__:__  Speech was bad
                 +3  +2  +1   0  -1  -2  -3

If you felt that this speech was extremely good, you would place a check mark in the space which is indicated by +3 above; if quite good (but not extremely good), you would mark in the space indicated by +2; if slightly good, in space +1; if quite bad, in -2; and if extremely bad, in -3.

Be sure to put a check mark somewhere along each scale. Put your check within the spaces, not on the dots separating the spaces. Put one and only one check on each scale. DO NOT OMIT SCALES.

Please make each item a separate and independent judgment. We want "first impressions" so go through the scales fairly rapidly. Thank you for your help.

Student No. ______
Section No. ______

Speaker was enthusiastic __:__:__:__:__:__:__ Was not enthusiastic (9)
Personality not good __:__:__:__:__:__:__ Speaker's personality good (10)
Variety __:__:__:__:__:__:__ No variety (11)
Not poised __:__:__:__:__:__:__ Poised (12)
Good logical reasoning __:__:__:__:__:__:__ Poor logical reasoning (13)
Was not sincere __:__:__:__:__:__:__ Speaker was sincere (14)
Good speaking voice __:__:__:__:__:__:__ Poor speaking voice (15)
Poor use of examples __:__:__:__:__:__:__ Good use of examples (16)
Poor facial expression __:__:__:__:__:__:__ Good facial expression (17)
Good use of humor __:__:__:__:__:__:__ Poor use of humor (18)
Topic interesting __:__:__:__:__:__:__ Not interesting (19)
Was not friendly __:__:__:__:__:__:__ Speaker was friendly (20)
Speaker well prepared __:__:__:__:__:__:__ Not well prepared (21)
Speaker was calm __:__:__:__:__:__:__ Was not calm (22)
Poor organization __:__:__:__:__:__:__ Good organization (23)
Speaker was courteous __:__:__:__:__:__:__ Was not courteous (24)
Poor eye contact __:__:__:__:__:__:__ Good eye contact (25)
Good vocal inflection __:__:__:__:__:__:__ Poor vocal inflection (26)
Poor speaker's attitude toward his topic __:__:__:__:__:__:__ Good speaker's attitude toward topic (27)
Good diction __:__:__:__:__:__:__ Poor diction (28)
Few vocalized pauses __:__:__:__:__:__:__ Many vocalized pauses (29)
Poor physical appearance __:__:__:__:__:__:__ Good physical appearance (30)
Poor total effect __:__:__:__:__:__:__ Good total effect (31)
Speaker knew speech well __:__:__:__:__:__:__ Did not know speech well (32)
Poor use of evidence __:__:__:__:__:__:__ Good use of evidence (33)

APPENDIX J

TWELVE ITEM RATING SCALE

INSTRUCTIONS

We are interested in your judgment of the speech you have just heard. On the next pages are a series of statements on which you are asked to judge this speech. These statements look like this:

Speech was good  __:__:__:__:__:__:__  Speech was bad
                 +3  +2  +1   0  -1  -2  -3

If you felt that this speech was extremely good, you would place a check mark in the space which is indicated by +3 above; if quite good (but not extremely good), you would mark in the space indicated by +2; if slightly good, in space +1; if quite bad, in -2; and if extremely bad, in -3.

Be sure to put a check mark somewhere along each scale. Put your check within the spaces, not on the dots separating the spaces. Put one and only one check on each scale. DO NOT OMIT SCALES.

Please make each item a separate and independent judgment. We want "first impressions" so go through the scales fairly rapidly. Thank you for your help.

Project No. ______   Student Number ______   Speaker Number ______

Good use of evidence __:__:__:__:__:__:__ Poor use of evidence (16)
Poor facial expression __:__:__:__:__:__:__ Good facial expression (17)
Speaker was calm __:__:__:__:__:__:__ Was not calm (18)
Good logical reasoning __:__:__:__:__:__:__ Poor logical reasoning (19)
Speaker not well prepared __:__:__:__:__:__:__ Speaker well prepared (20)
Good use of examples __:__:__:__:__:__:__ Poor use of examples (21)
Poor organization __:__:__:__:__:__:__ Good organization (22)
Speaker was enthusiastic __:__:__:__:__:__:__ Was not enthusiastic (23)
Variety __:__:__:__:__:__:__ No variety (24)
Speaker knew speech well __:__:__:__:__:__:__ Did not know speech well (25)
Poor use of humor __:__:__:__:__:__:__ Good use of humor (26)
Speaker was not poised __:__:__:__:__:__:__ Speaker was poised (27)

APPENDIX K

Effect
Logical Reasoning
Evidence
Materials of Development
Organization
Preparation
Personal Proof
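Each of the rating scales reproduced above is marked by a check in one of seven spaces valued from +3 to -3, and the favorable pole of an item is printed sometimes on the left and sometimes on the right. As an editorial illustration only, and not a procedure taken from the course materials, the following Python sketch shows one way such markings could be converted to numbers and summed; the items named in it and the set treated as reverse-scored are examples, not the course's own scoring key.

    # A minimal sketch, assuming each item is checked in one of seven spaces
    # valued +3 through -3 from left to right, and that items whose favorable
    # pole is printed on the right are reversed so +3 always means "favorable."
    SCALE_VALUES = [3, 2, 1, 0, -1, -2, -3]          # left-to-right space values

    # Hypothetical reverse-scored items (favorable pole printed on the right).
    REVERSED_ITEMS = {"Poor organization", "Poor use of humor"}

    def item_score(item, space_index):
        # space_index runs 0 (leftmost space) through 6 (rightmost space)
        value = SCALE_VALUES[space_index]
        return -value if item in REVERSED_ITEMS else value

    def total_score(markings):
        # markings: dict mapping item text to the checked space index
        return sum(item_score(item, index) for item, index in markings.items())

    # Example: "Good use of evidence" checked at +2, "Poor organization" at -1.
    print(total_score({"Good use of evidence": 1, "Poor organization": 4}))  # prints 3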