This is to certify that the thesis entitled BRIDGING THE GAP BETWEEN DIDACTIC AND EXPERIENTIAL LEARNING: EMPLOYEE PROGRESS INTERVIEWS presented by DEON JAYE GINES has been accepted towards fulfillment of the requirements for the Ph.D. degree in Human Nutrition. Major professor. Date: July 17, 1979.

BRIDGING THE GAP BETWEEN DIDACTIC AND EXPERIENTIAL LEARNING: EMPLOYEE PROGRESS INTERVIEWS

By

Deon Jaye Gines

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY

Department of Food Science and Human Nutrition

1979

ABSTRACT

BRIDGING THE GAP BETWEEN DIDACTIC AND EXPERIENTIAL LEARNING: EMPLOYEE PROGRESS INTERVIEWS

By Deon Jaye Gines

The objectives of this project were (1) to study the differences in student performance of employee interviews across variable levels of student involvement with the learning materials, and (2) to compare random segment evaluation with whole evaluation procedures.

The unit was developed from learning outcomes, with a final objective to demonstrate the ability to plan and conduct a simulated progress interview. Test questions were written for the objectives. Objectives and test items were placed on a rating scale and six expert reviewers rated them. An analysis of the information to present in the unit was completed following an instructional development model. Three student volunteers completed a formative evaluation.

Two units were completed, identical in content. One included written model answers to the embedded questions (unit with examples) and one included questions with space for the students to write answers (unit with practice).

Four scenarios were written for practice. Practice sessions following completion of the self-instructional unit were arranged for certain students to conduct simulated interviews (role players), certain students to observe (observers), and certain students to observe and evaluate the simulated interviews (directed observers).

Students participating in this study included forty students in the General Dietetic Coordinated Study Plan (GDCSP) at Michigan State University. Students completed a personal information sheet, a written pre-test, a videotaped pre-test interview, and a self-assessment of their interview. The unit was then distributed to the students and was completed individually.

The following week, students completed a written post-test and participated in a role-play practice session. After the practice, students were asked to complete the attitude survey regarding the unit. Students were given a scenario to utilize to prepare for the post-test interview and completed a self-evaluation of post-test performance.

Item analysis statistics were completed. The written test was divided into sub-tests by enabling objectives to ascertain which objectives had been met and which had not.

Three Juniors and five Seniors showed an acceptable level of performance on the pre-test, while 19 Junior and 20 Senior students reached the minimum performance criterion level on the post-test. Student scores on the unit with practice and student scores on the unit with examples were compared and no significant difference was found.
No student met the minimum criteria for performance on the pre-test interview; 15 Junior and 19 Senior students met the minimum criterion level (.75) on the post-test, and it was concluded that the unit positively affected learning. Three-way ANOVA was applied to test for significance of difference between the groups on the post-test performance. The Junior-level directed observers did less well than the other test groups. It was concluded that all students do not have to participate in a role-play session to learn from it. Senior students perceived learning more by using the materials which required practice, and Senior students completing the unit with practice felt that the materials were clearer in comparison with the unit with examples.

Costs for the unit included the developer's time, typing, paper and other materials, duplication costs, actress time, and videotapes. A major expenditure was the time spent in evaluating the pre- and post-test videotaped interviews. It is concluded, since the materials can be used with large numbers of students at minor expense, that they are economical.

The length of each interview was determined in units by the VTR counter and this number was divided into 15-unit segments. Half of the units comprising each interview were drawn randomly for evaluation. The mean of the two instructors' evaluations was compared with the rating given the full-length evaluation; the reliability was .45 for the Junior students and .51 for the Senior students. Random sample evaluation via this procedure is not reliable enough to use to assign individual grades.

ACKNOWLEDGMENTS

I am aware of the impact on my thinking of several persons whose ideas, support, and enthusiasm were of inestimable help in this research. Special thanks to: Rose M. Tindall, Committee Chairperson; Stephen Yelon, Assistant Director, Learning and Evaluation Services; Burness Wenberg, Coordinator, Dietetic Undergraduate Curriculum; Castelle Gentry, Committee Member; Alice Spangler, Committee Member; Gilbert Leveille, Committee Member; and to my husband, John R. Schweitzer. Recognition of moral and emotional support must also be mentioned. Very special thanks to my parents, Don and LaRee Gines, and again to my husband, John. This project would not have been possible without the assistance of the General Dietetic Coordinated Study Plan students and clinical faculty, including Jane Allendorph, Mary Jo Morrissey, and Carrie Hornby.

TABLE OF CONTENTS

LIST OF TABLES . . . vi
LIST OF FIGURES . . . ix

Chapter
I. INTRODUCTION
     Nature of the Problem
     Problem Statement
     Justification
     Summary
     Limitations
     Assumptions
     Definitions
     Hypotheses
II. REVIEW OF RELATED LITERATURE
     Dietetic Education
     Transfer
     Teaching Alternatives
     Criterion-Referenced Testing and Measurement
     Empirical Item Analysis
     Attitude Scaling
     Random Sample Segment Evaluation
     Diagnosis and Revision in the Development of Instructional Materials
     Summary
III. METHODOLOGY
     Design
     Preliminary Procedures
     Research Procedures
IV. RESULTS . . . 75
     Progress Interview Unit Preliminary Evaluation . . . 75
     The Sample . . . 78
     Pre- and Post-Written Examination Evaluation . . . 78
     Item Analysis of Criteria Checklist . . . 84
     Evaluation of the Written Objective Examination Results . . . 89
     Predictability of Practical Performance from Performance on the Written Examination . . . 95
     Comparison of Instructor, Student, and Random Sample Segment Evaluations . . . 97
     Students' Attitude Survey . . . 105
     Costs . . . 110
V. SUMMARY AND CONCLUSIONS . . . 114
     Progress Interview Unit Preliminary Evaluation . . . 114
     Pre- and Post-Written Examination Evaluation . . . 114
     Evaluation of the Videotaped Post-Test . . . 116
     Predictability of Practical Performance from Performance on the Written Examination . . . 118
     Student Self-Evaluation of Performance . . . 118
     Random Sample Segment Evaluation . . . 121
     Student Attitude Survey . . . 123
     Costs . . . 124
     Generalizability . . . 127
     Summary . . . 128
VI. RECOMMENDATIONS . . . 131
     Implications for Future Research . . . 132
     Suggestions for Revisions . . . 135

Appendix
A. Admission Requirements for MSU GDCSP . . . 138
     Evaluation Strategies for HNF 480-Foodservice Systems Management . . . 140
     General Objectives for Residence Hall Experience . . . 149
B. Progress Interview Unit Objectives and Test Items . . . 151
     Selected Items from Progress Interview Unit . . . 169
C. Personal Information Sheet . . . 181
     Students' Attitude Survey . . . 183
     Criteria Checklist . . . 186
     Written Pre- and Post-Test . . . 188
D. Student Attitudinal Comments . . . 194

BIBLIOGRAPHY . . . 205

LIST OF TABLES

Table
1. Test Group Assignments . . . 54
2. Hierarchical Analysis of the Progress Interview Unit . . . 57
3. Gagne's Domains of Learning . . . 58
4. Performance on Enabling and Terminal Objectives by Class Level . . . 77
5. Description of Subjects Tallied from Personal Information Questionnaire . . . 79
6. Written Objective Examination Item Analysis Statistics, Juniors (n=20) and Seniors (n=20) . . . 81
7. Multiple Choice Question Pattern of Response by Percent of Subjects Choosing Each Response, Pre- and Post-Test Written Examination, Juniors (n=20) and Seniors (n=20) . . . 83
8. Percentage of Junior Subjects (Unit with Practice) Scores on Criteria Checklist Pre- and Post-Tests . . . 85
9. Percentage of Junior Subjects (Unit with Examples) Scores on Criteria Checklist Pre- and Post-Tests . . . 86
10. Percentage of Senior Students (Unit with Practice) Scores on Criteria Checklist Pre- and Post-Tests . . . 87
11. Percentage of Senior Students (Unit with Examples) Scores on Criteria Checklist Pre- and Post-Tests . . . 88
12. Junior Students' Written Examination Pre- and Post-Test Scores . . . 89
13. Senior Students' Written Examination Pre- and Post-Test Scores . . . 90
14. Written Examination Pre- and Post-Test Summary Statistics by Group of Subjects . . . 92
15. Interview Post-Test Cell Scores and Means for Subjects by Group . . . 94
16. Two by Two by Three-Way ANOVA Table for Test Groups on Post-Test Interview . . . 95
17. Comparison of Subjects' Written Examination vs. Videotaped Interview, Percent of Total Possible Score . . . 96
18. Junior Students' Videotaped Interview Scores, Pre- and Post-Test by Two Instructors and Students' Self-Evaluation . . . 98
19. Senior Students' Videotaped Interview Scores, Pre- and Post-Test by Two Instructors and Students' Self-Evaluation . . . 98
20. Inter-Rater Reliability Coefficients Between Instructors and Students on Videotaped Post-Test Scores . . . 101
21. Comparison of Directed Observers' Self-Evaluation with Instructors' Mean Evaluation Score . . . 101
22. Students' Videotaped Interview Random Sample Segment Scores, by Two Instructors . . . 103
23. Sequence and Number of Observations for Subjects on Post-Test Random Sample Segment Evaluation . . . 104
24. Inter-Rater Reliability Coefficients Between Two Instructors Using Random Sample Segment Evaluation . . . 104
25. Comparison of Subjects' Whole Evaluation and Random Sample Evaluation Scores on Videotaped Interview . . . 105
26. Attitude Survey Mean Ratings Displayed by Question and Subject Grouping . . . 106
27. ANOVA Cell Means on Attitude Survey under Category "Perception of Amount Learned" . . . 109
28. Two-Way ANOVA Table for Attitude Survey under the Category "Perception of Amount Learned" . . . 109
29. Developmental Costs . . . 111
B1. Tally of Reviewers' Ratings of Objectives . . . 155
B2. Tally of Reviewers' Ratings of Test Items . . . 168

LIST OF FIGURES

Figure
1. Two by Two by Three-Way ANOVA Design . . . 55
2. The Progress Interview Model . . . 60
3. Timeline of the Unit Implementation . . . 72
4. Examples of Random Sample Sets of Videotaped Performance Segments by Counter Number . . . 74

CHAPTER I

INTRODUCTION

Current trends in dietetic education include competency-based curriculum, coordination of didactic learning and field experience, and multiple strategies to individualize learning (Essentials, 1976; Roach, 1978; Breese, et al., 1977.) Each dietetic program is responsible for developing an educational system which follows these recommendations and for testing and evaluating it.

Self-instructional learning materials have been developed in many dietetic programs and are recommended for the following reasons. Self-instructional materials with a competency-based foundation allow students an opportunity to better coordinate clinical and didactic experiences, since the materials can be studied individually, and allow students to spend variable amounts of time on the materials to reach competency.

As part of an evaluation system, simulation has been recommended to allow more reliable evaluation of the students' performance of the skills to be learned (Muslin, et al., 1974.) Simulation as an instructional tool can be described as a selective representation of reality. Simulation is an effective method of eliciting complex skills or behaviors and permits practice of those skills to increase the transfer of learned skills to real settings. Simulation has been recommended as an evaluation tool in situations where real world evaluation is not feasible or practical (Ward, undated.)

Sets of recommended competencies for entry-level generalist dietitians have been developed by various researchers (MSU, 1976; Howard and Shiller, 1977; FSMEC, 1975.) Employee progress interviewing has been considered an essential competency; however, it is a complex skill which is difficult to teach in a lecture mode and is also difficult to structure as a real world experience, particularly in facilities with labor unions. Self-instructional materials and simulation appear to be possible instructional alternatives to facilitate student learning of employee interviewing and to ensure transfer of these skills to a professional setting.

Nature of the Problem

The Michigan State University General Dietetics Coordinated Study Plan (GDCSP), Department of Food Science and Human Nutrition, College of Human Ecology, has been developed as a competency-based professional curriculum, and evaluation strategies have been formulated by the faculty to reflect the needs of the entry-level dietetic practitioner.
The American Dietetic Association (ADA) has responsibility for setting the academic standards for the GDCSP, which are described in the Essentials for Coordinated Undergraduate Programs in Dietetics with Self-Study Guide (Essentials, 1976.)

The course, Practice of Dietetics (HNF 480), is a component of professional preparation in the GDCSP which allows students to practice, with supervision, professional skills in a real world setting. HNF 480 is described in the 1979 MSU Description of Courses (p. 481) as follows: Application and integration of nutrition and managerial concepts related to the practice of dietetics. HNF 480 is comprised of two sections: one with emphasis on clinical dietetics and one with emphasis on foodservice systems management. In the context of this project, HNF 480-Foodservice Systems Management is of major interest. A more detailed description of the course is offered in Appendix A. An attempt to facilitate transfer of theoretical concepts in didactic instruction to the actual performance of skills in a real setting is a major focus of this educational endeavor.

Admission to the GDCSP is limited to 20 students per year. Eligibility requirements have been developed and published (see Appendix A.) Eligible student applications are numbered and 20 are chosen by random selection. Enrollment in HNF 480 is limited to 10 students each term the course is offered (Winter and Spring terms.) This controlled enrollment is necessitated partly by the fact that a limited number of acceptable (in terms of proximity and quality of experience) field placement sites are available in the Lansing area wherein students can attempt to fulfill the 900 to 1,000 clock hour field experience requirement established by ADA. The MSU residence hall system is the only contracted facility and permits 10 placement positions per term for two terms per year.

ADA will accept a certain unspecified number of hours spent by students in self-instructional settings and simulated settings as part of the experiential requirement. If the entry-level competencies of the graduates of a program have been identified, and appropriate measurement strategies developed with a supportive curriculum, fewer than 900 hours may be scheduled in field placements; hours in simulation and self-instructional materials may be counted. ADA has not described particular self-instructional modes or simulation types, thus one has many alternatives as long as the outcomes of the instruction can be appropriately measured. These self-instructional and simulation materials may allow more students to learn and practice professional skills, while still meeting experiential hour requirements, when the field experience facilities are limited.

Currently it is difficult to individualize the sequence of coursework to make a timely match with concurrent field experiences, since students and instructor meet one day each week for scheduled class sessions to cover specified topics. The development and use of self-instructional materials may have the effect of allowing students to study materials at an appropriate time in the field experiences. Simulation practice sessions may increase transfer or application to other settings.

Following a recommended procedure of videotaping student performance for later evaluation has advantages in terms of student learning, but the time requirements may be prohibitive.
Alternative evaluation procedures to decrease the time needed, while maintaining evaluation reliability, could increase the feasibility of using videotaping. Random sampling of videotaped performances and student self-assessment are possible advantageous alternatives.

The topic of employee interviewing has been identified by several institutions as an important entry-level competency and is of interest to this researcher. No self-instructional materials on progress interviewing were located.

Problem Statement

The problem addressed by this project, therefore, was allowing closer coordination of didactic and experiential learning and positively affecting transfer of learning to the real setting, while determining a practical evaluation procedure. This problem was approached through development, testing, and evaluation of alternative instructional approaches to teaching employee interviewing to students and the comparison of alternative performance evaluation modes. The dependent variable was a measure of student performance on interviewing; the independent variable chosen to be manipulated was the level of structured student involvement with the materials to be learned (e.g., formulating and writing answers to questions and role-playing interviews.)
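To make the independent variable concrete: each student falls into one cell formed by crossing the unit version (examples vs. practice) with the practice-session role (role player, observer, or directed observer) within class level. The short Python sketch below shows one hypothetical way such an assignment could be generated; the group labels follow the definitions used in this study, but the routine itself is illustrative and is not the assignment procedure actually used.

    import random

    def assign_groups(student_ids, seed=1979):
        """Illustrative random assignment of students to design cells.

        The real study balanced Juniors and Seniors separately and fixed
        cell sizes in advance; this sketch only shows the general idea.
        """
        rng = random.Random(seed)
        ids = list(student_ids)
        rng.shuffle(ids)

        unit_versions = ["unit with examples", "unit with practice"]
        practice_roles = ["role player", "observer", "directed observer"]

        return {
            sid: (unit_versions[i % 2], practice_roles[i % 3])
            for i, sid in enumerate(ids)
        }

    # Example: one class level of 20 students
    for sid, cell in sorted(assign_groups(range(1, 21)).items()):
        print(sid, cell)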
Justification

Improving coordination of experiences and individualization of learning strategies, topic selection, alternative evaluation modes, and costs are the four areas of justification for this project.

Coordination of Didactic and Experiential Learning in HNF 480 and Individualization of Learning Strategies

Due to schedule constraints, didactic portions of HNF 480 were presented in a six- to eight-hour block. Meeting for such an extended period of time as a class was less than optimal due to the difficulties inherent in maintaining student and instructor enthusiasm and interest for several consecutive hours. Learning may be enhanced by shorter class sessions and a variety of instructional techniques with integration of field site and in-class activities (Lewis and Beaudette, 1977.) Different events occur in the field experience facilities each day of the week, and the students should be assigned to the halls for experiential endeavors on each of the days of the week. Development of self-instructional materials would allow more freedom to better schedule classroom activities.

Even when given extensive field experience, there are some skills for which it is difficult or impossible to arrange practice; for example, it is unlikely that a student would be allowed to perform a progress interview with an employee, particularly in an institution with a labor union. At the same time, it is an important skill for the entry-level dietitian to obtain. In other cases, it may be difficult for the instructor to evaluate a student's performance in the real setting because the instructor's presence would change the sequence of events. It is also possible that the level or quality of practice available to the student at the field site is not adequate.

Selection of Topic

Research into essential competencies of the entry-level generalist dietitian was used to determine a topic. The dietetic component of the Food Science and Human Nutrition Department at Michigan State University sent a series of questionnaires to practicing dietetic professionals, persons responsible for academic and professional preparation, and significant others such as hospital administrators to determine the necessary entry-level competencies of generalist registered dietitians. A list of several hundred important competencies was developed. Several other documents have also been developed addressing the selection and validation of competencies for professional dietetic programs (Howard and Shiller, 1977), while others have published competencies directed to foodservice management programs (FSMEC, 1975.) A review of these competencies indicates a substantial amount of similarity.

The topic chosen (progress interviewing) addresses a skill listed repeatedly as an essential competency of entry-level foodservice management dietitians. Progress interviewing was chosen since successful performance is vital, but also because the content has remained fairly stable, in contrast to initial or employment interviews, which are subject to changing legal standards and low reliability problems. Termination interviews are seldom the responsibility of an entry-level dietitian.

Progress interviewing has been found by this researcher to be difficult to teach and evaluate by the lecture and written evaluation mode currently used in HNF 480. Employee progress interviewing is also difficult to structure as a real world experience and to evaluate through field evaluations, since managers are reluctant to allow students to evaluate employees, particularly in unionized foodservices. The skill of progress interviewing requires integration of many knowledge areas; there is seldom one correct answer, since each set of circumstances is unique. Self-instructional materials and simulation appear to be possible instructional alternatives.

Alternative Evaluation Modes

Students' performance after studying the self-instructional materials was recorded via videotape for evaluation purposes. Two alternative evaluation modes were compared with the instructors' evaluation of the whole performance for reliability: students' self-assessment of whole performance, and instructors' evaluation of random sample segments of performance.

The time involved for two instructors to evaluate full-length videotaped performances is prohibitive and limits the use of videotaped simulation evaluation. Random sample segment evaluation would decrease the time necessary for videotape evaluation, and investigation into its reliability is necessary. Students will be expected as professionals to be able to evaluate themselves, and they require training and practice in self-evaluation to attain this skill. If students can learn to reliably self-evaluate, the use of videotaped simulated performances may be increased.
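The segment-sampling alternative referred to above is described concretely in the abstract: each taped interview's length is read from the VTR counter, the tape is divided into 15-unit segments, and half of the segments are drawn at random for rating. A minimal Python sketch of that rule follows; the counter length in the example is invented.

    import random

    def draw_segments(counter_length, segment_size=15, fraction=0.5, seed=None):
        """Split an interview into fixed-size VTR-counter segments and
        randomly draw a fraction of them for evaluation.

        Returns (start, end) counter ranges sorted in playback order.
        """
        rng = random.Random(seed)
        segments = [
            (start, min(start + segment_size, counter_length))
            for start in range(0, counter_length, segment_size)
        ]
        n_drawn = max(1, round(len(segments) * fraction))
        return sorted(rng.sample(segments, n_drawn))

    # Hypothetical interview spanning 240 counter units
    print(draw_segments(240, seed=7))
    # The raters would view and score only the drawn segments.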
Costs of Instruction

Didactic instruction in support of the clinical experiences, which can be tallied as field experience hours to meet ADA requirements, may also assist in allowing increased enrollment in the CSP program. In addition to increased enrollment, the self-instructional materials would allow the instructor more time for field supervision, since these materials would decrease the need for an all-day class period once a week. Preparation for, and management of, activities for an all-day class period demand a large amount of time on the part of the instructor. Although materials development would require a large time investment initially, it would relieve the instructor of some didactic instruction commitments and provide more time for personal student contact and one-on-one instruction.

These instructional materials might also be useful in traditional dietetics programs wherein some clinical practice is desirable. Materials could be designed to allow use of the written materials in conjunction with, or separate from, the practice. Since the number of similar dietetic programs is large and expanding, it is also felt that these research findings and materials would be useful to other programs across the country. As of 1979, there were 64 Coordinated Undergraduate Dietetics Programs in the United States. In addition, there were 68 internships, 25 dietetic technician programs, and 155 dietetic assistant programs where the materials might be applicable.

Summary

The general problems include: 1) a lack of coordination between didactic and real experiences, 2) poor provision for optimal transfer, 3) limited course enrollment, and 4) practical limitations on time available for student evaluation.

More specifically, questions drawn from these problems to be considered in the context of this study include:

1. With the content of instruction held constant, does the student's level of participation in instruction make a difference in student learning?
   a. Will there be a difference in performance between students who study a unit with written embedded questions and answers in comparison with students who interact with the unit via writing answers to embedded questions?
   b. Will there be a difference in performance between students who actually participate in role plays in class sessions as compared with students who observe, or observe and evaluate, the role play?
2. Will student attitudes vary depending on level of participation in the instructional unit?
3. Will there be a relationship between students' scores on content (written objective examination) and transfer (videotaped employee interview) tests?
4. Will student self-evaluation be reliable in comparison with instructor evaluation?
5. Will evaluation of random segments be reliable in comparison with whole evaluation?
6. What are the costs of the various methods in relationship to each other and to learning outcomes?

Limitations

The study was limited by:

1. A threat to external validity, since a random sample from the population was not studied. The subjects included 40 students enrolled in the GDCSP since they most closely approximated the national population of students in CUDPs wherein the materials would be most useful.
American Dietetic Association (ADA): The American Dietetic Association is the professional organiza- tion for dietetic practitioners who meet the academic, experience, and endorsement requirements for active membership. The profession of dietetics is dedicated to: the improvement of the nutrition of human beings; the advancement of the science of dietetics and nutrition; and the promotion of education in these and allied areas. ADA is responsible for establishing educational and supervised clinical experience requirements and standards of practice in dietetics, (Glossary, 1974.) Clinical experience: Education which is a com- ponent of a curriculum and is based on actual activities related to the practice of dietetics. 13 When used with the title "dietitian", refers to work in a patient or client-oriented situation (Glossary, 1974.) Clinical instructor: Faculty member, salaried by the educational institution, whose major respon- sibility is deve10ping and/or implementing some part of the professional component of the die- tetics curriculum. Coordinated Undergraduate Dietetic Program (CUDP): A formalized baccalaureate educational program in dietetics sponsored by an accredited college or university and accredited by the American Dietetic Association. The curriculum is designed to co- ordinate didactic and supervised clinical exper- iences to meet the qualifications for practice in the profession of dietetics (Glossary, 1974.) Dietary: Pertaining to food or diet. Dietetic practice: Performance of activities in fulfilling a professional position in nutritional care (Glossary, 1974.) Dietetic Registration or Registered Dietitian (R.D.): Registration is voluntary and indepen- dent from membership in the American Dietetic Association. Dietitians may become registered by: a. Meeting the education, experience, and en- dorsement requirements defined by the Com- mission on Dietetic Registration. b. Successfully completing an examination of basic knowledge related to the practice of dietetics, and c. Paying a registration fee. Registration provides a convenient measure of professional competence for use in deve10ping registration and establishing standards. In addition, it provides the advantage of a legally- protectible designation (Glossary, 1974.) Dietetics: A profession concerned with the science and art of human nutritional care, an essential component of health science. It in- cludes the extending and imparting of knowledge concerning foods which will provide nutrients sufficient to health and during disease through- out the life cycle, and the management of group feedings (Glossary, 1974.) 10. 11. 12. 13. 14 Dietetic Student: The following terms are pre- sented to clarify the terms commonly used when referring to persons enrolled in professional dietetic education programs (Wenberg, 1977.) a. Dietetic Student: A person enrolled in an accredited college or university who has declared a major in dietetics. b. Student Dietitian: A person who is enrolled in an undergraduate coordinated dietetic educational program, accredited by the American Dietetic Association to fulfill the academic educational, the didactic and super- vised clinical experience requirements to become a professionally qualified dietitian. c. Dietetic Intern: A person who has completed the academic requirements of professional education in dietetics and is enrolled in a dietetic internship, approved by ADA to ful- fill the didactic and supervised clinical experience educational standards to become a practicing dietitian. d. 
Dietetic Trainee: A person who has com- pleted the academic requirements of profes- sional education in dietetics and is enrolled in a dietetic traineeship, approved by ADA to fulfill the didactic and supervised clinical experience educational standards to become a practicing dietitian. (This term will be drOpped in 1980 when all enrollees will be called dietetic interns.) Directed Observers: Students who observed the interview role play session and concurrently evaluated the interviews using the criteria check- list. Field Experience: Assigned experiences in various placement locations to practice skills (see clini- cal experience.) Foodservice Systems Management--Systems: An array of components formed into a unified whole to perform a systematic, purposeful activity. When used in conjunction with foodservice, it would be the components that make up the produc- tion and service of food. Management: The pro- cess of achieving desired results by the effective use of human efforts and facilitating resources (Glossary, 1974.) 14. 15. 16. 17. 18. 19. The 15 Observers: Students who observed the interview role play session. Professional Education: A prescribed program of study and experience to develop competence in the practice of a profession, social understanding, ethical behavior, and scholarly concern (Glossary, 1974. Progress Interview: Formal interviews conducted with an employee to assess present job status, solve problems, and formulate objectives for performance. Role Players: Students who conducted interviews based on given scenarios in the role play session. Unit with Examples: Written unit on progress interviewing which included embedded questions for which answers were provided for students to read. Unit with Practice: Written unit on progress interviewing which included embedded questions for which students formulated and wrote answers. Hypotheses following specific hypotheses were formulated and tested by appropriate statistical methods with the .05 level of confidence established for acceptance or rejection of the hypotheses. The analysis of data followed primarily the suggestions of Chambers and Hubbard (1978) to standardize the procedures and to allow valid comparisons in educational research in dietetics. 1. The performance on the progress interview written examination of students taught by "Reading with Practice" will be significantly higher than com- parable students taught by the method of "Reading with Examples." 16 Hypothesis one is stated directionally based on sug- gestions that student processing of information facilitates retrieval (Bruner, 1961,) and Santogrossi and Colussy (1976) who state that in an undergraduate psychology course, at- tempts at mastery were more successful in unit with study guide questions. Performance on the written examination between stu- dents studying the unit with examples and students studying the unit with practice was compared with a t-test for inde- pendent samples. The written examination was subjected to an item analysis which included indices of discrimination and difficulty to allow decisions to be made regarding im- provement of the examination. Students' scores on the pre- and post-tests were compared using a t-test for matched pairs (Glass and Stanley, 1970.) 2. 
2. The performance on the progress interview practical examination of students taught by any one of the methods "Reading with Examples", "Reading with Practice", "Observer", "Directed Observer", or "Role Player" will not differ significantly from that of comparable students taught by any other of the methods.

Hypothesis two is stated non-directionally based on the research results of Holmes (1975), who found no significant difference in learning between observers of live and videotaped simulation sessions. On the videotaped post-test interviews, differences between the sample means among observers, directed observers, role players, unit with practice, and unit with examples were tested for significance by three-way ANOVA (Glass and Stanley, 1970.) An item analysis of the criteria checklist was completed to allow decisions to be made regarding improvement of the materials.

3. The measured attitudes regarding the progress interview unit of students taught by any one of the methods "Reading with Examples", "Reading with Practice", "Observer", or "Directed Observer" will be less favorable than the measured attitudes of comparable students taught by the "Role Player" method.

Hypothesis three is stated directionally since, although research has not indicated differential attitudes between participants in a simulation, it is reported that simulation improves student attitudes (Ward, undated.) Differences between reported attitudes of the test groups were reviewed and meaningful differences tested using appropriate statistics.

4. The performance of students on a progress interview written examination will not correlate positively with the students' performance on the progress interview transfer test.

Hypothesis four is stated directionally since the written objective examination measures information storage while the criteria checklist measures actual skill performance. Although it has been traditional to use written examinations to predict later performance, these appear to be two different kinds of abilities in this case. A relationship between students' scores on the written objective examination and the post-test videotaped interview was determined by Pearson Product Moment Correlation (Terrance and Parker, 1971.)
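As a concrete illustration of the analysis named for hypothesis four, the sketch below computes a Pearson product-moment correlation between a written-examination score vector and a videotaped-interview score vector using scipy; both score vectors are fabricated and do not reproduce the study's data.

    from scipy import stats

    # Fabricated percent-correct scores, for illustration only
    written_exam = [72, 85, 90, 64, 78, 88, 70, 95]
    interview    = [60, 75, 82, 58, 70, 91, 66, 80]

    r, p_value = stats.pearsonr(written_exam, interview)
    print(f"Pearson r = {r:.2f}, two-tailed p = {p_value:.3f}")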
Hypothesis seven is stated directionally since, al- though initial deve10pment costs are high, subsequent utili- zation costs would be slight. Costs have been calculated and are reported to allow appropriate comparisons. CHAPTER II REVIEW OF RELATED LITERATURE Literature in the areas of dietetic education, trans- fer, teaching alternatives, criterion-referenced testing and measurement, attitude scaling, random sample segment evaluation, and diagnosis and revision in the development of instructional materials have been reviewed. Literature in the area of employee progress interviewing has also been reviewed and will be included within the instructional unit as developed. Dietetic Education An overview of educational trends in the field of die- tetics is important as a framework and foundation for this research project. Current trends in dietetic education have been reported extensively in the literature and three areas can be readily identified as competency-based curriculum, coordination of didactic learning and site experience, and multiple strategies to individualize learning. Competency-Based Curriculum The curriculum evaluation mode for undergraduate pro- grams in dietetics has shifted from "courses" to the 19 20 "competencies" required of those seeking eligibility for membership (Report of the Task Force, 1976) with the empha- sis on specific objectives and personalization of instruction (Hart, 1978.) The essential elements of those programs have the following characteristics: 1. A focus on role-derived competencies to be demon- strated. 2. Statement of competencies in behavioral terms. 3. Publication of the competencies. 4. Use of criteria to measure the competency and stress on mastery rather than norm-referenced testing. 5. Consideration of the learner's performance rather than just knowledge. 6. Permission for the learner to progress at his own rate (Hart, 1978.) Several institutions have spent considerable resources attempting to delineate competencies for entry-level gener- alist dietitians (Howard and Shiller, 1977; Loyd and Vaden, 1977; MSU, Department of Food Science and Human Nutrition, 1976.) ADA has also established a committee to deve10p uni- form competencies as preliminary work for competency-based education across the dietetic profession (Report of the Task Force, 1978.) Coordination of Didactic Learning and Site Experience Another concept is that of coordinating clinical exper- ience with didactic experiences to promote student motivation and transfer of learning. Ideally, courses are designed to 21 give the student the necessary background of theory and prac- tical experience, to provide opportunities to apply knowledge to the real world, to allow for discovery, and to develop observational, problem-solving, and decision-making skills (Watson, 1976.) Coordinated Undergraduate Dietetic Programs are modeled after this concept and the numbers of such pro- grams are increasing. Evaluation of CUDP's is beginning and will grow in sophistication (Roach, 1978.) A three-step model of theory, practice and discussion of experience has been suggested to assist in integration of didactic and clinical learning. In dietetics, the pre- clinical study may include textbooks, articles, lectures, discussions, self-help materials, and other learning tech- niques, to allow the student to proceed to the clinical area with a plan of action (Lewis and Beaudette, 1977.) 
Multiple Strategies to InHIVidualize’Learning A variety of teaching-learning strategies have been reported in the literature, primarily focusing on clinical rather than management dietetics. Ohio State University's CUDP has de- ve10ped and evaluated case studies for computer-simulation of nutritional care delivery. These case studies are used to supplement field experiences concurrently with didactic in- struction. The researchers compared results on the simula- tions with the students' pre-professional GPA, professional courses GPA, scores on the American College Test, and 22 Myers-Briggs Personality-type Indicator. Faculty time was also recorded. Findings from the two-year pilot studies indicate no significant differences in academic learning and clinical performance when students substituted computer- simulated experiences for hospital-based experiences. Ohio State is continuing use of the simulations (Breese, 95 31, 1977.) Unklesbay (1977) discussed an instructional strategy of students conducting foodservice clinics throughout Missouri. Evaluation indicates that the students can con- tribute to nutritional care of the elderly in Title VII Nutrition Programs. The author suggests future research to evaluate the use of alternate education techniques during training programs with qualitative measurement of the stu- dents' professional accomplishments. Steed, at 31 (FSMEC Proceedings, 1975) report the de- velopment of an instructional unit simulating an aspect of labor relations related to foodservice including a contract negotiation simulation and 10 incidents. Nineteen students were involved in testing the unit. Evaluation of the ma- terials was subjective with students reporting favorable attitudes about this instructional mode. A programmed instruction unit in institutional pur- chasing for dietetic students was developed and evaluated by Pietrzyk, gt El (1978.) Forty-five dietetic students in three groups (students from CUDP's, dietetic assistant and technician programs) were involved in testing. The students 23 showed a significant increase in learning from pre- to post- test (pf; .01) and measured attitudes were favorable. Time to complete the unit was assessed. Fiel, at El (1979) report a model to evaluate skills of medical students which met two criteria: it had to be a valid measure and it had to be used with a high degree of reliability. The steps followed included: 1) selection of a skill, 2) division of the procedure into objectives, 3) subdivision of objectives into steps by task description, 4) converting the task description into an evaluation instru- ment by adding a rating scale for each task (a weighted scale was used since it was felt that some items were more important than others.) A student's score for the evaluation was the sum of points given for each task. The authors tested the model for inter-rater reliability and concluded that each evaluator should be within:r.10 of the mean of the rating scores. Results established the reliability of the model. Carroll and Monroe (1979) reviewed 73 studies on the teaching of medical interviewing. 
Conclusions regarding im- plications for teaching included: 1) instruction has gen- erally promoted significant gains in interview skills, 2) provision should be made for direct observations and feed- back on student behaviors to promote insight into complex processes, 3) standardized presentations of model behaviors may be more effective than live, spontaneous demonstrations, 4) instruction should include explicit statements of the 24 skills to be learned and evaluated since structured, Specific instruction with demonstration is more effective, and 5) this type of teaching process is "enormously time consuming". The authors make recommendations for future research includ- ing questions regarding retention of skills and comparative studies of single components of teaching methodologies be- tween alternative programs. Bell (FSMEC Proceedings, 1973 and 1975) suggests use of a variety of evaluation measures from paper-pencil examina- tions to real-world observations. This report suggests that instructors utilize interviews, criterion checklists, stu- dent self—evaluation, etc., to test a wide range of compe- tencies. Ingalsebeand Spears (1979) report the development of a criteria checklist for evaluating student performance in a dietetic foodservice management course utilizing the critical incident concept introduced by Flanagan. Twenty-six students were involved with the initial development by collecting and recording critical incidents. Students' attitudes were favorable about this type of evaluation due to its objec- tivity and continuity. In medical education, evaluation techniques have been developed utilizing standardized interview situations, video- taping, and clearly defined rating scales. Student-client interviews were taped, after the unit on interviewing was completed, and the videotaped performances were evaluated by a medical staff member, social worker, and the client who 25 had been interviewed. The study group elicited an average of 76 percent on content and 86 percent on process items while the control group averaged 47 percent and 62 percent respectively. A significant difference (pf; .01) using the Wilcoxon Rank Sum Test was found (Hutter, $3.21» 1977.) In another study (Lansley and Aycrigg, 1970), stu- dents' and faculty members' evaluations of a model of a psychiatric interview were compared for inter-rater relia- bility. The authors make an intriguing point: a basic assumption is that the better student is one whose perfor- mance most closely approximates that of the "expert" and may deter advancement in the clinical sciences. Hutter, gt a; (1977) report deve10ping checklists to evaluate allied health students' interviews in a clinical setting. The checklists were derived from the learning ob- jectives for the unit and covered data that students were required to address in the interview setting with clients. Instructors evaluated taped interviews by the students and found that students using the checklists performed better than those not using checklists. Direct observation by an instructor of a clinical en- counter with a real or simulated patient can accomplish the goal of reliable evaluation of students' skills (Barrows, gt 21, 1976). Unfortunately, direct observation or review of videotaped encounters can represent a tremendous drain on faculty time. The authors attempted to solve the prob- lem by designing a "self-assessment unit" which allows the 26 student to carry out his own evaluation. 
Evaluation of videotaped encounters with Simulated patients, multiple choice exams with answer sheets, expert models of the simu- lated encounter, and feedback from the simulated patient were used for student self-assessment. Medical students tended to be critical of their own performances and it was necessary to have the sessions taped to accurately record the events. Barrows, gt gt, (1976) view self-evaluation as a critical activity throughout the physician's professional life. Pacoe and co-workers (1976) state that no one is bet- ter able to judge some aspects of the interview such as accurate empathy and non—possessive warmth than someone in the client's position. A training model to provide feed- back from simulated clients was developed. The Department of Psychiatry at Michigan State Univer- sity has attempted to develop new student performance eval- uation modes (Muslin, gt gt, 1974) as it became apparent that no single mode of assessment would adequately measure the diversity of skills expected of the student. The four varying procedures used included testing of cognitive objec- tives, behavior observations, interview skills and self- evaluation. Assessments included extensive use of video- taped behavior to provide a standard simulus and to reduce variability inherent with live patients. The use of video- taped behavior also enabled repeated use of the learning materials and increased the reliability of the ratings. 27 Videotape also lends itself to self-instruction since the student could view the tape independently. The deve10pment of rating instruments for higher level behaviors was prob- lematical due to the difficulty of 1) getting good models of behaviors, 2) increasing inter-rater reliability, 3) setting criterion levels for performances, 4) getting faculty to subject their observational skills and biases to colleague scrutiny, and 5) the non-quantifiable nature of some behavior. The rating forms developed by Muslin, gt_gt (1974) were based on objectives and included a continuum of levels of perfor- mance. Some problems were reported in making the test "fair" to students since different patients were assigned to each student. In general, students were uneasy about live obser- vers and raters. In summary, since ADA has recommended a competency- based curriculum for dietetic programs, many institutions have deve10ped on this model. CUDP's are also required to demonstrate a close coordination between didactic and ex- periential learning. One method which has been recommended and used extensively in dietetic education to allow closer coordination, individualization of experiences, and better preparation for field experiences, is self-instructional materials. Evaluation instruments are continually being deve10ped and tested. Criteria checklists have been tested for evaluation of actual performances; they have also been useful for evaluation of videotaped performances. Assess- ment of inter-rater reliability is recommended. Student self-assessment units have been utilized with success. 28 Transfer Learning is brought about to establish capabilities that will be of lasting usefulness to the individual, i.e., making it possible for an individual to perform in a situa- tion not identical to the learning situation but similar to what is learned for example, applying classroom learning in a field experience site. This is termed "transferability" and can be called lateral transfer since it refers to gener- alization of the skill across a broad set of situations. 
The transfer, and therefore usefulness, of learning will be increased if it is practiced in as wide a variety of situa- tions as possible when it is learned (Gagne, 1965.) Bruner (1972) recommends inducing active participation on the part of the learner and creating a challenge to solve problems to promote transfer of learning. Bruner (1961) states that active student processing of information encour- ages differentiation and organization of the information more than if it is passively received. If information is stored and organized in terms of a person's own interests and cog- nitive structures, there is more chance of it being acces- sible when needed. Goldstein and Sorcher (1974) have recommended a four- step model for transfer training.‘ The first step is to pre- sent the best possible demonstration of the desired behavior for the learners to observe. The second step is practice of the behavior by the learners. It is viewed as important to 29 organize the training setting to focus on the Specific tasks to be performed. Step three is providing for reinforcement wherein group or individual feedback may be given to the learners. The fourth step is planning for transfer to the real setting by describing some possible problems, limita- tions, etc., which may be encountered and discussing ways of dealing with them. Gropper (1975) defines transfer as the correct iden- tification of a new stimulus which has not been encountered during instruction and the making of a correct alternative response to it which has not been practiced during instruc- tion. He described the skills as 1) being able to see the similarity between the non-encountered Stimulus and other stimuli belonging to the same class, 2) being able to see the Similarity between the non-practiced response and other practiced or non-practiced correct alternative responses. Methods for increasing transfer effectiveness of materials are recommended: 1) provide recognition practice involving pairs of stimuli belonging to the same class, 2) use of dia- grams to call attention to similarities, 3) provide visual or verbal cues to facilitate difficult generalizations, 4) provide model examples varying in similarity, and 5) pro- vide rules which identify relevant and critical properties. Davis, Alexander and Yelon (1974) refer to transfer situations as referent situations or where the student will need what he is learning. They describe a referent Situa- tion test of performance which closely approximates the real setting. 30 Haslerud and Meyers (1958) tested the hypothesis that principles derived by the learner solely from concrete in- stances will be more readily used in a new situation than those given to him in the form of a statement of principles and an instance. Two groups of college students were given the same task. Group A received rules to follow while Group B received only examples of the completed task. When both groups were tested initially, Group A performed better. How- ever, when both groups were tested a week later, Group B performed better. To summarize, it is important that students be able to make an application of knowledge in real settings and en- hancing transfer will assist in accomplishing this applica- tion. Active participation, problem-solving, practice with corrective feedback, and examples of the task, have been recommended to improve transfer. Simulation can encompass a variety of these characteristics; role play and case study are often a part of simulation. 
Teaching Alternatives

Alternative teaching strategies have been researched and recommended as possessing certain advantages. Simulation has been advanced as a methodology for increasing transfer of learning and as such would be a useful strategy for this project. Case study and role play require students to use skills in an applied fashion and also may tend to increase transfer by allowing students to practice in a variety of situations.

Simulation

Simulation can be described as a selective representation of reality. It emphasizes crucial aspects of a real situation and focuses the student's attention, while eliminating extraneous, complicating factors (Davis et al., 1974). Simulation, which produces a close approximation of actual events or processes, can represent a highly effective alternative method of eliciting complex skills or behaviors and allows for practice of those skills. It requires active participation of the respondent (Maatsch, 1974). Simulation has been used with apparent enthusiasm and effectiveness and has been reported in the literature of education, business, medicine, and allied health education including dietetics (Gohring, 1978; Inbar and Stol, 1972; McLean, 1978; Gines et al., 1978).

Maatsch (1975) describes a comparison of teaching a simple task using the various methods of 1) lecture manuscript, 2) programmed instruction, 3) nominal lecture, 4) seminar, 5) observation groups, and 6) simulation. Performance on recall, problem-solving, application, and recognition was tested. Simulation consistently showed the best results; nominal lecture the poorest. With subsequent tests thirty days later, it was found that the method did not differentially affect forgetting. Similar performance results were found for active participants and observers of a simulation. The simulation method enabled the students to pace themselves to allow time to process the information. These students also received immediate feedback for incorrect responses and, therefore, obtained a higher performance level.

Holmes (1975) describes the difference between achievement when the student is a live observer of a simulation and when he is an observer of a simulation via videotape. Observer performance and satisfaction were not significantly different for live vs. televised observation. The author also found that observer performance could be improved by viewing simulation participants with relatively low aptitude for the learning task. This allowed the observer to hear more instructor feedback and also provided more time for information processing.

Muslin et al. (1974) have suggested simulation as an evaluation tool and have used simulation in medical education to test interpretive skills with simulated clinical and laboratory data, problem-solving skills and clinical judgment using simulated problems in patient management, and interpersonal skills and attitude by simulated interviews and conferences. These authors state that simulation has some disadvantages as an evaluation tool since certain aspects of reality or human behavior cannot be economically simulated or appropriately measured by this method. For example, recall of factual information is more economically and directly measured by objective testing.
Advantages of simulation as an evaluation tool include: 1) the problems more closely correspond to reality, 2) the focus is on the elements of primary concern, 3) the tasks may be standardized for all examinees, 4) the criteria for performance may be specific, detailed, and predetermined, 5) the risk to real patients is not a factor, and 6) the learning is enhanced through prompt, specific feedback.

Ward (undated) states that the evaluation mode for a simulation will depend on the objective or learner outcomes desired at the conclusion of the instruction. If the outcome is to be an observable skill, raters such as instructors, or possibly other students, can evaluate performance with a checklist specific to that skill. Other researchers (Towar and Vosburgh, 1976; Fiedler, 1977) have reported a method of training raters in order to develop inter-rater reliability. The degree of acceptable reliability was established at the discretion of the researcher, depending on the difficulty or complexity of the skill being rated.

Ward (undated) reports that the evaluation of instructional games or simulations should take account of three aspects of the instructional materials and experience. First, there must be a concern with what has been learned in terms of content information. A second evaluation area is motivation, or the students' affective response to the instruction, since one of the reasons for use of instructional simulation is to increase the interest level of the learner to enhance learning. The third evaluation aspect is the concern for transfer of learning. Although traditional education procedures usually have not tested transfer, this can be the primary reason for using instructional games and simulations and should be evaluated. Learning of information may be evaluated using a written pre-test, post-test procedure. Motivation can be evaluated by an observer to the simulation, or the learners may be asked to evaluate their own level of interest. Transfer evaluation is most effectively done in longitudinal studies after learners' entry into the real world setting; while ideal, this is impractical. It was suggested that the skills to be learned be evaluated in a different but similar simulation setting.

Case Study and Role Play

Role playing and written case studies require the student to use the skills or make an application of the theory to a real problem. The technique of role playing comes from the work of Moreno (1953). Maier et al. (1975) stipulate that the objective of role playing is to promote insight into interpersonal relationships by asking one to play the role of another. According to these authors, role playing requires the person to carry out an action or idea, permits practice in carrying out that idea or action, promotes attitude change by placing persons in specified roles where it teaches one to be sensitive to the feelings of others, permits a better understanding of the impact of feelings, and enables one to find personal faults in a low-threat setting where training to control feelings and emotions may be obtained.

The case study approach to human relations was initiated at Harvard University (Maier et al., 1975). Case study may be used to discourage snap judgments about people and behavior and to limit the practice of looking for the "correct" answer.
The authors also state that the case study illustrates how the same set of events can be viewed from different perspectives, while it trains one to discuss situations with the emphasis placed on practical thinking. The authors felt that role playing a case study would combine these benefits, but the cases should include a minimum of extraneous detail, produce results that are generalizable to other similar situations, and exhibit interesting and challenging experiences.

Simulation which is a close approximation of actual events can elicit complex skills and allow for practice of those skills. Simulation has further been recommended as a methodology to evaluate complex skill performance. Case study and role play can be effective components of simulation since they also require students to make an application of knowledge and allow for practice in a variety of situations.

Criterion-Referenced Testing and Measurement

Criterion-referenced testing is appropriate to this project since the evaluation is of a simulated performance of a progress interview, rather than of the student's information base. Although measures of criterion-referenced test validity and reliability are not firmly established, some measures have been recommended.

A criterion-referenced test is one constructed to yield measurements that are directly interpretable in terms of specific performance standards. The usual norm-referenced test is one that yields test scores that discriminate between individuals on the trait being measured. As developed by McClelland (1976) at the Institute for Competence Assessment, criterion-referenced testing has the following traits:

1. Measures use of, rather than storage of, information.
2. Uses a format closely resembling performance-related situations, and
3. Measures abilities causally related to successful performance rather than being merely correlated with the performance.

Glaser (1963) and Popham (1975) were the first to introduce and to popularize the field of criterion-referenced testing. The purpose was to provide the kind of test score information needed to make decisions arising in objective-based instructional programs. Criterion-referenced tests are currently used to monitor individual progress in objective-based educational programs, to diagnose learning deficiencies, to evaluate educational and social action programs, and to assess competencies on various certification and licensing examinations.

Popham and Husek (1969) note that test score reliability is dependent on test score variability. Since it is not uncommon to observe rather homogeneous distributions of criterion-referenced test scores, they feared that test developers would scrap their tests because of low reliability scores. These authors suggest that test developers should expect low classical reliability estimates for such tests, since the restricted score variance makes low values inevitable; but no alternatives were suggested at that time. Haladyna (1974) suggests that test developers "create" test score variance by "pooling" two groups of learners (those expected to be masters and those expected to be non-masters, perhaps a group of examinees prior to receiving instruction), then apply one of the classical reliability approaches and interpret the results in the usual way. Livingston (1972) suggests that the purpose of a criterion-referenced test is to discriminate each examinee's estimated domain score from a cut-off score.
The author indicates that it is then possible to define variation in estimated domain scores and domain scores about the "cut-off" score, rather than about the mean domain score, which is the procedure in classical test theory. The farther the group mean domain score is from the cut-off score, the more reliable the scores are said to be. Shavelson, Block and Ravitch (1972) suggest that reliability information is needed on each subset of items measuring an objective included in a test when test items are arranged into clusters according to the objective being measured.

Carver (1970) proposes two procedures for assessing reliability of criterion-referenced tests. The first procedure requires the administration of the same test to two comparable groups, and a comparison of the percentages of examinees that were classified as masters. The second procedure requires the administration of two parallel tests to the same group, and a comparison of the percentages of "masters" on the two tests. With either procedure, the more comparable the percentages, the more reliable the tests are said to be. Carver's procedures were based on the replicability of distributions, while the usual concept of reliability in mental testing is based on the replicability of individual scores, and so they would be a weak form of evidence for criterion-referenced test reliability.

Hambleton and Novick (1973) suggest that the reliability of mastery classification decisions should be defined in terms of the consistency of decisions from two administrations of the same test or parallel forms of a test.

Several approaches to the determination of test length have been reported (Novick and Lewis, 1974; Fhaner, 1974; Millman, 1972 and 1973; Wilcox, 1976). The length of a criterion-referenced test is related to the usefulness of the test scores obtained from the test. Short tests typically produce imprecise domain score estimates and lead to mastery decisions that prove to be inconsistent across parallel-form administrations or test-retest administrations. When criterion-referenced tests are used to assign learners to mastery states, the problem of determining test length is related to the number of classification errors one is willing to tolerate. One way to assure low probabilities of misclassification is to make the test very long; this is usually not feasible.

Millman (1973) recommended consideration of the following factors in setting cut-off scores for assigning learners to mastery states:

1. Performance of Others: Set the cut-off so that a pre-determined percentage of a group of examinees pass.
2. Item Content: Have a set of experts inspect items in a test to determine the minimum number of items that learners must answer correctly in order to be considered masters.
3. Educational Consequences: Determine the cut-off score that maximizes the relationship between test performance and some criterion measure, such as test performance on a subsequent objective to which the first is a prerequisite skill.
4. Psychological and Financial Costs: Set a low cut-off score when remediation costs are high (Millman, 1974).
5. Errors Caused by Guessing and Item Sampling: Apply a correction factor to either the cut-off score or the learner's test score.

Block (1972) studied the degree to which varying cut-off scores during segments of instruction influenced end-of-learning criteria. Six criterion variables were selected for study: achievement, time needed to learn, transfer, retention, interest, and attitude.
The results revealed that groups subjected to higher cut-off scores during instruction performed better on the achievement, retention, and transfer tests. On the interest and attitude survey there was a trend for interest and attitudes to increase up to the .85 group, and then to level off. The .75 group fared poorly on the transfer, interest, and attitude measures, suggesting some extra-experimental influence. The results seem to indicate that different cut-off scores may be necessary to achieve different outcome measures.

Fremer (1974) outlined procedures to increase the validity of criterion-referenced tests under the following topics:

A. Preparation of Objectives: "Amplified" objectives may be more useful than behavioral objectives. An amplified objective is an expanded statement of an educational goal which provides boundary specifications regarding testing situations, response alternatives, and criteria of correctness.

B. Generation of Test Items: Items are generated for domains, utilizing principles of item writing used in norm-referenced achievement test construction.

C. Item Analysis: This includes judgments of test items by content specialists. The judgments are made concerning the extent of "match" between test items and the domains they are designed to measure. Two questions are addressed: are the domain specifications clearly written, and is there agreement among content specialists that a set of items adequately samples a particular domain? Another approach is to apply empirical item analysis techniques that have been used frequently in norm-referenced test construction.

Rovinelli and Hambleton (1977) asked content specialists to rate test items relative to a set of objectives. Their three possible ratings of a test item had the following meanings: definite feeling that an item is a measure of an objective, undecided about whether the item is a measure of an objective, and definite feeling that an item is not a measure of an objective. The authors describe a second procedure involving the use of a rating scale. Content experts were asked to rate the appropriateness of test items as measures of objectives. The ratings were tallied and averaged to determine a rating.

Item analysis can also be accomplished by Cronbach's (1971) duplication method. Two teams of equally qualified item writers and reviewers work independently in developing a criterion-referenced test. If domain specifications are clear, and sampling representative, the tests should be equivalent. Empirical methods of item analysis may provide more information. Discrimination indices may provide useful information for detecting "bad" items. Henrysson and Wedman (1974) argue that even carefully prepared domain specifications and precise item generation specifications never completely eliminate the subjective judgments that influence test construction.

D. Item Selection, Test Length, Cut-Off Scores: This step includes selecting a sample of test items from the population of test items. Test length and cut-off scores have been discussed above.

E. Reliability and Validity Studies: These are completed after selection of items, following guidelines discussed above.

A criterion-referenced test measures use of information and uses a format closely resembling performance-related situations. However, statistical measures of criterion-referenced tests are only now being developed. Recommendations have been made for determining test length, assigning cut-off scores, and increasing validity.
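The decision-consistency approach to criterion-referenced reliability described above (Carver, 1970; Hambleton and Novick, 1973) can be illustrated with a brief computational sketch. The scores, the cut-off, and the group size below are hypothetical rather than data from this project, and the function names are illustrative only.

    # Decision-consistency check for mastery classifications on two parallel
    # criterion-referenced test forms (after Carver, 1970, and Hambleton and
    # Novick, 1973).  All scores and the cut-off are illustrative values.

    def mastery(scores, cutoff):
        """Classify each examinee as a master (True) or non-master (False)."""
        return [score >= cutoff for score in scores]

    def decision_consistency(form_a, form_b, cutoff):
        """Proportion of examinees given the same classification on both forms."""
        a = mastery(form_a, cutoff)
        b = mastery(form_b, cutoff)
        agreements = sum(1 for x, y in zip(a, b) if x == y)
        return agreements / len(form_a)

    form_a = [30, 25, 34, 28, 36, 22, 31, 27]   # hypothetical form A scores
    form_b = [31, 24, 33, 30, 35, 25, 29, 26]   # hypothetical form B scores
    cutoff = 28                                  # hypothetical mastery cut-off

    print("Proportion masters, form A:", sum(mastery(form_a, cutoff)) / len(form_a))
    print("Proportion masters, form B:", sum(mastery(form_b, cutoff)) / len(form_b))
    print("Decision consistency:      ", decision_consistency(form_a, form_b, cutoff))

The more nearly the two classifications agree, the more reliable the mastery decisions are said to be; a comparison of the two percentages of masters corresponds to Carver's distribution-based procedure.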
Empirical Item Analysis

In relation to competency-based materials, item difficulty and item discrimination statistics may be used primarily for improving objective examinations (Douglass and Olson, undated). Items with inappropriate difficulty levels and/or low discrimination may reflect a poor item on a norm-referenced exam, but not necessarily on a competency-based examination. Therefore, judgment is required to eliminate or improve items. An item can fail to act as desired for one of three reasons:

1. The item may be faulty; it can contain clues unrelated to relevant knowledge that hint at the correct answer; it can be ambiguously or poorly worded; or it can fail to reflect instruction.
2. The instruction can be misleading or inadequate. The knowledge that the item is intended to measure may not have been learned.
3. The instructional objectives can be inadequately specified. An item may not be measuring well because no specific knowledge area serves as a basis for the item.

It is important to know that items should never be accepted or rejected solely on the grounds of item analysis. The instructor's good judgment is used to write appropriate items and it should be used to revise them. Poor items should generally be revised rather than discarded. Four relatively simple and straightforward item statistics are useful for evaluating criterion-referenced test items (Douglass and Olson, undated):

A. The Pre-test Index of Difficulty: This is simply the percentage of the pre-test or uninformed group answering the item correctly. The smaller this index is, the better the item. If a high percentage of the uninformed group of students can answer an item correctly, there is probably a clue in the item, or the students may already have the knowledge tested by the item before instruction. Close examination of the item should reveal which situation exists.

B. The Post-test Index of Difficulty: This is the percentage of students in the post-test or informed group that answer the item correctly. This index should be as high as possible. After instruction, most of the students should have the knowledge which was taught. If the post-test index is low, the item may be misleading or ambiguous, or the instruction may not be adequate in that area.

C. The Pre-test Post-test Discrimination Index: This is the post-test difficulty index minus the pre-test difficulty index. It varies from 1.00 to -1.00. This index should be fairly high since it is desirable for the post-test index to be high and the pre-test index to be low. The discrimination index measures the group gain from pre-test to post-test. It will be low if the item was either easy for the uninformed group or difficult for the instructed group. Again, a low index can be caused by a faulty item or weak instruction.

The authors also state that students' patterns of response can provide useful additional information for multiple-choice items. The pattern of response consists of the number of students choosing each of the alternatives in the multiple-choice item on the pre-test and the post-test.

Item analysis can aid in improving objective examinations. Judgment is still required, as inappropriate difficulty levels or low discrimination may not necessarily reflect a poor item on a competency-based examination. Item analysis statistics include the pre-test and post-test indices of difficulty, the pre-post discrimination index, and the multiple-choice pattern of response.
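The indices defined by Douglass and Olson can be computed directly from scored responses. The following minimal sketch assumes each response is coded 1 for correct and 0 for incorrect; the pre-test and post-test response records shown are hypothetical.

    # Pre-test and post-test indices of difficulty and the pre-post
    # discrimination index for one test item (after Douglass and Olson).
    # The response records below are hypothetical.

    def proportion_correct(responses):
        """Proportion of a group answering the item correctly (1 = correct)."""
        return sum(responses) / len(responses)

    def item_statistics(pre_responses, post_responses):
        pre_difficulty = proportion_correct(pre_responses)    # smaller is better
        post_difficulty = proportion_correct(post_responses)  # larger is better
        discrimination = post_difficulty - pre_difficulty     # group gain, -1.00 to 1.00
        return pre_difficulty, post_difficulty, discrimination

    # Hypothetical item: 20 uninformed (pre) and 20 informed (post) students.
    pre = [0] * 14 + [1] * 6    # 30 percent correct before instruction
    post = [0] * 3 + [1] * 17   # 85 percent correct after instruction

    pre_d, post_d, disc = item_statistics(pre, post)
    print(f"Pre-test index of difficulty:  {pre_d:.2f}")
    print(f"Post-test index of difficulty: {post_d:.2f}")
    print(f"Pre-post discrimination index: {disc:.2f}")

The pattern-of-response tally for multiple-choice items is a simple count of the students choosing each alternative and is not repeated here.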
Attitude Scaling

An introduction to attitude scaling is important to this project since one of the instruments used was a student attitude survey. An attitude is a predisposition to think, feel, perceive, and behave toward a referent or cognitive object (Kerlinger, 1973). There are three major types of attitude scales discussed in the literature.

The Thurstone-type or Equal-Appearing Interval Scale places the individual along an agreement continuum, but also scales the attitude items by importance (Butcher, 1956). This type of attitude scale may not give as much information as the Likert scale because of its dichotomous response mode (Isaac and Michael, 1977).

The Guttman-type or Cumulative Scales include a relatively small number of homogeneous items measuring only one attribute. This scale gets its name from the cumulative relationships between the items and the total scores of individuals and is appropriate when only one attribute is involved (Butcher, 1956).

The Likert-type or Summated Rating Scales contain a set of items considered equal in attitude or value loading. Subjects can respond to the items with varying degrees of intensity on a scale ranging between extremes. The scores are summed and averaged to find the total score. Summated rating scales appear the most useful in behavioral research since they are easier to develop and yield about the same information as the more laboriously constructed equal-appearing interval scale. Greater variances have been obtained with Likert scales (Butcher, 1956) and comparisons have shown that Likert scales produce higher coefficients of reliability (Robinson et al., 1968; Maranell, 1974).

Likert (1932) stipulates that in the construction of summated scales: 1) each statement should be of such a nature that persons with different points of view will respond to it differentially; 2) items cannot deal with statements of fact, only with expressions of desired behavior, and should be written to deal with present rather than past attitudes (the word "should" is a convenient way of stating the proposition so that it involves a desired behavior); 3) each item should be clear, concise, and straightforward, with a simple vocabulary; 4) double-barreled statements should be written as two separate statements; 5) it is desirable to word the statement so that the modal reaction falls approximately in the middle of the possible responses; and 6) statements should be worded so that about half of the items have one end of the continuum as the expected response and the other half have the expected response at the other end.

Kerlinger (1973) recommends that when constructing a Likert scale, one should prepare and select more statements than he is likely to use, since after testing with a group some statements may be found unsatisfactory. For scoring purposes a numerical value must be assigned to the possible alternative responses. The lower number can be assigned to either end. Split-half reliability can be determined by correlating the sum of the odd statements for each individual against the sum of the even statements. Item analysis can be done by calculating the correlation coefficient for each item. If a negative correlation is found, it indicates that the numerical values are not properly assigned and that the one-to-five ends should be reversed. If a zero or very low correlation is found, it indicates that the statement fails to measure what the rest of the statements measure; it is undifferentiating and contributes nothing to the scale.
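A minimal sketch of the scoring and item-analysis steps Kerlinger describes for a summated scale follows. The four-statement scale, the five response records, and the choice of which statement is reverse-worded are all hypothetical; the split-half and item-total correlations follow the procedure given in the text.

    # Scoring a summated (Likert-type) attitude scale: reverse-score items
    # stated in the opposite direction, sum the item scores, then check
    # split-half reliability and item-total correlations.  All data below
    # are hypothetical.  Requires Python 3.10+ for statistics.correlation.
    from statistics import correlation

    SCALE_MAX = 5
    reverse_items = {1}  # index of a hypothetical negatively worded statement

    # rows = respondents, columns = statements, responses on a 1-5 scale
    raw = [
        [4, 2, 5, 4],
        [3, 3, 4, 3],
        [5, 1, 5, 5],
        [2, 4, 2, 2],
        [4, 2, 4, 4],
    ]

    def score(respondent):
        """Reverse negatively worded items, then return per-item scores."""
        return [(SCALE_MAX + 1 - r) if i in reverse_items else r
                for i, r in enumerate(respondent)]

    scored = [score(person) for person in raw]
    totals = [sum(person) for person in scored]

    # Split-half reliability: sum of odd-numbered vs. even-numbered statements.
    odd_sums = [sum(person[0::2]) for person in scored]
    even_sums = [sum(person[1::2]) for person in scored]
    print("Split-half correlation:", round(correlation(odd_sums, even_sums), 2))

    # Item analysis: correlate each item with the total score; near-zero or
    # negative values flag undifferentiating or mis-keyed statements.
    for i in range(len(raw[0])):
        item_scores = [person[i] for person in scored]
        print(f"Item {i + 1} item-total correlation:",
              round(correlation(item_scores, totals), 2))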
Poppleton and Pilkington (1963) compared the measurement of one particular attitude by four scales. The Thurstone, Guttman, Likert, and Guilford scales all demonstrated reliability. The Likert scale exhibited a high degree of validity and was less difficult to use.

Robinson et al. (1968) list other criteria for attitude scales:

1. A comprehensive set of questions relevant to the topic.
2. Item analysis shows items significant at the .05 level.
3. The scale avoids "response set", i.e., agreement with attitude statements for reasons other than the content of the items. Two methods suggested to avoid this are to include interesting and pleasant statements and to occasionally list the responses from one to five in reverse order.

Of the three major types of attitude scales discussed in the literature, the Likert-type scale is simplest to develop, and research indicates that it can yield about the same information as other scales. Likert (1932) describes parameters for construction of these scales.

Random Sample Segment Evaluation

The random sample evaluation of videotaped performances was initiated by reports in the literature of work sampling for employee evaluations. This type of work sampling is a quantitative technique for measuring and analyzing activities, primarily applied to industrial settings and employee evaluation. The technique requires the use of random, short observations and is based on the law of large numbers, which states that the distribution of random samples tends to resemble the total distribution from which the samples are drawn. Each minute of the total population of minutes must have an equal chance of being drawn in random sampling. The accuracy of the work sample technique was compared with that of continuous time studies using 14 industrial operations, and the average difference between the two methods was 2.5 percent (Wise and Donaldson, 1961). Random sample segment evaluation was utilized in a foodservice operation to analyze student employee payroll requirements between kitchen facilities, and the results were used to make recommendations for improvements in services (Wilson, 1956).

The theory that the distribution of random samples resembles the total distribution from which the samples are drawn has been applied to employee evaluation in industrial settings and can be accurate. An application to student performance evaluation remains to be tested.
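The work-sampling logic described above can be sketched briefly: every minute of the period being rated has an equal chance of being drawn, and the proportion of sampled minutes devoted to an activity estimates the proportion in the complete record. The shift length, sample size, and simulated activity log below are hypothetical.

    # Random work-sampling sketch: draw minutes at random from the period
    # being observed, then estimate the proportion of time spent on the
    # activity of interest from the sample (law of large numbers).
    # Shift length, sample size, and the simulated log are hypothetical.
    import random

    random.seed(1)

    SHIFT_MINUTES = 480        # one eight-hour shift
    SAMPLE_SIZE = 40           # number of random one-minute observations

    # Simulated minute-by-minute record: True = engaged in the task of
    # interest, False = other work (stands in for a real observation log).
    log = [random.random() < 0.30 for _ in range(SHIFT_MINUTES)]

    sampled_minutes = random.sample(range(SHIFT_MINUTES), SAMPLE_SIZE)
    estimate = sum(log[m] for m in sampled_minutes) / SAMPLE_SIZE
    actual = sum(log) / SHIFT_MINUTES

    print(f"Estimated proportion on task: {estimate:.2f}")
    print(f"Proportion in full record:    {actual:.2f}")

The same reasoning underlies the random segment evaluation of videotaped interviews examined later in this project: short, randomly chosen segments are assumed to resemble the whole performance.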
Diagnosis and Revision in the Development of Instructional Materials

After testing and evaluation of the instructional materials developed, some revisions will be necessary to improve them. Literature in the area offers guidelines for diagnosis and revision to improve materials.

Revision consists of the introduction of, alteration of, or substitution of display, response, or feedback mechanisms which were used in development. Gropper (1975) suggests basing revisions of instructional materials on student achievement data, so that testing, evaluation, and revision produce a significant improvement in learner achievement. This author reports four data sources for evidence on which to base revision decisions:

1. The developers' own characterization of the program in testing.
2. The results of students' performance.
3. Student characterization of the program.
4. Comparison of program results for differing groups.

If the materials are favorably evaluated, the developer can proceed with revision on an ad hoc basis, revising individual tasks which student error or personal inspection reveal to be faulty, without negatively altering results. If the materials are not favorably evaluated, diagnostic efforts must be more sophisticated and extensive.

Student errors during instruction and on criterion-referenced tests have been used to guide program revision. Front-end analysis of objectives identifies what students are expected to learn; diagnosis for revision should identify what students have achieved or failed to achieve. If a student fails to generalize or transfer from the learning situation to the testing situation, certain program omissions are likely: too few examples were used or they were insufficiently varied (Gropper, 1975).

Summary

A review of literature in the area of dietetics indicates testing and evaluation of a wide range of instructional technologies; however, there is a lack of information and research in the area of progress interviewing applied to institutional foodservice. Client interviewing in the clinical dietetics setting is well represented in the literature and some instructional development has been accomplished (Breese et al., 1977). The focus and content of this type of interviewing is very different from employee progress interviewing. Instructional units dealing with communication skills and the interviewing process are available, but do not deal with the content area of employee evaluation (Welsch, Adam and Fitz, 1979). The content areas of the employment interview and employee evaluation are discussed to some extent, but no self-instructional units were found (West et al., 1978). Since this skill is considered essential, development of materials is necessary.

From the variety of instructional strategies, methodologies effective in increasing transfer would be most advantageous in the context of this project, since employee interviewing is a skill that the entry-level dietitian will be required to achieve. A variety of instructional alternatives is recommended: individualized materials to allow students to study at their own pace and at appropriate times in relation to the field experience, simulation to allow students to practice the skills in a variety of situations, and case studies and role play to increase transfer. These materials should be developed on a competency-based, criterion-referenced model.

Until quite recently, there have been few reliable guidelines for criterion-referenced test construction, assessment, and test score interpretation, and this has hampered the use of these tests. Standard procedures for testing and measurement within a norm-referenced framework have become well known, but work is needed regarding criterion-referenced tests. A basic difference is that criterion-referenced tests are not constructed specifically to maximize the variability of test scores, whereas a norm-referenced test is so constructed. Since the distribution of scores on a criterion-referenced test will tend to be more homogeneous, there will be less of the variance which is critical to the usual interpretation of norm-referenced statistical tests. The use of empirical item analysis statistics has been recommended to improve criterion-referenced tests. Criteria checklists have been recommended to evaluate videotapes of student performance.
Random sampling has been applied to employee evaluation in industrial settings and compared with standard employee evaluation. It may be a useful technique in supervisor observation of employee performance to improve employee evaluation while decreasing the amount of time spent in evaluation.

After completion of testing and development of materials, analysis of student errors has typically provided researchers and developers with diagnostic evidence to guide program revision. Front-end analysis of objectives identifies what students are expected to learn; diagnosis for revision should identify what students have achieved and have failed to achieve.

Using the literature as a guide, the specific project addressed in this research was the development, testing, and evaluation of an instructional unit on employee interviewing, including self-instructional and simulation components, to allow: 1) students to use the materials in conjunction with appropriate experiences in field sites, and 2) facilitation of transfer of the skills to real world settings. Evaluation instruments are criterion-referenced. Comparisons were made between performances of students who had varying levels of active participation in the learning process. Evaluation was conducted in three modes to allow comparisons between: 1) instructors' whole evaluation, 2) students' self-assessment, and 3) instructors' random segment evaluation.

CHAPTER III

METHODOLOGY

Methodology is described under the headings of design, preliminary procedures, and research procedures conducted to carry out the planned design.

Design

The research has been divided into two segments to describe the design developed. The first part was evaluation of the self-instructional unit and simulation practice sessions, with student performance scores as the dependent variable. The second part was comparison of the three modes utilized to evaluate student performance; again, performance scores were the dependent variable.

Evaluation of the Instructional Materials

Test groups were formulated to vary the amount of required interaction with the instructional materials in order to compare effectiveness of learning as measured by a score on a post-test simulated employee interview. Independent variables included, for the self-instructional materials: 1) a written unit with practice, wherein students were required to formulate and write answers to embedded questions, and 2) a written unit with examples, wherein students were required to read embedded questions with answers provided; and, for the practice session: 3) observers, who watched the role play practice, 4) directed observers, who observed and evaluated the role play practice utilizing a criteria checklist, and 5) role players, who conducted interviews in the practice session. Table 1 describes the test group assignments. The dependent variable was student performance on a post-test simulated employee interview as measured by a criteria checklist. Figure 1 displays the two by two by three-way analysis of variance (ANOVA) design.

TABLE 1
Test Group Assignments (Student Enrollments, Academic Terms 1978-1979: Fall, Winter, Spring)

1. Unit with Examples, Observer:           3 juniors, 3 seniors
2. Unit with Examples, Directed Observer:  3 juniors, 3 seniors
3. Unit with Examples, Role Player:        4 juniors, 4 seniors
4. Unit with Practice, Observer:           3 juniors, 3 seniors
5. Unit with Practice, Directed Observer:  3 juniors, 3 seniors
6. Unit with Practice, Role Player:        4 juniors, 4 seniors

FIGURE 1
Two by Two by Three-Way ANOVA Design
[The diagram crosses unit with examples vs. unit with practice, junior vs. senior students, and role players vs. observers vs. directed observers; it is not reproduced legibly in this copy.]

Comparison of Three Evaluation Modes

Student performance on the post-test simulated employee interview was videotaped to preserve the performance for several evaluation procedures. Once again, the score received by the student was the dependent variable. Independent variables included: 1) instructors' evaluation based on whole performance evaluation, 2) student self-assessment of performance, and 3) instructors' evaluation based on random segment evaluation.

Preliminary Procedures

Progress Interview Unit Development

The progress interview unit was developed beginning with a list of learning outcomes which the learner should achieve, with a final objective to develop the ability to plan and conduct a simulated progress interview (see objectives in Appendix B). Test questions were written for the unit to include many examples of real world problems which required the student to think about and plan the various steps of the interview model. Background information and theory were written as the first part of the unit, followed by the interview model and a written description of each step of the model, with situationally specific examples for each step.

A hierarchical analysis of the content of the unit (following Hiob's, 1978, self-instructional module on preparing modules) was completed and is shown as Table 2. Hiob's model is based on Gagne's domains of learning, Table 3, which indicate progressively more complex types of learner behaviors. This analysis can be helpful to the instructional developer in sequencing information to facilitate learning.

Davis, Alexander, and Yelon (1974) describe a systematic design process for instructional materials development. The first stage is describing the current status of the learning system, including the purpose, resources, students, and teacher qualities. Second is deriving and writing learning objectives which are precise and unambiguous. The third stage is planning an evaluation system to determine if objectives are met. Task description (determining the steps involved in performing the task) and task analysis (determining the types of learning involved in a task) are completed to guide decisions about sequence and extent of information to include. A flow

TABLE 2
Hierarchical Analysis of the Progress Interview Unit

PRE-REQUISITES
1. States human motivation theories.
2. Demonstrates interpersonal communication skills.
3. States reasons for employee evaluation and describes employee evaluation techniques.
4. Describes expectancy and contingency motivation theories.
5. Demonstrates characteristics of effective feedback.
6. Identifies effects of the interviewer's attitude about performance appraisal.
7. Identifies problem-solving skills.
8. Discriminates priorities for a progress interview.
9. Discriminates specific job requirements from an individual employee's personal characteristics.
10. Originates pre-planning for an interview.
11. Discriminates employee strengths.
12. Discriminates the counterparts of weaknesses.
13. Chooses interview location and environment.
14. States reasons for advance appointments.
15. States reasons for employee self-assessment.
16. Generates solutions to problems through the employee.
17. Demonstrates controlling the direction and content of an interview.
18. Generates job-related goals with the employee.
19. States reasons for and uses of timelines.
20. Identifies consequences of appropriate performance.
21. Identifies reasons for documentation of the interview process and outcomes.
22. States uses of the interview records.
23. Generates an evaluation of the interview process and results.
24. States reasons for a planning guide for interviewing.
25. Generates a planning guide for an interview.
26. Originates a simulated progress interview and evaluates it.

TABLE 3
Gagne's Domains of Learning
[The table, printed sideways in the original, is not legible in this copy; as noted in the text, it orders progressively more complex types of learner behaviors, with examples drawn from the progress interview unit.]

[The pages between Table 3 and the results reported below, including Table 4 (the match between test items and enabling objectives), are not legible in this copy. Only the Table 4 footnotes survive: *.75 of the subjects; **one or more of the test items for this objective were answered incorrectly.]

The Sample

Forty student dietitians currently enrolled in the CSP were involved in the study. Requirements for eligibility to apply for entrance into the program are described in Appendix A. From eligible applications, 20 are chosen each year by random computer selection. Students participating in the study completed a personal information questionnaire, which was tallied and is shown in Table 5. Senior students were a half year older than the junior students on the average, had compiled a higher grade point average (both overall at MSU and transfer credit), and had been in attendance at MSU an average of three terms longer than the junior students. Senior students had completed more coursework in related areas such as communications, management, administration, and education. Although work experience in a foodservice position was comparable between the juniors and seniors, senior students had accumulated more experience in supervisory positions in foodservices.

Pre-Post Written Examination Evaluation

The written examination was composed of 37 objective test items. This examination was administered to the 40 students prior to distribution of the unit on interviewing and after completion of the written unit. The students were asked to mark answers on a computer answer sheet to assist in the tabulation of evaluation data. The examination was subjected to an item analysis to obtain data relevant to improvement of the examination after testing and evaluation of the materials.

TABLE 5
Description of Subjects Tallied from Personal Information Questionnaire (Juniors n=20, Seniors n=20)
[The table reports, for each group, age range and mean, academic status, grade point average at MSU and for transfer credit, number of terms at MSU, academic training (number of students who had completed at least one course in communications, psychology, management, administration, labor relations, and education), and work experience as a foodservice worker and as a foodservice supervisor. The tabulated counts are only partially legible in this copy; the comparisons are summarized in the text above.]

An item difficulty assessment was completed so that a pre- to post-test comparison of item discrimination could be completed (see Table 6). Larger numbers on the item discrimination assessment indicate a more discriminating item. Items 5, 7, 11, 13, 18, 20, 22, 24, 25, 27, 28, 34, and 37 may measure prerequisite or previously learned information, since more than .80 of the junior students were able to answer these items on the pre-test or the item had an obviously correct response. These items test information concerning use of the interview information, job specifications, problem-solving model components, discrimination of important incidents, appropriate feedback technique, and reinforcement and expectancy motivation theories.

Items 10, 13, 15, 18, 19, 24, 25, 26, 30, and 31 were answered correctly by more students on the pre-test than on the post-test. Seven of the 10 items listed are attributed to responses by senior students. Three of the items identified below as most difficult to answer correctly (10, 19, and 31) are also in this list. These items test information concerning the purposes of giving performance feedback, problem-solving model components, evaluation of the interview, feedback techniques, determining consequences for meeting objectives, and the type of advance information relayed to the employee. The results seem to indicate that this information in the instructional materials was confusing to students.
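The screening rules applied in the preceding paragraphs can be expressed as a short routine: a pre-test index above .80 suggests prerequisite knowledge or a clue in the item, and a negative discrimination index flags an item answered correctly by more students before instruction than after. The small set of item indices below is illustrative rather than the complete Table 6 data.

    # Screening written-test items with the decision rules discussed above.
    # The item indices shown are illustrative values only.

    items = {
        # item number: (pre-test difficulty, post-test difficulty)
        5:  (0.85, 0.90),
        10: (0.20, 0.40),
        15: (0.60, 0.75),
        18: (0.95, 0.85),
    }

    for item, (pre, post) in sorted(items.items()):
        discrimination = post - pre
        flags = []
        if pre > 0.80:
            flags.append("possible prerequisite knowledge or item clue")
        if discrimination < 0:
            flags.append("more correct on pre-test than post-test")
        if flags:
            print(f"Item {item}: " + "; ".join(flags))

As the text emphasizes, items flagged in this way are candidates for inspection and revision, not automatic rejection.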
TABLE 6
Written Objective Examination Item Analysis Statistics, Juniors (n=20) and Seniors (n=20)

            Pre-test Index        Post-test Index       Pre-Post Index of
            of Difficulty         of Difficulty         Discrimination*
Test Item   Juniors   Seniors     Juniors   Seniors     Juniors   Seniors
 1          .50       .60         .95       .95         .45       .35
 2          .20       .25         .45       .80         .25       .55
 3          .05       .30         .70       1.00        .65       .70
 4          .45       .30         .90       .95         .45       .65
 5          .85       .85         .90       .75         .05       .10
 6          .75       .60         .90       1.00        .15       .40
 7          .90       .95         .95       1.00        .05       .05
 8          .70       .75         .70       .90         .00       .15
 9          .20       .35         .50       .90         .30       .55
10          .20       .40         .40       .20         .20       -.20
11          .85       .80         .95       1.00        .10       .20
12          .10       .25         .65       .85         .55       .60
13          1.00      .95         .95       1.00        -.05      .05
14          .65       .90         .90       1.00        .30       .10
15          .60       .90         .75       .80         .15       -.10
16          .65       .45         .80       .95         .15       .50
17          .70       .85         1.00      1.00        .30       .15
18          .95       1.00        .85       1.00        -.10      .00
19          .60       .65         .65       .55         .05       -.10
20          .85       .90         .95       .85         .10       .05
21          .60       .55         .70       .85         .10       .30
22          1.00      1.00        1.00      1.00        .00       .00
23          .75       .95         1.00      1.00        .25       .05
24          1.00      .95         .80       1.00        -.20      .05
25          .95       .95         1.00      .90         .05       -.05
26          .75       1.00        .95       .95         .20       -.05
27          1.00      .90         .95       1.00        .05       .10
28          .85       .80         .90       .90         .05       .10
29          .55       .65         .60       .95         .05       .30
30          .75       1.00        .95       .95         .20       -.05
31          .70       .90         .70       .65         .00       -.25
32          .45       .40         .95       .90         .50       .50
33          .30       .60         .50       .75         .20       .15
34          .80       .80         1.00      .95         .20       .15
35          .75       .95         .90       .95         .15       .00
36          .70       .90         .95       .90         .25       .00
37          .85       .80         1.00      .95         .15       .15

*This is the post-test difficulty index minus the pre-test difficulty index and measures gain from pre-test to post-test (Douglass and Olson, undated).

Items 10, 19, 31, and 33 on the post-test were identified as most frequently answered incorrectly (fewer than .75 of the students answering correctly). These items test information concerning the purposes of giving performance feedback, evaluating the interview, the type of advance information relayed to the employee, and reasons for timelines.

Examination of the multiple-choice response pattern shown in Table 7 details more clearly the specific incorrect responses chosen most frequently on certain items. The junior students most frequently chose incorrect responses on items 2, 9, 10, 21, and 33. Senior students demonstrated incorrect responses on items 10 and 19. These test items cover information concerning planning to be completed before the interview, reasons for appraisal unreliability, purposes of giving performance feedback, evaluation of the interview, discrimination of important incidents, and reasons for timelines. A comparison of Table 7 with the written examination allows a detailed analysis of the student errors to use in revision of the materials.

The item analysis results facilitate a determination of which enabling objectives were met satisfactorily. Table 4 displays the match between test items and objectives. There are four objectives which both juniors and seniors did not satisfactorily meet: A2, B3, B5, and F6. These objectives include use of motivation theories, choosing reasons for employee interviews, choosing reasons for making advance appointments, and choosing criteria for evaluating the interview.
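Dividing the written test into sub-tests by enabling objective, as described above, can be sketched as follows. The item-to-objective mapping and the response records are hypothetical, and the working interpretation of the criterion (an objective counts as met when at least 75 percent of the group answers every item written for it correctly) is an assumption adopted only for illustration.

    # Grouping test items into sub-tests by enabling objective and checking
    # which objectives a group has met.  The mapping, response records, and
    # criterion interpretation below are all assumptions for illustration.

    objective_items = {
        "A2": [2, 9],      # hypothetical mapping of objectives to item numbers
        "B3": [14, 21],
        "F6": [19, 33],
    }

    # Hypothetical response records: student -> set of items answered correctly.
    students = {
        "s1": {2, 9, 14, 19, 33},
        "s2": {2, 14, 21, 19},
        "s3": {2, 9, 14, 21, 33},
        "s4": {9, 14, 21, 19, 33},
    }

    CRITERION = 0.75

    for objective, item_list in objective_items.items():
        met = sum(all(item in correct for item in item_list)
                  for correct in students.values())
        proportion = met / len(students)
        status = "met" if proportion >= CRITERION else "not met"
        print(f"Objective {objective}: {proportion:.2f} of students at criterion ({status})")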
TABLE 7
Multiple Choice Question Pattern of Response by Percent of Subjects Choosing Each Response, Pre- and Post-test Written Examination, Juniors (n=20) and Seniors (n=20)
[The body of this table lists, for each response alternative of the multiple-choice items, the percent of juniors and seniors choosing that alternative on the pre-test and the post-test, with the correct response for each item starred; the entries are too garbled in this copy to reproduce reliably.]

Item Analysis of Criteria Checklist

A modified item analysis was completed of the criteria checklist to identify items which were ambiguous or difficult to rate, as well as items on which students did not perform well. The two instructors' ratings on each item are shown in Tables 8, 9, 10, and 11. Some items indicate notable trends. The desirable trend is for the students' scores to increase on the post-test evaluation, and it is important to note that in some cases they did not, e.g., items 5 and 10 in the case of the junior observers, unit with examples. Items 5 and 10 focus on employee strengths and the consequences of meeting goals. Items 1 and 10 for the junior directed observers, unit with practice, were not improved; these pertained to developing rapport and determining consequences of appropriate employee behavior. Item 2 did not improve for the junior observers on the unit with practice. This item entails encouraging participation on the part of the employee. Item 10 for the seniors of all groups on the unit with practice was not improved. Item 10 appears to most consistently pose a problem; students may not have been given enough information to be able to handle a discussion of consequences of performance.

[TABLES 8 through 11, which report the percentages of junior and senior subjects (unit with examples and unit with practice; directed observers, role-players, and observers) receiving each rating on the criteria checklist items, pre- and post-test, are printed sideways in the original and are not legible in this copy. Their shared footnote indicates that the mean of the two instructors' ratings was utilized to assign scores. Several subsequent pages of results, including the first rows of the analysis of variance summary for post-test interview scores, are also not legible; the legible portion of that ANOVA summary is reproduced below.]

Source of Variation                                   SS       df   MS      F      p
Role Player vs. Observer vs. Directed Observer (C)    11.19    2    5.60
A x B                                                 46.28    1    46.28   2.75   >.2
A x C                                                 12.37    2    6.18
B x C                                                 142.2    2    71.1    4.22   <.05
A x B x C                                             28.21    2    14.1
Within cells (error)                                  471.21   28   16.83
*There is no total SS when using weighted means for unequal cell sizes.

Predictability of Practical Performance from Performance on the Written Examination

H0: The performance of students on a progress interview written examination will not correlate positively with the students' performance on the progress interview transfer test.

It was felt that assignment to a "mastery" or "non-mastery" position on the unit might be different if a written examination was used than if a videotaped interview was used as the evaluation device. Scores, as a percentage of the total possible score, on the written examination for each of the subjects were compared with scores on the practical examination to determine if the level of performance on the written examination would be a predictor of performance on the practical examination (see Table 17). A Pearson Product Moment Correlation Coefficient was used as the test statistic. The formula is described as:

    r = Σ(z_X z_Y) / N,   where N = the number of paired observations

For the first trial with junior students, r = -0.038, indicating no correlation between the two sets of scores. The correlation for the second group, the seniors, was r = -0.353, indicating a slightly negative correlation between the two sets of scores, and H0 is maintained.

TABLE 17
Comparison of Subjects' Written Examination vs. Videotaped Interview, Percent of Total Possible Score

                   Written Exam               Videotape Exam             r
Juniors (n=20)     Mean=83.15, S.D.=8.58      Mean=79.15, S.D.=9.26      -0.038
Seniors (n=20)     Mean=89.2, S.D.=4.45       Mean=80.15, S.D.=6.31      -0.353

Comparison of Instructor, Student, and Random Sample Segment Evaluations

Three forms of evaluation were compared for differences in assessing the students' performance on the taped progress interview: 1) each student evaluated his own interview on the taped pre- and post-test interview using the given criteria checklist, 2) the primary instructor and one other qualified instructor evaluated the pre- and post-interviews using the same criteria, and 3) a second qualified instructor evaluated the post-interview using the same criteria checklist by random sample segments. The mean score given to the observations was used as the total mean score for the evaluation comparison.

Inter-Rater Reliability for Pre- and Post-Test Videotaped Interviews

Three evaluations were obtained on the practical, videotaped pre-test and post-test (see Tables 18 and 19).
The calculation of a reliability coefficient was used for estimating the reliability of the two independent evaluators (Ebel, 1972). The formula is:

    r = (nΣXY - ΣXΣY) / √[(nΣX² - (ΣX)²)(nΣY² - (ΣY)²)]

TABLE 18
Junior Students' Videotaped Interview Scores, Pre- and Post-test, by Two Instructors and Students' Self-Evaluation (n=20)

           Pre-Test                                              Post-Test
           Instructor 1   Instructor 2   Mean     Student        Instructor 1   Instructor 2   Mean     Student
Mean       29.95          33.4           31.68    40.1           45.2           43.0           44.1     45.9
S.D.       5.06           5.87           5.47     7.4            5.2            5.58           5.24     4.98

TABLE 19
Senior Students' Videotaped Interview Scores, Pre- and Post-test, by Two Instructors and Students' Self-Evaluation (n=20)

           Pre-Test                                              Post-Test
           Instructor 1   Instructor 2   Mean     Student        Instructor 1   Instructor 2   Mean     Student
Mean       25.75          24.75          25.25    30.25          46.4           43.2           44.8     48.1
S.D.       2.36           2.47           2.42     4.18           3.7            3.92           3.81     4.83

The results of the formula applied to the pre-test indicated that the reliability coefficient between the two instructors was .59 for the junior students and .34 for the two evaluators' ratings of the senior students. The score can range from zero to one, with scores near one indicating high reliability between raters.

An average of the two instructors' scores was then compared with the junior students' self-assessment scores. The results of the formula in this situation indicated that the junior student to instructor reliability was .006, indicating that the two evaluators had less than one percent reliability, while the senior student to instructor reliability was .34. Students in the GDCSP are required to evaluate all assignments utilizing an appropriate criteria checklist. This is a new procedure for the entering junior students, and they tend to check off all items on the lists whether or not they have been completed. This may cause the low reliability seen in the junior students' self-assessment of their pre-test interview.

A similar procedure was completed with post-test scores on the practical videotaped examination. First the two instructors' evaluations of the interviews were compared for reliability. For the juniors' post-test, the reliability between the two instructors was .82, which is considered high and is improved over the pre-test reliability (.59). For the senior students' post-test, the reliability between the two instructors was .70, also an improvement over the pre-test reliability (.34).

Since there is no set level of reliability recommended in the literature, individual researchers have assessed reliabilities based on the complexity of the behavior to be evaluated and on the uses of the scores obtained from the evaluators. In the context of this study, it would be desirable to have a reliability of at least .70 to give an individual student a grade on the unit with confidence. In terms of programmatic evaluation or formative student evaluation, it may be possible to accept lower levels of reliability (i.e., from .50 to .70), since only an indication of general events is necessary.

H0: The students' self-assessment of performance on the progress interview practical examination will not differ significantly from the instructors' evaluations of the students' performance on the progress interview practical examination.

The mean score received by the student from the two instructors was compared with the student's self-assessment on the post-test. The coefficient was .64 in the case of the junior students, a considerable increase from the pre-test value of .006. The post-test instructor to senior student correlation was .49, an increase from .34 on the pre-test.
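The raw-score form of the product-moment formula quoted above (Ebel, 1972) can be computed as follows; the two lists of checklist totals are hypothetical stand-ins for two evaluators' ratings of the same set of interviews.

    # Inter-rater reliability as the product-moment correlation between two
    # evaluators' checklist totals (raw-score formula).  Ratings are hypothetical.
    from math import sqrt

    def reliability_coefficient(x, y):
        """r = (nΣxy - ΣxΣy) / sqrt[(nΣx² - (Σx)²)(nΣy² - (Σy)²)]"""
        n = len(x)
        sum_x, sum_y = sum(x), sum(y)
        sum_xy = sum(a * b for a, b in zip(x, y))
        sum_x2 = sum(a * a for a in x)
        sum_y2 = sum(b * b for b in y)
        numerator = n * sum_xy - sum_x * sum_y
        denominator = sqrt((n * sum_x2 - sum_x ** 2) * (n * sum_y2 - sum_y ** 2))
        return numerator / denominator

    rater_1 = [40, 45, 38, 50, 42, 47]   # hypothetical checklist totals, evaluator 1
    rater_2 = [42, 44, 36, 49, 45, 46]   # hypothetical checklist totals, evaluator 2

    print(f"Inter-rater reliability: {reliability_coefficient(rater_1, rater_2):.2f}")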
The post-test instructor to senior student correlation was .49, an increase from .34 on the pre-test. Inter-rater reliability coefficients are summarized in Table 20.

Since the directed observers were assigned the task in the classroom procedure of using the checklist to evaluate the role play interviews, it was thought that they might be more reliable self-evaluators than the other students in the project. The scores of directed observers are recorded in Table 21. As seen in Table 20, the reliability coefficient between the two instructors' mean score and the junior students' self-assessment was .84, indicating a rather high degree of reliability. The reliability between the two instructors' mean score and the senior students was lower at .42.

TABLE 20
Inter-rater Reliability Coefficients Between Instructors and Students on Videotaped Pre- and Post-test Scores

                                        Instructor 1 and 2    Instructor Mean and Student
Pre-Test    Junior (n=20)                     .59                       .006
            Senior (n=20)                     .34                       .34
Post-Test   Junior                            .82                       .64
            Senior                            .70                       .49
            Junior Directed Observer                                    .84
            Senior Directed Observer                                    .42

TABLE 21
Comparison of Directed Observers' Self-Evaluation with Instructors' Mean Evaluation Score

              Juniors (n=20)                          Seniors (n=20)
         Instructor Mean   Student Self-         Instructor Mean   Student Self-
                           Evaluation                               Evaluation
Mean          43               42.8                   43.2              49
S.D.           6.63             5.74                  [senior S.D. entries unclear in scan: 4, 1.25, 2.82]

Random Sample Segment Evaluation

H0: The instructors' evaluations of the videotaped simulated progress interview will not differ significantly from the instructors' evaluations of the videotaped simulated progress interview by a random sample segment method of evaluation.

Two instructors viewed random samples of the post-test interview videotapes and evaluated them using the same criteria checklist described above. Table 22 shows summary statistics of scores. Since only parts of the checklist were completed during this evaluation method, the following formula was applied to determine the "average score" for the interview to use in making comparisons with the full-length evaluation:

    Score Assigned = (Total Points Received / Total Number of Items Scored) x 14

The formula requires that points assigned for the items evaluated be totalled, then the total is divided by the number of items scored to get an average score for each item. This average score is then multiplied by 14 (the number of items on the criteria checklist) to obtain an average score for the interview. This procedure allowed comparisons with scores obtained on the whole evaluations.

TABLE 22
Students' Videotaped Interview Random Sample Segment Scores, by Two Instructors

              Juniors (n=20)                               Seniors (n=20)
         Instructor 1   Instructor 2   Mean          Instructor 1   Instructor 2   Mean
Mean        47.5           42.9        45.2             44.1           41.2        42.7
S.D.         2.58           4.22       2.89              3.4            5.1        3.97

Using Ebel's reliability coefficient, inter-rater reliability during evaluation of junior students was measured as .42 for the two instructors who evaluated the interviews by segments; inter-rater reliability during evaluation of senior students was .65. Since the inter-rater reliability of .42 for junior students was comparatively low, and reliability of the evaluations might tend to improve with practice, the interviews were grouped into halves, the first 10 rated vs. the second 10 rated, to compare reliabilities. Ratings as divided into two halves are recorded in Table 23. The first half reliability was .58 for juniors and .64 for seniors, while the second half reliability was .28 for juniors and .48 for seniors.
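The segment-score proration described above is a single arithmetic step. The sketch below applies it to a hypothetical partially scored checklist (item scores are invented; in this example only 9 of the 14 items happened to be observed in the sampled segments).

    # Prorate a partially scored checklist to the full 14-item length:
    #   score assigned = (total points received / number of items scored) * 14
    # Hypothetical scores for the 9 items that were observed.
    observed_item_scores = [3, 2, 4, 3, 3, 2, 4, 3, 3]

    CHECKLIST_ITEMS = 14
    average_per_item = sum(observed_item_scores) / len(observed_item_scores)
    score_assigned = average_per_item * CHECKLIST_ITEMS

    print(round(score_assigned, 1))   # comparable with the whole-tape scores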
If one reviews the number of observations on each interview from Table 23, it can be seen that there are interviews in the second half with fewer observations than in the first half. Also, the highest number of observations was for an interview in the first half. Since the number of observations might affect the reliability of the evaluations, the reliability of the evaluations with eight or more observations was determined as .53 for junior students and .51 for senior students. Table 24 summarizes these data.

TABLE 23
Sequence of and Number of Observations for Subjects on Post-test Random Sample Segment Evaluation

    Juniors                     Subject        Seniors
    Mean No. Observations       Sequence       Mean No. Observations
          9                         2                9
          9                         4               10
          8                         8               10
         11.5                      10                5.5
         11                        14                9.5
          8                        16                8.5
          6.5                      17                6.5
          8.5                      18                8
          8.5                      19                8
          7.5                      20                5.5
          7                         1                6
         10                         2                8
          9                         5                6.5
          8.5                       6               11
          6.5                       7               11.5
          5                         9                8.5
         11                        11               10.5
          9.5                      12               11
          8.5                      13                9
          9.5                      15               10

TABLE 24
Inter-rater Reliability Coefficients Between Two Instructors Using Random Sample Segment Evaluation

                   Instructor 1 and 2    First Half    Second Half    Eight or More Observations
Junior (n=20)             .42               .58            .28                  .53
Senior (n=20)             .65               .64            .48                  .51

Finally, the mean of the two instructors' evaluations was compared to the mean rating given by the whole evaluations. Random sample and whole evaluation scores are shown in Table 25. Inter-rater reliabilities of .45 for juniors and .51 for seniors were calculated. H0 is rejected since reliability of the random segment method is unacceptably low.

TABLE 25
Comparison of Subjects' Whole Evaluation and Random Sample Evaluation Scores on Videotaped Interview

            Senior Whole    Senior Sample    Junior Whole    Junior Sample
            Evaluation      Evaluation       Evaluation      Evaluation
Mean           44.8            42.7*            44.1            45.15**
S.D.            3.81            3.97             5.24            2.89

*r = .51    **r = .45

Students' Attitude Survey

H0: The measured attitudes regarding the progress interview unit of students taught by one of the methods "Reading with Examples", "Reading with Practice", "Observer", "Directed Observer", or "Role Player" will not differ significantly from comparable students taught by any other of the methods.

Results from the student attitude survey are displayed in Table 26. A sample of the attitude survey appears in Appendix C.

[TABLE 26, the student attitude survey results, is illegible in the scan.]

The survey items were grouped into categories to facilitate analysis. Questions 1, 2, 8, 9, 10 and 11 were grouped together under the category "clarity" and included items such as "I was often unsure of what was supposed to be learned", "The unit was well organized", "Parts of the unit were unclear", and "I had to ask a lot of questions to clarify it". Questions 3, 7, and 12 were grouped together under the category "reasonableness", and included such items as: "There was too much information in the unit", "I think the unit was worth the amount of time spent on it", and "The objectives of this unit were clear". Questions 4, 5, and 13 were grouped together under the heading "perception of amount learned" and included such items as: "I learned a lot in comparison with a usual method such as a lecture", and "I learned a lot from the role play session". Questions 14, 15, and 16 were grouped together as "perception of feedback".
Items under this heading included: "I didn't get much instructor feedback on how I was doing during the unit", "I would have liked more instructor feedback during the role play session", and "The instructor's discussion was helpful in learning the material".

The survey data were first divided into the test groups described above: role player, observer, directed observer, with either unit with practice or unit with examples, and by junior and senior students. A two-way analysis of variance using weighted means was used for class (A) vs. type of materials (B). The junior students perceived learning more by using the materials that included examples, while the senior students perceived learning more by using the materials which required practice. Cell scores and means are reported in Table 27, and the analysis results are shown in Table 28.

TABLE 27
ANOVA Cell Means on Attitude Survey under Category "Perception of Amount Learned"

Juniors (n=20)          Role Player    Observer    Directed Observer
Unit with Examples         3.66          5.33            3
Unit with Practice         4.5           6               6

Seniors (n=20)          Role Player    Observer    Directed Observer
Unit with Examples         7.75          7.33            7.66
Unit with Practice         5.66          4.33            6.66

TABLE 28
Two-Way ANOVA Table for Attitude Survey under the Category "Perception of Amount Learned"

Source                                       SS       df      MS        F       p-level
Junior vs. Senior (A)                      35.125      1    35.125    13.097
Unit with Practice vs.
  Unit with Examples (B)                     .876      1      .876
A x B                                      27.915      2    13.958     6.979    <.001
Within Cells                               91.2       34     2.682

In order to assess clarity, it was necessary to combine students under two categories: unit with practice and unit with examples. A two-sided t-test was performed which gave t=1.486, p<.2. Examination of the senior level student data gave t=2.189 with p<.05. In the case of the senior students, students completing the unit with practice felt that the materials were clearer than those completing the unit with examples.

Data were placed into the original test groups to assess the students' attitudes about "reasonableness". There was no meaningful relationship. There was also no meaningful relationship in the category "perception of feedback". H0 is rejected since meaningful and significant differences between student attitudes were located.

Student comments received on questions 18, 19 and 20 are included in Appendix D. Generally, comments were very favorable about the unit. Under "What suggestions would you make for improvements?" students suggested that they know their roles before the session to allow better preparation and that the instructor summarize the information before the role play session. Students also suggested that they be permitted to review the pre-test videotape before practice sessions and that a more difficult employee be used in the scenarios to allow more opportunity for problem solving.

Comments under the question "What was the best feature of the unit?" included favorable statements about role playing interviews followed by critiques. The students indicated that it was a practical unit and that the evaluation was helpful. Other comments related to the clarity and organization of the unit itself. The last question was "What was the worst feature of the unit?" Comments indicated that: 1) role playing was difficult, 2) everyone should have had a chance to play a role, 3) the materials were too time consuming, 4) a lecture would have been better, and 5) the workbook was difficult to use without a table.

Costs

H0: The costs of utilizing self-instructional materials will not be different from costs of traditional teaching modes.
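Before the figures are presented, it may help to note what this comparison amounts to: a one-time development cost amortized over every student who eventually uses the unit, plus a recurring per-offering cost, set against a purely recurring cost for the traditional method. The sketch below uses the dollar figures reported in the cost tables that follow ($1,903.85 development, $315 per 40-student offering for the self-instructional unit, $360 per 40-student offering for the traditional method); it is an illustration only, not part of the original analysis.

    # Per-student cost as a function of how many 40-student offerings use the unit.
    # Dollar figures are those reported in the cost section below.
    DEVELOPMENT = 1903.85        # one-time development of the unit
    SELF_PER_OFFERING = 315.00   # self-instructional: instructor time per offering
    TRAD_PER_OFFERING = 360.00   # traditional method: instructor time per offering
    CLASS_SIZE = 40

    for offerings in (1, 5, 10, 20):
        students = offerings * CLASS_SIZE
        self_cost = (DEVELOPMENT + SELF_PER_OFFERING * offerings) / students
        trad_cost = (TRAD_PER_OFFERING * offerings) / students
        print(f"{offerings:2d} offerings: "
              f"self-instructional ${self_cost:6.2f}/student, "
              f"traditional ${trad_cost:5.2f}/student")

On these figures the recurring cost is $7.88 per student for the self-instructional unit against $9.00 for the traditional method, so the one-time development outlay is gradually recovered as additional classes, or other dietetic programs, use the materials.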
Costs are presented in three sections: 1) initial development costs for the Progress Interview Unit, 2) anticipated implementation costs, and 3) costs for the traditional teaching method. The critical issue is a comparison of the materials and implementation costs with the costs of the traditional teaching method. In an attempt to realistically estimate all developmental costs associated with the progress interview unit, data are presented in Table 29. (Student time is indicated but not costed.) Development costs occur at only one time and would be spread over the extended use of the unit.

TABLE 29
Developmental Costs

Development of the Unit
  Writing and Revisions - 80 hours at $7.50/hour               $ 600.00
  Tabulating Reviewers' Responses - 9 hours                       67.50
  Communicating with and Preparation of Materials
    for Reviewers - 12 hours                                      90.00
  Typing - 15 hours at $5.00/hour                                 75.00
  Duplication of Units at $.03 per page                          105.00
  Formative Evaluation Sessions - 5 hours                         37.50
  Student evaluators - 5 hours                                   (not costed)

Testing of the Unit
  Pre-tests, classroom practice, post-tests
    Instructor, 9 hours                                           67.50
    Each student - 4.5 hours x 40 students = 180 hours           (not costed)
  Actress - 45 hours at $4.39/hour                               197.55
  Videotapes - 4 60-minute tapes at $30.00 each                  120.00

Evaluation of the Unit
  Pre- and Post-test Item Analysis - 5 hours                      37.50
  Videotaped pre- and post-test evaluation (80 x 15 minutes)     150.00
  Videotape random sample evaluation (80 x 8 minutes)             80.00
  Statistical consultant and computer service                    120.00

Revision of the Unit
  Writing - 5 hours                                               37.50
  Typing - 4 hours                                                20.00

Development of Instructor's Manual
  Writing - 6 hours                                               65.00
  Typing - 2 hours                                                10.00

Total:                                                          1903.85

Anticipated Implementation Costs

Implementation costs of the unit into the present curriculum would be small. Costs of duplication of the materials for 40 students is approximately $105.00, but it is possible to re-use the workbooks and save subsequent costs of duplication. Videotapes and videotaping equipment are available for use within the department at no cost. Student time would remain approximately the same at 4.5 hours each, including time for the pre-tests, workbook completion, practice session, and post-test. Instructor time for classroom sessions and post-test evaluation is two hours for the classroom session and approximately one hour in evaluation for each student. The evaluation time could be reduced if alternative procedures such as random sample segment evaluation were developed, refined and utilized.

  Instructor time, classroom      15.00 (two hours)
  Instructor time, evaluation    300.00 (optional)
  Total                          315.00

The Costs for the Traditional Teaching Method

Currently, costs are instructors' time for preparation, classroom sessions in lecture, and practice of the interviewing skills. Approximately four hours have been required on the part of the instructor and all students for the unit on progress interviewing. In comparison with the new progress interview materials, instructor classroom time is reduced while one-on-one contact for student evaluation has been increased. The new materials also make it possible for students in the general dietetic program to work through a unit on progress interviewing whereas this was not possible previously.
Instructor time, preparation 30.00 (four hours) Instructor time, classroom 30.00 (four hours) Instructor time, evaluation 300.00 (Optional) m H0 is rejected since although costs are similar between the traditional and self-instructional methods for 40 stu- dents, cost per student would decrease in the case of self- instructional materials as they are utilized by larger num- bers of students. CHAPTER V SUMMARY AND CONCLUSIONS A summary of the statistical results is presented as a review with discussion and conclusions in the following paragraphs. Progress Interview Unit Preliminary Evaluation The review of the materials by experts was beneficial to the development in the early stages. Since both content experts and instructional deve10pment experts acted as re- viewers of the materials, it would be helpful to have devel- oped different types of review forms which more specifically assessed areas of review expertise rather than using only one form for all reviewers. Input from reviewers was use- ful in revising objectives, test items, and the criteria checklist. Pre-Post Written Examination Evaluation Item analysis was completed on the written examination to allow improvement of the examination after completion of testing and evaluation. Item analysis was useful in diag- nosing difficulties in the materials for purposes of revision. 114 115 There were four objectives which both senior and junior stu- dents did not satisfactorily meet: A2, B3, B5, and F6, in- cluding use of motivation theories, choosing reasons for employee interviews, choosing reasons for making advance appointments and choosing a criteria for evaluating the interview. Three junior students and five senior students reached the minimum level of performance (.75 or 28 points and .80 or 30 points for the junior and senior students respectively) on the written pre-test; 19 junior and 20 senior students reached the minimum performance level on the post-test. With t=10.67 and t=10.7l there was a significant level of improve- ment from pre-test to post-test for the junior and senior students respectively. Ho The performance on the progress interview writ- ten examination of students taught by "Reading with Practice" will not differ significantly from comparable students taught by the method "Reading with Examples". H1 The performance on the progress interview written examination of students taught by "Reading with Practice" will be significantly higher than com- parable students taught by the method of "Reading with Examples". Results from the two groups, unit with practice and unit with examples, were compared for differences using a t-test. With t=.1916 and t=.1099 respectively for junior and senior students, there was no difference between these two groups. Junior students' performance on the written examina- tion was compared with performance of senior students using 116 a t-test. With t=.725 there was no significant difference between the two groups and H0 is maintained. It may not be worthwhile in terms of performance on a written examination to require students to spend time con- structing and writing answers to embedded questions. It may be possible for students to learn as well from model answers to embedded questions. In classroom situations where stu- dents are responsible only for the written information, the least time consuming method may be most useful. However, senior students who wrote answers to questions tended to think the unit was more clear and perceived learning more than those who only read answers. 
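The group comparisons in this chapter rest on t statistics. As a concrete illustration, the sketch below computes a pooled-variance two-sample t, one standard form of the t-test, for two small groups of written-examination scores; the scores are invented, and the exact variant used in the study (for example, a paired t for the pre-test to post-test improvement) is not restated in the text.

    # Pooled-variance two-sample t statistic -- one common form of the
    # t-test for comparisons such as "unit with practice" vs. "unit with
    # examples".  The score lists are hypothetical.
    from math import sqrt

    practice = [31, 28, 33, 30, 29, 32, 34, 27]
    examples = [30, 29, 31, 28, 33, 30, 32, 29]

    def two_sample_t(a, b):
        na, nb = len(a), len(b)
        ma, mb = sum(a) / na, sum(b) / nb
        ssa = sum((v - ma) ** 2 for v in a)
        ssb = sum((v - mb) ** 2 for v in b)
        pooled_var = (ssa + ssb) / (na + nb - 2)
        return (ma - mb) / sqrt(pooled_var * (1 / na + 1 / nb))

    print(round(two_sample_t(practice, examples), 3))   # df = na + nb - 2

A t value near zero, as in the practice-versus-examples comparisons reported above, indicates that the two group means are essentially indistinguishable.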
Evaluation of the Videotaped Post-test HO The performance on the progress interview of students taught by "Reading with Practice" will not differ significantly from comparable students taught by the method of "Reading with Examples" and from the performance on the pro- gress interview of students practicing by one of the methods "Observer", "Directed Observer", or "Role Player". Three-way ANOVA was utilized to test for difference between the test groups on performance on the videotaped post-test. No student met the minimum criteria level on the pre-test interview; 15 junior and 19 senior students met the minimum criterion level (.75) on the post-test. ANOVA indi- cated that there was a significant interaction between the type of unit completed and the role played in the classroom 117 sessions. Junior students assigned as directed observers, who completed the unit with practice, did less well on the videotaped post-test than the other groups of students. (F=4.22, p:.05) Since the directed observers on the unit with practice scored significantly lower than other groups while at the same time evidencing very high self-assessment reliability (.84) it may be possible that these students were learning to self-evaluate in the practice session rather than learn- ing to interview, in comparison with other students. No other significant differences were found and this tends to support Holmes (1975) discussion reported in the review of literature in which he states that observers tend to prac- tice by covertly responding during the simulation. All stu- dents in this project read and prepared the scenarios for practice in the classroom and this may increase their abil- ity and tendency to covertly respond and compare their re- sponses with the responses of the role player. It is notable that while there were no significant differences in student performances on the videotaped post- test (other than that described) that there is a large dif- ference in terms of assignments to mastery states: 15 jun- iors reached mastery level, while 19 seniors reached mastery level. This has implications for research on determining cut-off levels since differences which statistically may be due to chance can affect mastery placement decisions. 118 Predictability of Practical Performance from Performance on the Written Examination H : The performance on the progress interview written examination by students will not correlate posi- tively with the students' performance on the pro- gress interview transfer test. A Pearson Product Moment Correlation Coefficient was used as the test statistic. For the first trial with junior students,:r=-0.038 indicating no correlation between the two sets of scores. The correlation for the group of senior students wasr=-0.353,indicating a slightly negative cor- relation between the two sets of scores. The written objective examination measures stored know- ledge while the criteria checklist measures skill perfor- mance. They are two different sets of abilities and in this case were not correlated. Objective and practical examina- tions each have advantages and limitations and may be comple— mentary for comprehensive evaluation of students. Practical examinations, e.g., simulation settings, may be useful in identifying qualified practitioners, but they are also more difficult to construct, administer and evaluate. Student Self-Evaluation of Peiformance Ebel's reliability coefficient was used to calculate reliabilities as reported. 
Reliability between the two in- structors in the case of the junior students was .59 and was 119 .34 for the two evaluators' ratings of the senior students' pre-test performances. The score can range from one to zero with scores near one indicating high reliability between raters. An average of the two instructors' pre-test scores was compared with the junior students' self-assessment scores. The results of the formula indicated that the junior student to instructor reliability coefficient was .006 while the senior student to instructor reliability was .34 for the pre-test videotaped interview. A similar procedure was com- pleted with post-test scores on the practical videotaped examination. First the two instructors' evaluations of the interviews were compared for reliability and were .82 and .70 for the junior and senior students respectively, which are considered high reliabilities and considerably improved over the pre-test (.59 and .34 for the junior and senior stu- dents respectively.) These reliabilities are used to com~ pare with student self-assessment reliabilities. H0: The students' self-evaluation of performance on the progress interview post-test will not differ significantly from the instructors' evaluations of the students' performance on the progress interview post-test. An average of the two instructors' scores was then com- pared with the junior students' self-evaluation scores. The results of the formula indicated that the junior student to instructor reliability coefficient was .006, while the sen- ior student to instructor reliability was .34, for the pre- test videotaped interview. A similar procedure was completed 120 with the post-test scores on the practical videotaped examination. First the two instructors' evaluations of the interviews were compared for reliability and was .82 for the junior students which is considered a high reliability and considerably improved over the pre-test (.59). The reli- ability was .70 for the senior students' post-test evalua- tion, also an improvement over the pre-test reliability (.34). The mean score given the subject by the two instruc- tors was compared with the students' self-evaluation on the post-test and was .64 in the case of the junior students and .49 for the senior students, both increases from the pre- test student-instructor reliabilities of .006 and .34 respec- tively. Since the directed observers had more practice with using the checklist to evaluate interviews, it was thought that they might be more reliable evaluators. The reliability coefficient between the two instructors' mean score and the junior students self-evaluation was .84, indicating a very high degree of reliability. The reliability in this case for the senior students was lower at .42. There is some indication that student self-evaluation can be reliable and that students can learn to more reliably evaluate themselves with practice. The high reliability on the part of the junior directed observers and low perfor- mance on the interview post-test may indicate that students should learn to self-evaluate separately from learning ad- ditional content information. Students appeared to have 121 more confidence in their pre-test performance in comparison with instructors' evaluations; however, their self-assessment scores did improve on the post-test assessment as well as become more reliable. 
Random Sample Segment Evaluation H0: The instructors' evaluation of the taped pro- gress interview will not differ significantly from the instructors' evaluation of the taped interviews by a random sample segment method of evaluation. Two instructors viewed random sample segments of the post-test interviews and evaluated them using the same cri- teria checklist. Ebel's Correlation coefficient was used to calculate reliabilities. Reliability was .42 and .65 for the junior and senior students respectively. The mean of the two instructors' evaluations was com- pared with the mean rating given during the full length eval- uation and reliabilities of .45 and .51 for junior and senior students respectively were calculated. This level of reli- ability would probably not be satisfactory for student eval- uation. Since the reliabilities were relatively low, the inter- views were divided into two halves (first 10 rated and second 10 rated) to see if reliability improved with practice. The first half reliability was .58 for the juniors and .64 for the seniors, while the second half reliability was .28 and .48 for the juniors and seniors respectively. Since there 122 were a variable number of observations recorded for each interview and the number of observations could affect reli- ability of the evaluation, evaluations with only eight or more observations were compared for reliability with the full-length evaluations. This coefficient was .53 and .51 for the junior and senior students respectively and is pro- bably not acceptable for assigning students to mastery or non-mastery states. A problem identified in the random sample evaluation which may lead to low reliability was that it is not pos- sible to identify omitted items on the checklist. Even so, there is a trend to improve in reliability as the instruc- tors had more experience with the random sample method and with increasing numbers of observations on the checklist. Comments from the instructors using the random sample method for evaluation included observations that there was time to think about and record each segment as the tape was fast- forwarded to the next segment; during the whole tape eval- uation, observation and evaluation were completed concur- rently. Since the entire tape was not viewed during the random sample evaluation, several problems presented themselves with the evaluation checklist as it was developed. Items such as "Discusses all Objectives on Evaluator's Guide" were diffi- cult since we may have seen the interviewer discuss only one or two-~and it was not possible to determine if all had been 123 discussed. It is recommended that items be listed separately so that the particular ones which are viewed can be checked off separately. It was found that many items could be evaluated by "assumption". If we heard only the last sentence of the em- ployee summarizing the interview, it could be assumed that the employee had summarized the interview. By comments throughout the segments of the interview it could be deter- mined that at some earlier stage the problems had been dis- cussed. Student Attitude Survey H : The measured attitudes regarding the progress interview unit of students taught by one of the methods "Reading with Examples", "Reading with Practice", "Observer", "Directed Observer", or "Role Player" will not differ significantly from students taught by any other of the methods. 
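The recommendation above, that checklist items be listed separately so that the particular ones actually viewed can be checked off, amounts to keeping an explicit per-item record during a segment evaluation. A small sketch of that bookkeeping follows; the item names are abbreviated paraphrases of checklist items mentioned in this study, the scores are invented, and None marks an item never observed in the sampled segments.

    # Per-item record for a random sample segment evaluation.
    # None means the item was never observed in the sampled segments.
    # Item names are abbreviated and scores are invented for illustration.
    ratings = {
        "states purpose of interview": 3,
        "encourages participation": None,
        "discusses objectives on evaluator's guide": 2,
        "discusses consequences": None,
        "problem solving with employee": 3,
        "employee summarizes interview": 4,
    }

    observed = {item: score for item, score in ratings.items() if score is not None}
    unobserved = [item for item, score in ratings.items() if score is None]

    CHECKLIST_ITEMS = 14
    score_assigned = sum(observed.values()) / len(observed) * CHECKLIST_ITEMS

    print("never observed:", unobserved)
    print("prorated score:", round(score_assigned, 1))

Keeping the unobserved items as an explicit list distinguishes "not seen in the sampled segments" from "omitted by the interviewer", which is the ambiguity identified above.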
H1: The measured attitudes regarding the progress interview unit of students taught by any one of the methods "Reading with Examples", "Reading with Practice", "Observer", or "Directed Obser- ver" will be less favorable than the measured attitudes of comparable students taught by the "Role Player" method. Survey items were grouped into categories to facilitate analysis by clustering items under the headings "Clarity", "Reasonableness", "Perception of Amount Learned", and "Per- ception of Feedback". Items 18, 19 and 20 requested written responses from students and were presented. The survey data were then divided into test groups for comparisons. A two- way ANOVA using weighted means indicated a significant 124 difference, F=6.979, in that junior students perceived learn- ing more by using the materials that included examples, while the senior students perceived learning more by using the ma- terials which required practice under the category "Percep- tion of Amount Learned". Under the heading "Clarity", a t-test indicated a significant difference with t=2.l89. Senior students completing the unit with practice felt that the materials were more clear than senior students completing the unit with examples. Based on results, H1 is rejected. It may be that sen- ior students felt more challenged by the application of know- ledge to simulated situations than did the junior students. Perhaps senior students recognize value in using information in application questions. Costs H0: The costs of utilizing self-instructional materials will not be different from costs of traditional teaching modes. H1: The costs of utilizing self-instructional materials will be less than costs of tra- ditional teaching modes. Initial development costs for the progress interview unit, excluding student time, is approximately $1,903.85 for 40 students. Implementation costs include only the instruc- tor's time for approximately two hours of class and one hour of evaluation time for each student (which could be optional) for a total of $315.00. Student time requirements remain at 125 four and a half hours for the completion of the unit. In comparison with the traditional method, instructor classroom time is reduced. Costs for the traditional mode includes four preparation hours, four classroom presentation hours, and one hour of evaluation time for each student (which could be optional) for a total of $360.00. Therefore, uti- lization of the materials could decrease the cost of teaching this unit somewhat. The new materials also make it possible for students in the general dietetics program to complete a unit on progress interviewing whereas this was not possible previously. There are many inconsistencies associated with cost assessment, particularly in relationship to benefits accrued. One question relates to the fact that a unit on interviewing has traditionally been included in the course HNF 480 so the reported costs of hours spent do not represent an absolute increase in effort. It is difficult to place a dollar value on the number of hours spent by the instructor in relation- ship to other objectives which may have been accomplished in place of the development (or "shadow costs"). It is also difficult to place a value on the number of hours spent by students in relation to their gain in knowledge. A11 stu— dents did show an increase in skills of interviewing. 
How- ever, students required to write answers to the embedded questions spent approximately three-quarters of an hour to one hour longer on the unit and yet did not show a signifi- cant higher level of performance on the practical examination. 126 Since performance level was essentially the same, it may be possible to say that the unit with examples is more cost effective than the unit requiring some answers to be written. Results on the attitude surveys do show some interesting dif- ferences between the groups, however. Another important consideration in relationship to de- velopment costs is the number of students who will be able to use the materials over the long run. Michigan State Univer- sity specifies limited enrollment in the GDCSP to a total of 40 students. However, since the content area of progress interviewing will remain fairly stable over the next few years, several classes of students will be able to use the materials. As well, all or part of the materials may be used by dietetic students in the traditional dietetiC‘ program. Since a minimum of one course in foodservice management is currently mandatory, all dietetic students could utilize the materials for several years. The materials could also be disseminated for use in other dietetic programs. Monies could be recouped to cover development costs and allow for further development of additional materials. Since instructors in any of these situations could assign the unit on a self-instructional basis, large amounts of time could be saved for more personal student contact, one-on-one teaching, etc. 127 Generalizability Since the sample involved in this study was not a ran- dom sample of the whole population of coordinated study plan dietetic students, it is not possible to freely generalize the results of this study to the larger group. However, there are certain features of the study and of the require- ments for dietetic programs which allow limited generaliza- tions. The 40 students participating in this study were randomly selected from a group of 80 applicants to the GDCSP at Michigan State University. The characteristics of the students at MSU are similar in some respects to dietetic students nationally since ADA has set the minimum criteria for undergraduate program competencies and also for GDCSP competencies. Students across the country receive similar kinds of coursework and learn similar kinds of skills. Se— lection procedures for entry into GDCSP's are not identical but are similar in many respects, tending to support the concept that MSU students are similar to other program stu- dents in some ways. Therefore, it can be said that the materials as devel- oped, tested and evaluated in this project, could be useful to enhance learning in the area of employee progress inter- viewing in other dietetic programs in the United States. Since the results of the practice session role differentia- tion tend to support previous research, it may also be pos- sible to say that similar effects would occur in other dietetic programs. 128 Summary Learning materials on employee progress interviewing did significantly improve student knowledge and performance of interviews as measured by a written examination and simu- lated interview with a criteria checklist. There was no difference in performance on the written examination between students who wrote answers to embedded questions and students who read model answers to the em- bedded questions. 
It may not be worthwhile to require stu- dents to spend time constructing answers; however, it is notable that senior students who wrote answers tended to think the unit was more clear and perceived learning more than those who only read the answers. Junior directed observers who completed the unit with practice did not perform as well as the other students on the post-test simulated interview. It may be possible that these students were learning to self-evaluate in the prac- tice session rather than learning the content of the inter— view unit since this group also showed a very high reli- ability in self-evaluating in comparison with the other groups of students. It may not be necessary to allow all students to actually participate in a role play session in order to learn a skill. Students can learn by observing other students role play parts and may tend to internally respond to the simulation events. Covert responses may be enhanced by student pre-preparation. 129 There was no positive correlation between student per- formance on the written examination and student performance on the simulated interview. Since the theoretical informa- tion tested in the examination is not the same as the be- havior required in an interview situation, a positive cor- relation was not anticipated. Students' self-evaluation reliability in comparison with the instructors' evaluation tended to improve with practice and with increased knowledge in the subject matter. Random sample segment evaluation reliability is relatively low, particularly for use in assigning grades to individual students. It may be useful for programmatic or formative student evaluation. Senior students felt they had learned more from the materials which required written responses and also felt that the materials were more clear. Perhaps senior students recognize the value of applying the information learned in the classroom setting. Costs for 40 students for the traditional teaching method and the self-instructional method were comparable. However, since the self-instructional materials can be used with greater numbers of students at little additional cost, they become more cost effective with more use. Although the results from this project cannot be freely generalized to the entire population, it is possible to pro- ject that the materials developed, tested and evaluated in 130 this research will be useful in enhancing learning of pro- gress interviewing in other dietetic programs and that the results of the differentiation of role in the practice ses- sion would be seen in other programs. 
CHAPTER VI RECOMMENDATIONS Recommendations follow from the conclusions of the project which can be briefly summarized as: 1) learning materials on employee progress interviewing did signifi- cantly improve student knowledge and performance of inter- views as measured by a written examination and simulated interview with a criteria checklist, 2) students observing practice role play interviews can learn as much as students who actually conduct role play interviews since they have a tendency to covertly respond, and covert responses may be enhanced by student preparation for practice interviews; 3) scores on the written examination were not predictive of performance on the interview, 4) evaluators tended to improve in reliability with practice from pre- to post-test, 5) com- parisons of instructors' mean scores with student self- assessment scores indicate that students' reliability in- creases to an acceptable level with exposure to the evalua- tion procedures and practice, 6) random sample evaluation reliability was low in the case of the first trial, and mod- erate in the second trial; and 7) attitudes were different in two cases as senior students perceived learning more by 131 132 using the materials which required practice and that the materials requiring practice were more clear. Recommendations are included under two general headings of: "Implications for Future Research" and "Suggestions for Revisions in the Materials as Tested". Implications for Future Research Future related research is suggested in two stages. First, further testing of simulation use in the classroom; second, testing of the employee interview model. The area of student self-evaluation is relatively un- tapped, although there is a requirement for allied health professionals to be able to assess their professional per- formance as well as the performance of their peers. If students can be trained to do self-evaluation, this skill may be applied and practiced with experience in the real world setting to improve their delivery of skills. Future research could develop methodology described here in other varied instructional settings to assess the reliability of self-evaluation skills. Student peer evaluation is another related area which could be researched in similar fashions. Videotaped examples of the practice sessions in class and of selected pre- and post-test interviews were saved. Using the videotaped interviews with the self-instructional unit in place of classroom practice would be a beneficial and interesting comparison. The videotapes would be useful 133 to supplement the formal classroom situations since the stu- dents may learn as much from them as from the actual practice in the classroom. Maatsch (1974) describes some parameters of the "exem- plar" to be the role player in the group instruction, and many of his suggestions can be followed with prOper selec- tion of videotaped models. The model should not be the slowest student since others may become bored; on the other hand, the best student in the class may be a poor exemplar because his pace is too fast to allow others time to think through their response before the exemplar has responded. The bright student may not make enough mistakes to afford Opportunity for feedback and discussion during the critique session. Students observing may tend to learn as much as the role players since they tend to covertly respond as if they were the participating student. 
This may be facili- tated if the model is someone with whom students can iden- tify (approximately their own age, similar sex, etc.) Re- search could continue in their area. In relationship to the evaluation component, it would be interesting to use only the audio segment of the video- tape and compare instructors' evaluations of those with in- structors' evaluations of the videotapes. Since audiotape is relatively less expensive to purchase and requires less expensive equipment to re-play, there may be advantages in using audiotape for interview practice and assessment. Whether or not the video has impact remains to be researched, 134 for example, in terms of the effects of non-verbal communi- cation. Having only audio may eliminate some of the instruc- tor bias in terms of student recognition. Two interviews were transcribed into typewritten form to allow use of example interviews in situations where other delivery modes are not feasible or desirable. Another use- ful comparison may be having the interview models available to students in script form to be read rather than heard and/ or seen. A script of a sample interview would allow the students as much time as desired on certain segments of the interview to study, re-read, and evaluate. The area of interviewing in foodservice has been cov- ered extensively in the literature, but no self-instructional materials were located. An implication for future research in this area would be to use the module with foodservice supervisors and/or managers to determine if it has impact in the real world setting as a methodology for conducting an employee interview program. It would be possible in a large foodservice setting to determine the viability of the model as described in the unit for: 1) increasing employer inter- est in interviewing, 2) positive employee morale, and 3) de- creasing employee turnover. The model might also reduce personnel costs. 135 Suggestions for Revisions Suggestions for revisions include improvements in the progress interview materials, in the criteria checklist, as well as the addition of an instructors' manual to accompany the materials. Progress Interview Materials Based on testing and evaluation data, several revisions could substantially improve the materials. Since results were generally favorable, minor revisions could be made without extensive diagnosis. The first 22 pages of the unit contain theory in sup- port of the actual interview model, which is then presented in the following 28 pages of the unit. It was suggested by a reviewer and in student comments, that the theory should be placed in the back of the interview model so that the students could see the application first, then read the theory section for additional clarification and information if necessary. Senior student scores on the written exam- ination tend to indicate that many knew the theory prior to this instruction. The interview planning guide appeared in the final three pages of the module. One reviewer suggested that the planning guide should appear earlier in the module to give the student a framework for the pre-planning which is dis- cussed in the text Of the unit. 136 The item analysis of the written examination and the students' performance on the videotaped interviews indicate several other necessary modifications in the unit. There should be clarification of the meaning of "consequences" and the use of "consequences" in discussion with the employ- ee. Most students omitted this on the practical interview. 
Some students also had difficulty with the "problem-solving" step of the interview. More problem-solving examples in an interview setting should be added, particularly in relation to identifying possible solutions. Students had some dif- ficulty discriminating between training needs and interview setting needs on the written examination and additional examples of these items wOuld be beneficial. Several students commented that a more difficult em- ployee should be created in a scenario for practice. The researcher felt that the important objective was for the student to learn the format of the interview rather than have to deal with difficult employees and this suggestion was not incorporated. More information could be included in the unit, however, about control of the interview situation and who determines what will be discussed during the inter- view. Criteria Checklist Question Two, "Encourages Participation" should be moved to the end of the checklist since it could only be judged at the conclusion of the interview. Question Thirteen 137 should be placed after fourteen since frequently the inter- viewer asked for the interview evaluation after signatures had been Obtained. In item 10, the description under 1 point and 2 points should be reversed. One evaluator suggested eliminating the "for the next period" segment of the "Em- ployee objectives are set for the next period" since no definite period was stated in the scenario materials. An additional item addressing control of the interview should be included. It was found that in some interviews, where all the points were addressed, there was considerable waste of time while the interviewer lost control of the interview to the interviewee and just let the conversation ramble. In the problem-sOlving item, it should be asked "who suggested what the problem is?" and "whose solutions are finally agreed upon". Instructor's Manual An instructor's manual to facilitate use of the module by instructors in other programs with dietetic students has been developed. The instructor's manual includes general descriptions of the materials and their uses, scenarios, examinations, and criteria checklists. A table of contents of the instructor's manual is included in Appendix B. APPEND I CE 8 APPENDIX A ADMISSION REQUIREMENTS FOR MSU GENERAL DIETETIC COORDINATED STUDY PLAN EVALUATION STRATEGIES FOR HNF 480-FOOD SERVICE SYSTEMS MANAGEMENT GENERAL OBJECTIVES FOR RESIDENCE HALL EXPERIENCE APPENDIX A ADMISSION REQUIREMENTS FOR THE MSU GENERAL DIETETICS COORDINATED STUDY PLAN The General Dietetics Coordinated Study Plan has a limited enrollment of twenty students per year. This limi- tation is imposed by the quantity and quality of facilities and clinical faculty available for the essential field ex- periences which are integral components of the curriculum. In order to be eligible for admission to the Coordinated Study Plan, students must meet the following criteria: 1. Have declared a major in dietetics at the time of application. Have not previously earned a Bachelor's degree in Foods, Nutrition, or Dietetics. Have successfully completed (assumes a grade 1.0, credit, pass or waiver) a minimum Of 24 credits at MSU prior to the application deadline for admission to the GDCSP. Have achieved a minimum overall GPA of 2.75 (in reference to a 4.0 scale) on all MSU credits earned (a) prior to the application deadline for admission to the GDCSP and (b) prior to the first term of en- rollment in the GDCSP. 
Have achieved a minimum overall undergraduate GPA of 2.75 on all credits earned irrespective of the institution attended. Have completed a minimum of 90 credits acceptable toward MSU graduation requirements prior to the first term of enrollment in the GDCSP. Have successfully completed the following minimum requirements for Groups I, II and III (with no course having been repeated for credit more than once) prior to the first term of enrollment in the GDCSP. 138 139 Group I General Education Term Credits American Thought and Language Humanities Social Science Introductory Psychology Sociology of AnthrOpology 42.9-4:.th Group II Supporting Science Courses Inorganic Chemistry 5-8 Organic Chemistry 3 Biochemistry Algebra Anatomy Physiology coo-1mm! Group III Beginning Professional Courses Elementary Food Preparation Basic Nutrition Food and the Consumer Laboratory for Food Management Family in Its Near Environment MNLNLN-b 8. Have submitted application materials by designated due date with all supporting documents attached. From the pool of applicants meeting all the stated eligibility requirements, 20 students will be selected for tentative admission and the remaining students will be listed as alternatives. Tentative appointees will be granted final appointment to the program only if all admission requirements are fulfilled prior to the first term of requested enroll- ment in the GDCSP. Selection will be made using a computer- ized random number procedure which provides all eligible applicants an equal opportunity for selection. This proce- dure of selection does not discriminate on the basis of sex, age, religion, ethnic origin, race, color, creed, and/or familial or marital status. APPENDIX A HNF 480 FOODSERVICE SYSTEMS MANAGEMENT Evaluation Strategies COMMUNICATOR 1.1 Applies principles of professional communication to com- municate with clients, employees, and colleagues. 1.1.1 In preparing written reports and assignments, and writing exams, use acceptable written communica- tion skills, meeting the stated performance cri— teria. 1.1.7 Using a completed layout design, present project to foodservice manager and instructor, meeting the stated performance criteria. COMMUNICATOR 1.2 Applies principles of interpersonal communication to com- municate with clients, employees, and colleagues. 1.2.4 In a workshop session, give and receive feedback, meeting the stated performance criteria. 1.2.5 Using a communication problem you have identified in a foodservice facility, describe and analyze the problem, meeting the stated performance cri- teria. (elective) 1.2.6 Using a foodservice facility to which you are as- signed, draw a sociogram of the interpersonal com- munication, meeting the stated performance cri- teria. (elective) 1.2.7 Using the foodservice facility to which you are assigned, design a communication network, meeting the stated performance criteria. (elective) 140 141 FACILITATOR 2.1 Applies principles of problem-solving to solve personal and professional problems. 2.1.3 Using professional problems you have identified in your assigned facility and a selection of readings, describe a solution and the process by which you arrived at that solution, according to the stated performance criteria. (elective) 2.1.4 Given a folder of related readings, write a list of tasks to be accomplished on the first day as a consultant to a nursing home, meeting the stated performance criteria. (elective) FACILITATOR 2.2 Applies principle of interviewing to interview clients and employees. 
2.2.5 Using a selection of readings on employee inter- viewing, describe and analyze an observed or hypo- thetical situation related to interviewing, meeting the stated performance criteria. (elective) 2.2.6 Given an assigned role, participate in a role-play on employee interviewing, meeting the stated per- formance criteria. (elective) 2.2.7 Given simulated employee interview situations, con- duct the interview, meeting the stated performance criteria. FACILITATOR 2.3 Applies principles of group process and learning to facili- tate group achievement. 2.3.2 Given a selection of readings on the change process, describe and analyze in writing a real or hypothe- tical situation related to change, meeting the stated performance criteria. (elective) 2.3.3 Given a selection of assigned roles and guidelines for the role-play, facilitate a role-play in class dealing with implementing change, meeting the stated performance criteria. (elective) 142 2.3.4 Using assigned projects, participate as a contri- buting group member, meeting the stated performance criteria. 2.3.5 In foodservice assignments, interact effectively with the foodservice personnel, meeting the stated performance criteria. FACILITATOR 2.6 Applies principles of evaluation to provide quality assur- ance in nutritional care. 2.6.3 In class, give an oral review of JCAH standards for dietetic services, meeting the stated perfor- mance criteria. (elective) 2.6.4 In class, report on OSHA guidelines and your assigned residence hall's methods of compliance, meeting the stated performance criteria. (elective) FACILITATOR 2.7 Utilizes knowledge of the computer as a tool and theory of information systems to facilitate dietetic services. 2.7.1 Using a recipe of your choice, code your recipe for inclusion in the Sentry System, meeting the stated performance criteria. 2.7.2 Using employee schedules, evaluate the computerized production sheets, meeting the stated performance criteria. 2.7.3 Using the facility to which you are assigned, out- line uses of Sentry computer systems in that fa— cility, meeting the stated performance criteria. (elective) FACILITATOR 2.9 Uses knowledge of merchandising, quantity production and nutritional needs to plan menus for various institutional settings. 2.9.6 Using cookbooks or any source of recipes, select a recipe which complements one of the 3 meals for which you are assigned responsibility, meeting the stated performance criteria. 2.9.7 2.9.8 2.9.9 2.9.10 143 Using selected references, standardize a given recipe for 100 servings, meeting the stated per- formance criteria. (elective) In the MSU test kitchen, extend and test a recipe to be used at a meal, meeting the stated perfor- mance criteria. Present an oral review of The Ready Foods System For Health Care Facilities By Gordon Friesen in class, meeting the stated performance criteria. (elective) Given a choice of foodservice Operation types and a folder of related readings, write a two week cycle menu for the facility, meeting the stated performance criteria. (elective) FACILITATOR 2.10 Applies knowledge of purchasing and inventory control to procure, receive, store and distribute food and non-food items in a foodservice system. 2.10.1 2.10.2 2.10.3 2.10.4 2.10.5 Using the percentage guide for forecasting, fore- case for a minimum of three meals in your assigned facility, meeting the stated performance criteria. 
In the foodservice facility, using the production sheet and menus, prepare production sheets for each area: cooks, salads, bakery, meeting the stated performance criteria. Using master order forms, receive and assist in the storing and issuing of food and non-food items, meeting the stated performance criteria. Using master order forms and physical inventory reports, order all food and non-food items for at least three meals, meeting the stated performance criteria. Using a specified reference, explain in writing the procedures for purchasing in a facility in the ab- sence of a computer system, meeting the stated per- formance criteria. (elective) 144 FACILITATOR 2.11 Uses knowledge of foods, environmental safety and equipment maintenance to assist in the development of safety and sani- tation programs. 2.11.3 In the foodservice facility, complete at least one temperature check study on selected food items, using form provided. 2.11.4 In an assigned foodservice facility, evaluate the facility using the Department of Public Health Sanitation checklist on at least two occasions, meeting the stated performance criteria. 2.11.5 Using a folder of assigned readings, write an out- line of a safety program to be implemented in a fa- cility, meeting the stated performance criteria. (elective) FACILITATOR 2.12 Utilizes knowledge of purchasing, space design, equipment and work simplification to design a foodservice subsystem. 2.12.1 Using your assigned facility and a specified ref- erence, analyze equipment requirements, meeting the stated performance criteria. (elective) 2.12.2 Given a menu, list the equipment necessary to pro- duce it, meeting the stated performance criteria. (elective) 2.12.3 As a two-member team, select a layout and design problem area in the facility and re-design the area, meeting the stated performance criteria. 2.12.4 Given a selection of readings on alternative food- service delivery systems, compare the residence halls system with one other delivery system, in writing, meeting the stated performance criteria. (elective) FACILITATOR 2.13 Applies principles of financial management to evaluate the financial performance of the facility. 2.13.1 Using the form provided, gather and evaluate data to use in controlling the foodservice operation, according to the stated performance criteria. 2.13.2 2.13.3 2.13.4 2.13.5 2.13.6 2.13.7 2.13.8 145 Using Sentry Consolidated Storeroom Sheets and standardized recipes, determine the total food cost for three meals, meeting the stated performance criteria. Using the daily personnel cost print out and other cost information, determine average cost per in- dividual client for three meals, meeting the stated performance criteria. Using schedules and daily personnel cost print out, determine the total hours and labor costs for three meals, meeting the stated performance criteria. In your assigned facility, develop and implement a practical plate waste reduction campaign, meeting the stated performance criteria. Outline accounting procedures used in your assigned facility, meeting the stated performance criteria. (elective) Given selected readings, list factors considered in deve10ping an institutional budget, meeting the stated performance criteria. (elective) Write recommendations for your assigned facility to conserve energy, meeting the stated performance criteria. (elective) FACILITATOR 2.14 Utilizes principles of personnel management and labor rela- tions to select, supervise and deve10p personnel. 
2.14.1 Report on an assigned text on personnel management in class, meeting the stated performance criteria. (elective)
2.14.2 Using the folder of assigned readings, describe and analyze in writing a motivational situation, meeting the stated performance criteria. (elective)
2.14.3 Using the folder of assigned readings, describe and analyze an employee evaluation situation, meeting the stated performance criteria. (elective)
2.14.4 Write personal goals and evaluation strategies for HNF 480, using the MBO model, meeting the stated performance criteria.
2.14.5 Facilitate the "employee qualities" game in the classroom, meeting the stated performance criteria. (elective)
2.14.6 Using the folder of assigned readings, describe and analyze personnel problems, meeting the stated performance criteria. (elective)
2.14.7 Facilitate a role-play in class concerning a personnel problem, meeting the stated performance criteria. (elective)
2.14.8 Using the folder of assigned readings, describe and analyze a labor relations problem, meeting the stated performance criteria. (elective)
2.14.9 Facilitate a role-play in class concerning a labor relations problem, meeting the stated performance criteria. (elective)
2.14.10 Facilitate a role negotiation role-play in class, meeting the stated performance criteria. (elective)
2.14.11 Given a folder of readings related to personnel management, write an analysis of the "Bob Knowlton" case study, meeting the stated performance criteria. (elective)
2.14.12 Develop a Scanlon-Model plan for increasing productivity in the foodservice facility, meeting the stated performance criteria. (elective)

EDUCATOR 3.1 Applies principles of teaching and learning to provide educational programs for clients, employees and colleagues.
3.1.2 Selecting a topic, plan, construct, test and evaluate a simulation to teach an aspect of foodservice management, meeting the stated performance criteria.
3.1.3 In a simulated planning group, plan overall training programs for a fiscal year in a defined foodservice facility, meeting the stated performance criteria.

MANAGER 4.2 Applies principles of management in foodservice systems to manage a foodservice system.
4.2.2 In your assigned foodservice facility, evaluate three meals, meeting the stated performance criteria.
4.2.3 Using a folder of assigned readings on merchandising and consumerism, write a management plan for addressing consumers' needs, meeting the stated performance criteria. (elective)
4.2.4 Given a site visit, complete a site evaluation form and participate in class discussion of the foodservice subsystems, meeting the stated performance criteria.
4.2.5 Having completed a site evaluation of all foodservice sub-systems, describe both orally and in writing the sub-systems and their functioning, meeting the stated performance criteria.

ADVOCATE 5.1 Applies principles of advocacy to serve as an advocate for improved nutritional care.
5.1.2 Using current publications, orally present information concerning local, state and national issues in nutritional care, meeting the stated performance criteria.
5.1.5 In class, report on the future trends in foodservice, meeting the stated performance criteria. (elective)

PROFESSIONAL 6.1 Utilizes knowledge of professional behavior to function as a professional dietitian.
6.1.3 Using observations of daily events in foodservice facilities, complete at least 20 anecdotal records, meeting the stated performance criteria.
6.1.4 Using a folder of readings on management styles, assess in writing your leadership style and describe the difference between management and leadership, meeting the stated performance criteria. (elective)
6.1.6 Demonstrate professional behaviors by consistently performing in a professional manner in the foodservice facilities, clinical settings and classroom, meeting the stated performance criteria.

PROFESSIONAL 6.2 Utilizes knowledge of the profession of dietetics to develop as a professional dietitian.
6.2.1 Given an outline, compile an information resource file, meeting the stated performance criteria.
6.2.2 Given a written comprehensive examination, meet 75% of the stated performance criteria.

APPENDIX A

HNF 480 FSM General Objectives for Residence Hall Experience
(In addition to specific assignment objectives)

1. Gain large quantity food production experience by:
   a. Preparing a variety of food items in the following categories:
      1. meats, eggs, cheeses
      2. vegetables
      3. pasta
      4. sauces and gravies
      5. soups
      6. vegetable salads
      7. fruit salads
      8. entree-type salads
      9. desserts (if any are prepared on-premise)
   b. Planning production schedules for residence hall menus.
2. Increase knowledge of foodservice equipment by:
   a. Using all types of equipment in the facility.
   b. Cleaning all types of equipment in the facility.
3. Increase knowledge of foodservice sanitation and safety by:
   a. Evaluating the facility using the sanitation checklist.
   b. Practicing safe and sanitary procedures.
4. Increase knowledge of computerized information systems by:
   a. Using Sentry forms in the facility.
   b. Preparing information required to produce Sentry forms.
5. Gain experience in managing foodservice personnel by:
   a. Working on the job with foodservice personnel in a variety of jobs.
   b. Working with foodservice supervisors and observing their activities.
   c. Analyzing routine and critical employee incidents in the facility.
   d. Participating in employee time scheduling.
   e. Observing employee interviews when possible.
   f. Managing employee or other meetings if possible.
6. Increase knowledge of the foodservice manager's role by:
   a. Analyzing the foodservice manager's interface with the foodservice facility.
   b. Applying management principles to situations occurring during the experience.
   c. Reading policy and procedure manuals, employee handbooks, etc.
   d. Becoming involved with setting standards and controlling to meet those standards.

APPENDIX B

PROGRESS INTERVIEW UNIT:
1. Objectives and Test Items for Expert Review
2. Selected Items from Progress Interview Unit

MICHIGAN STATE UNIVERSITY
DEPARTMENT OF FOOD SCIENCE AND HUMAN NUTRITION
EAST LANSING, MICHIGAN 48824
HUMAN ECOLOGY BUILDING

September 20, 1978

Thank you very much for agreeing to contribute your expertise in employee interviewing to assist with this research project. We hope to develop instructional materials in the area of employee interviewing that will be a help to many people in the profession.

Please review the progress interviewing module objectives first and rate them. The second task is to rate the test items which have been designed to measure the objectives. The checklist which will be used to evaluate the final objective--the student actually performing an evaluation interview--is also included. Please make comments on it as well.

I would appreciate your returning the materials to me as soon as you have completed them.

Sincerely,

Deon Gines, R.D., M.S.
Instructor

PROGRESS INTERVIEW MODULE OBJECTIVES

Please rate each objective using the scale provided. Space has been left between each objective for comments or suggestions; please feel free to make suggested changes on this sheet.

1. This objective is very important.
2. This objective is important.
3. This objective is important but needs revision as indicated.
4. This objective is not important and should be deleted.

1. The student will demonstrate knowledge of motivation theory by discriminating between contingency and expectancy theory, by choosing more than one theory to apply in dealing with employees, and by applying motivation theory in the interview setting according to the stated performance criteria.
2. The student will demonstrate effective feedback techniques by listing characteristics of good feedback and by using effective feedback techniques in the interview situation according to the stated performance criteria.
3. The student will demonstrate knowledge of employee evaluation theory by stating reasons for employee evaluation, by describing employee evaluation techniques including use of anecdotal records, and by applying this knowledge in an interview setting according to the stated performance criteria.
4. The student will list the effects of the interviewer's attitude about performance appraisal on the outcome of the appraisal.
5. The student will demonstrate knowledge of problem-solving skills by defining problem-solving steps and by applying problem-solving skills in an interview setting.
6. The student will demonstrate knowledge of job specifications by stating their purposes and utilizing the job specification information in an interview setting.
7. The student will list the major component parts of the progress interview.
8. Given a scenario and anecdotal records, the student will determine objectives for an employee progress interview.
9. Given a scenario and anecdotal records for an employee, the student will be able to discriminate between important and unimportant events to discuss with an employee.
10. Given a scenario and anecdotal records for an employee, the student will be able to discriminate between the items to discuss with the employee and the items which represent training needs within the department.
11. Given anecdotal records, a planning guide, and a scenario, the student will pre-plan an employee progress interview.
12. The student will state four criteria for an appropriate interview location.
13. The student will describe the advantage of making advance appointments for progress interviews.
14. The student will state reasons for employee self-assessment and will assist an employee to generate a self-assessment in the interview setting according to the stated performance criteria.
15. The student will state reasons for using time lines and will generate a time line given data to graph.
16. The student will list uses of interview records (documentation).
17. Given a scenario, the student will demonstrate discriminating employee strengths by listing them and by discussing them with the employee in an interview setting, meeting the stated performance criteria.
18. Given a scenario, the student will demonstrate discriminating the counterparts of employee weaknesses by listing them, and by discussing them with the employee in an interview setting, meeting the stated performance criteria.
19. The student will evaluate a progress interview meeting the stated performance criteria.
20. The student will conduct a progress interview in a simulated setting meeting the performance criteria as stated on the evaluation checklist.

TABLE B1
TALLY OF REVIEWERS' RATINGS OF OBJECTIVES

Objective   Mean Rating*      Objective   Mean Rating*
 1          2.67              11          1.17
 2          1.83              12          2.0
 3          1.67              13          2.67
 4          2.0               14          1.3
 5          1.67              15          3.0
 6          1.83              16          2.17
 7          1.50              17          1.83
 8          1.67              18          2.67
 9          1.83              19          1.67
10          2.3               20          1.5

*Scale Descriptors:
1. This objective is very important.
2. This objective is important.
3. This objective is important but needs revision as indicated.
4. This objective is not important and should be deleted.
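The mean ratings in Table B1 (and in the test-item tally of Table B2 below) are simple averages of the six expert reviewers' ratings. The short sketch that follows, written in Python, shows one way such a tally could be computed; the reviewer-by-reviewer scores and the 2.5 cutoff used to flag an objective for revision are illustrative assumptions, not values taken from this study.

    # Minimal sketch: tallying expert reviewers' ratings of module objectives.
    # The ratings below are hypothetical; the scale is the one used on the
    # review form (1 = very important ... 4 = not important, should be deleted).
    # The 2.5 cutoff for flagging an objective is an assumed convention.

    ratings = {
        # objective number: ratings from the six expert reviewers (hypothetical)
        1:  [3, 3, 2, 3, 2, 3],
        11: [1, 1, 1, 2, 1, 1],
        15: [3, 3, 3, 3, 3, 3],
    }

    def mean_rating(values):
        return sum(values) / len(values)

    for objective, scores in sorted(ratings.items()):
        mean = mean_rating(scores)
        flag = "review/revise" if mean >= 2.5 else "retain"
        print(f"Objective {objective:2d}: mean rating {mean:.2f} ({flag})")

Under the assumed 2.5 cutoff, an objective such as number 15 (mean 3.0 in Table B1) would be flagged for revision, while number 11 (mean 1.17) would be retained; the same tally applies to the reviewers' ratings of the test items in Table B2.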
PROGRESS INTERVIEW MODULE TEST ITEMS

Reviewers rated each test item, grouped by the objective it was written to measure, on the following scale:

1. This test item is appropriate to measure the objective.
2. This test item is appropriate to measure the objective, but needs revision as indicated.
3. This test item is not appropriate to measure the objective and should be deleted.

[The test items for Objectives 1 through 20 (pages 156-167 of the original) are not legible in the scanned copy; the reviewers' ratings of these items are tallied in Table B2.]
TABLE B2
TALLY OF REVIEWERS' RATINGS OF TEST ITEMS

Objective 1: a 1.83, b 1.5, c 1.83, d 1.8, e 1.5
Objective 2: a 1.17
Objective 3: a 1.2, b 1.17, c 1, d 1.67, e 1.5, f 1.67, g 1.67, h 1.3, i 1.67, j 1.0, k 1.8, l 1.3, m 1.5
Objective 4: a 1.5, b 1, c 1.3
Objective 5: a 1.8, b 1.17, c 1.17, d 1.3
Objective 6: a 1.17, b 1.17, c 1.5, d 1.5
Objective 7: a 1.3
Objective 8: a 1.13, b 1.13, c 1.13
Objective 9: a 1.5, b 1.83, c 1.83
Objective 10: a 1.0, b 1.3, c 1.13
Objective 11: a 1.0, b 1.3, c 1.6
Objective 12: a 2, b 1.13
Objective 13: a 1.5, b 1.3
Objective 14: a 2, b 1.83
Objective 15: a 1.5, b 1.13
Objective 16: a 1.67
Objective 17: a 1.4
Objective 18: a 1.5, b 1.3
Objective 19: a 1.6, b 2.16
Objective 20: a 1.0

*Rating Scale Descriptors:
1. This test item is appropriate to measure the objective.
2. This test item is appropriate to measure the objective, but needs revision as indicated.
3. This test item is not appropriate to measure the objective and should be deleted.

APPENDIX B
PROGRESS INTERVIEW MODULE

Target Audience: Junior or Senior Dietetics students; Junior or Senior Hotel/Restaurant students with an interest in institutional foodservices.

Prerequisites: Interpersonal communication skills training; introduction to psychological principles of motivation; introduction to employee evaluation objectives and types.

Enabling Objectives: On a written examination, the learner will:

A. Demonstrate knowledge of motivation theories by:
   1. Matching theories with examples of them.
   2. Indicating which technique the interviewer should use.
   3. Indicating the variance between perceptions of desired consequences.
B. Demonstrate knowledge of employee evaluation by:
   1. Choosing the appropriate criteria for evaluation.
   2. Choosing items which affect evaluation reliability.
   3. Choosing reasons for job performance evaluation.
   4. Choosing the purposes of job specifications.
   5. Choosing reasons for making advance appointments.
   6. Choosing a primary use of interview records.
   7. Indicating an effect of attitude on the outcomes of interviews.
C. Demonstrate knowledge of problem-solving skills by:
   1. Choosing a list of problem-solving components.
   2. Indicating the employee's role in problem-solving.
D. Demonstrate knowledge of feedback techniques by:
   1. Choosing statements which meet the criteria.
   2. Indicating the effect of making salary decisions in a progress interview.
E. Demonstrate knowledge of criteria for interview locations by:
   1. Choosing a list of criteria.
F. Demonstrate knowledge of interview components by:
   1. Choosing a list which includes the major component parts of a progress interview.
   2. Choosing a list of items to plan before the interview.
   3. Determining objectives for an employee progress interview.
   4. Discriminating between important and unimportant events to discuss with an employee.
   5. Discriminating between items to discuss with an employee and items which represent training needs within the department.
   6. Choosing criteria to use in evaluating the interview.
   7. Choosing a reason for completing a time line during the interview.
   8. Choosing reasons for employee self-assessment.

Terminal Objectives: Given a scenario, job specification, and anecdotal records, the learner will:

A. Conduct a progress interview meeting the performance criteria as stated on the evaluation checklist.
B. Evaluate their own interview using the evaluation checklist.

INTERVIEWING GUIDE

[Planning-guide form (page 171 of the original), with spaces for the interviewer, interviewee, date, interview objectives, strong and weak points, completion and review dates, and consequences if objectives are met; the form itself is not legible in the scanned copy.]

ROLE PLAY CASES - PROGRESS INTERVIEWING

The following are four cases to use with the module on progress interviewing. They were written to be challenging, but simple, and realistic. Four cases were developed so that one could be chosen as a pretest; one or two can be chosen as practice cases in the classroom for role play; and one can be chosen as the final practical examination. The placement on the pages was designed to allow students to make strategy and planning notes as they prepare for the interview.

There is introductory information for the manager, the employee, and an evaluator's guide which can be used as suggested criteria with the general interview checklist, but which can also be given to students to evaluate their own interviews after the interview. A job specification is included for each of the four jobs, and an organizational chart is provided to help the students visualize the organization.

To prepare for role-playing the scenarios, distribute the following information:

Role        Instructions   Job Specification   Organizational Chart   Evaluation Guide
Manager     Manager's      X                   X                      -
Employee    Employee's     X                   X                      -
Evaluator   Both           X                   X                      X

[The organizational chart and the role-play case materials (pages 173-183 of the original) are not legible in the scanned copy.]
STUDENT ATTITUDE SURVEY

[Pages 184-185 of the original: a twenty-item attitude survey about the progress interview unit, answered anonymously on a five-point scale from Strongly Agree to Strongly Disagree, with open-ended questions on suggested improvements and the best and worst features of the unit. The survey form is not legible in the scanned copy; students' open-ended responses are reported in Appendix D.]

APPENDIX C
PROGRESS INTERVIEW CHECKLIST

Items A through F are intended as reminders for the student interviewer. Raters begin evaluation with number one on the reverse side.

BEFORE THE INTERVIEW:
A. Review job analysis data for the job of the employee to be interviewed.
B. Review the employee's performance records; determine specific situations to be discussed.
C. Write objectives for the interview and complete the planning guide.
D. Make an appointment with the employee in a non-threatening manner and arrange the appropriate environment.

DURING THE INTERVIEW:
E. Use listening responses:
   Silence when appropriate
   Non-verbal encouragement
   Verbal encouragement
   Open-ended questions
   Clarification
   Empathy
   Check to see if things are understood by the employee
   Avoid communication pitfalls:
   Leading questions
   Verbal crutches (and-uh, you know, etc.)
   Non-verbal distractions

AFTER THE INTERVIEW:
F. Evaluate the interview.

[The rated portion of the checklist (page 187 of the original), on which raters score each interview behavior from 1 to 5, is not legible in the scanned copy.]
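Each behavior on the rated portion of the checklist is scored from 1 to 5. The sketch below, in Python, illustrates how two raters' scores for a single interview might be summarized and compared; the item labels, the scores, and the rule counting scores within one point of each other as agreement are assumptions made for illustration, not procedures specified in this study.

    # Minimal sketch: summarizing two raters' scores on the progress interview
    # checklist. Item labels and scores are hypothetical; items are rated
    # 1 (low) to 5 (high). Treating scores within one point of each other as
    # agreement is an assumed rule, not one stated in the study.

    items = ["develops rapport", "asks for self-assessment", "states objectives",
             "discusses strengths", "discusses weaknesses", "problem solving",
             "summarizes and documents"]

    rater_a = [4, 3, 5, 4, 3, 2, 4]
    rater_b = [4, 4, 5, 3, 3, 2, 5]

    mean_a = sum(rater_a) / len(rater_a)
    mean_b = sum(rater_b) / len(rater_b)
    agreements = sum(1 for a, b in zip(rater_a, rater_b) if abs(a - b) <= 1)

    print(f"Rater A mean: {mean_a:.2f}  Rater B mean: {mean_b:.2f}")
    print(f"Items in agreement (within one point): {agreements} of {len(items)}")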
APPENDIX C
PROGRESS INTERVIEW UNIT

Name _______________     Date _______________

1. Choose the item which includes the major component parts of the employee progress interview:
   a. Giving the employee an increase in pay (if deserved); letting the employee know what his weaknesses are; filling out an evaluation form for documentation purposes.
   b. Developing objectives with the employee; letting the employee know what he has done right in relation to the job; determining consequences of meeting/not meeting objectives; documentation of the interview; giving the employee a promotion if deserved.
   c. Developing objectives with the employee; telling the employee his strong and weak points in relation to the job; telling the employee what he should have done in relation to his weak points; determining consequences of meeting/not meeting objectives; documentation of the interview.

2. Which of the following includes all items which should be planned before the interview?
   a. Interviewer's objectives, employee's objectives, major points to be discussed, review dates.
   b. Employee's objectives, consequences if objectives were met, and review date.
   c. Interviewer's objectives, major points to be discussed.

3. T F Information concerning salary increases and promotions should be shared during the evaluation interview so that the employee knows his efforts are being rewarded.

4. T F Employee appraisals should cover personal characteristics in addition to job performance since both are crucial to doing a good job.

5. Interview documentation (records) will be used primarily for:
   a. Government (NLRB, EEOC) investigations of personnel procedures.
   b. Making promotion, transfer, salary, termination decisions.
   c. Writing letters of recommendation for employees.

6. The best way to evaluate the employee's performance is in relation to:
   a. Other employees' performances.
   b. The employee's job specification.
   c. His/her potential.
   d. Departmental policies and procedures.
7. T F Appraisals can be considered to be discriminatory if they are not based on job analysis and specifications.

8. T F Every manager must choose the one particular motivational style which works best for him to use with all his employees.

9. A major reason why employee appraisal is unreliable is:
   a. Every employee's needs are unique.
   b. The evaluator hasn't been trained to do evaluations.
   c. There often is no formal evaluation program.

10. The purpose of giving the employee feedback regarding job performance is to:
   a. Let the employee know that the manager is interested in him.
   b. Let the employee know where he stands.
   c. Improve work performance.
   d. Meet union demands.

11. T F Job specifications are usually used only by the manager who does employee hiring.

12. T F It is the manager's responsibility to initiate solutions for the problems encountered with individual employees.

13. The basic component parts of problem-solving are:
   a. Describing a variety of solutions, determining their acceptability to personnel, implementing the solution.
   b. Defining the problem, generating possible causes, generating solutions, selecting, implementing, and evaluating the solution.
   c. Defining the problem, generating solutions, implementing a combination of several of the best solutions.
   d. Defining several solutions, assessing relative costs, implementing the most cost-effective solution.

14. The interview environment should:
   a. Not be the manager's office since this may be threatening to the employee.
   b. Be quiet, private, comfortable, informal.
   c. Include a desk between the manager and the employee to create a feeling of formality.
   d. Never be in the employee's work areas since there may be interruptions.

15. T F It is better to focus on one problem which is important than to discuss several problems in the short time allotted.

16. T F The manager's office is always the best place for a progress interview since it is usually quiet and can be private.

17. T F It is a good idea not to make advance appointments for progress interviews since it may worry and upset the employee.

18. T F If the employee does not reach his objectives, the interview must be regarded as a failure.

19. What should the interviewer use as a measure of the success of the interview?
   a. His own objectives for the interview.
   b. The employee doesn't get upset.
   c. At least three objectives are determined for the employee.

20. Which one of the following items would you choose to discuss with an employee during a routine progress interview?
   a. Two late arrivals in the last six months.
   b. Not wearing a hairnet whenever the supervisor isn't around.
   c. An argument in the kitchen between the employee and a salad maker over a parking place.

21. Which of the following events would you choose to discuss with an employee during a routine progress interview?
   a. Occasionally misses items on trays during trayline operation.
   b. Makes minor changes in special diet recipes if the item on the recipe isn't available.
   c. Doesn't organize his work table set-up efficiently.

22. You have scheduled a progress interview with a one-year employee in the salad department and are reviewing his file. There are no recorded events of a significant nature, either good or bad. You have also reviewed the job specification. What would you suggest as an objective for the interview?
   a. Recommend the employee for a raise.
   b. Find out what the employee wants to do in his job to make it better or more challenging.
   c. Recommend the employee for a transfer.
   d. Tell the employee what he needs to do to make a better impression on his supervisor so that his ratings will improve.

23. You are preparing for a progress interview with an employee of sixty days. She has learned the job quickly, appears to have potential for doing a good job, and is well-liked by the employees. What should you choose as an objective for the interview?
   a. Compliment the employee on her excellent work, tell her you're happy to have her as an employee and that you would like to see her advance in the organization.
   b. Tell the employee specifically what is good about her performance within the department and set goals with the employee.
   c. Tell the employee you'd like to move her into a position of more responsibility and a larger salary.

24. The employee you are interviewing is often late. What would you tell this employee that s(he) should do?
   a. Stop coming in late.
   b. Call when she's going to be late.
   c. Don't come in at all if she can't be on time.
   d. Be on time, or call before the shift begins, as stated in the policy and procedure manual.

25. You have an employee who forgets to wash his hands after they have been soiled. What should you tell him to do?
   a. Wash his hands frequently because they get dirty.
   b. Wash his hands frequently because it is a health department sanitation requirement.
   c. Wash his hands because contaminated hands are a common cause of food poisoning.

26. In relation to determining consequences for meeting/not meeting objectives, you should:
   a. Suggest what you think is a good reward for the employee.
   b. Never use money since it isn't a good motivator.
   c. Have some suggestions in mind, but wait and ask the employee what consequences s(he) would like.
   d. Wait for the employee to suggest a reward, then try to negotiate downward.

27. T F The interviewer's attitude about conducting the appraisal interview probably won't affect on-the-job performance of the employee.

28. T F Since the personnel office may be responsible for initial screening of employees, and may not be familiar with foodservice management, the job specification should be general in nature.

29. T F If the employee doesn't follow appropriate sanitation procedures, the appraisal interview is an appropriate time to give a short refresher course in sanitation techniques.

30. T F If the employee has been trained to make coffee and consistently makes errors, the interview is an appropriate time to seek solutions to the problem.

31. T F The employee should be informed of the objectives of the interview in advance.

32. T F The manager should try to find people who fit into jobs as they are delineated rather than change the job to fit the individual employee.

33. The major reason(s) for completing a time line in an interview is:
   a. To display the contracted events and provide a reference.
   b. To organize the interview data for documentation purposes.
   c. To make the information public.

34. An example of reinforcement motivation theory is:
   a. If the manager gives the employee praise (the reward) for being on time, then the employee will be on time more often.
   b. If the employee thinks that a promotion (the reward) is desirable, and attainable, s(he) will take certain steps to reach that goal.
   c. If the employee doesn't have any goals within the organization, the manager will not be able to motivate him/her.

35. Employee self-assessment should be encouraged because:
   a. It relieves the manager of responsibility for doing the whole evaluation and encourages the employee to assume some of it.
   b. It will help the manager and employee to discover differences in their perceptions of what is important on the job.
   c. It will let the manager know the underlying reasons for problems in the employee's performance.
It relieves the manager of responsibility for doing the whole evaluation and encourages the employee to assume some of it. b. It will help the manager and employee to discover differences in their perceptions of what is important on the job. c. It will let the manager know the underlying reasons for problems in the employee's performance. 36. 193 Choose the item which is the best suggestion as a method to increase reliability of evaluations: 37. a. Schedule evaluations more frequently. b. Have the employee's immediate supervisor do the evaluation. c. Train the evaluators about how to do the evaluation interview. An example of expectancy motivation theory is: a. If the manager gives the employee praise (the reward) for being on time, then the employee will be on time more often. b. If the employee thinks that a promotion (the reward) is desirable, and attainable, s(he) will take certain steps to reach that goal. c. If the employee doesn't have any goals within the organization, the manager will not be able to motivate him/her. APPENDIX D STUDENT ATTITUDINAL COMMENTS APPENDIX D STUDENT ATTITUDINAL COMMENTS Junior Students' Comments on the Feedback Given by the Actress The following comments were taken directly from the students' post-test evaluation rating scales: I enjoyed the experience but it needs much improve- ment. It is a hard role for me to be in. I enjoyed the experience. Talking afterwards was really helpful. The feedback from the actress was very good. Overall, the whole assignment was worthwhile. I felt that this method of learning was excellent. I feel that I have come a long way since we first started. I understand the format better. I feel this was one of the most worthwhile projects in class this term and I really benefited from it. I thought the videotape was much better with the actress. It was less structured because she was not going by guidelines. It made it more challenging because you really didn't know what she was going to say. Felt the interview was a very important part of class this term. I feel that I learned a lot and was given a lot of good points. What hit me the most was even though we had gone through what I had on my agenda, 1 have to realize that they (the employee) also have things they will want to discuss. Very helpful and enlightening with gaining insight to interaction of people. 194 195 The problem-solving was a hard point. Overall, I got a lot out of this interview and received some key points to look at in terms of further interviews. I feel that video recall is a very good approach to learning interviewing. The discussion following the interview was helpful. Senior Students' Comments on the FéedEaEk Given by the‘Actress I recognize a need to listen to employee comments and not be so concerned with just accomplishing tepics in the assignment. Shelley was very helpful and I learned a lot. Terry was very helpful in her comments about the interview. She gave both good and bad points that need improving. She played a role that gave good experience to someone learning to interview. This was an excellent Opportunity for me to see what I can act like when given the opportunity to play the supervisory role. The entire thing was very beneficial to me and Shelley was able to show me some weaknesses in my communication skills that might affect other inter- views. Shelley was a good evaluator-~honest in showing areas in need of improvement. In general, this was very helpful and showed me that I should listen a little more in the future. 
Shelley gave me excellent feedback and made me aware of areas that I could work more to improve.

I felt pretty good about the interview--slightly nervous about being videotaped. Terry did a good job as interviewee. I enjoyed getting some real good suggestions from Terry afterwards. I also feel I learned so much from doing this unit and that I will use this information in the future.

Shelley gave me excellent feedback as to my interview and made me feel good about my interviewing technique. She gave me both positive and negative comments as well as some very interesting theories on manager-supervisor relationships.

I have learned how a supervisor needs to be very sensitive to his employees and everything happening in his life. A good interviewer needs to imagine himself in the employee's position and work from that point of view. These guidelines are very helpful and have given me a framework for interviewing.

Enjoyed the session. Shelley was very helpful and gave a lot of constructive advice. Good experience.

Very valuable for me. I got good feedback and will work on making the appropriate changes.

I think this exercise was helpful in preparing us to interview. I learned a lot of my own weak points that need work.

Student Comments from Progress Interview Unit Attitude Survey

Junior Students

Unit with Written Responses, Role-Player:

1. What suggestions would you make for improvements?
   More preparation on the part of role-players for practice.
   Instructor summarize material before role-play.
   There was a lot of material--possibly break the unit into two parts, then the third part could be practice.
   Students should be able to be role-player or observer beforehand, so that preparation is better.

2. What was the best feature of the unit?
   I did learn how to conduct an interview properly.
   I did learn what steps are necessary for an effective progress interview.
   Role-playing allows a very clear understanding of all the problems involved in a real interview.
   Role-playing--to actually see some interviews being done.
   Critique and gradual improvement during practice session.
   I learned a lot about employee interviewing that I will be able to apply.

3. What was the worst feature of the unit?
   Playing a role.
   Length of the module (reading material).
   Everyone should be able to play a role.
   As a role-player, I was being judged. I was really put on the spot.

Unit with Written Responses, Observer:

1. What suggestions would you make for improvement?
   Unclear about the importance of this unit.
   Review pre-test tapes afterwards before moving on.

2. What was the best feature of the unit?
   Asking questions throughout the unit.
   Practicing of interviews was helpful.
   Pre-test and interviews in the classroom--very interesting and helpful. I know they will come in handy.

3. What was the worst feature of the unit?
   All the reading on our own. Cut down slightly.

Unit with Written Responses, Directed Observer:

1. What suggestions would you make for improvements?
   I found this to be a very effective way to learn.
   Improve scenarios to include more problem-solving.
   Allow more time for reading materials.

2. What was the best feature of the unit?
   Active role-playing with the use of evaluation checklists.
   I feel that as a directed observer, I understood the purpose of the material better.
   To be able to take part--either role-playing or as directed observer.
   Actual practice on videotape. Also guidelines to interviewing.

3. What was the worst feature of the unit?
   I wish that we all could have experienced the role-playing.
Unit with Required Written Responses, Role-Player:

1. What suggestions would you make for improvements?
   More time to discuss the unit before practicing.
   Write a more hostile role for the employee so we can learn ways to handle them.
   Don't have other role-players present during other interviews--they pick up ideas from the first ones.

2. What was the best feature of the unit?
   Practical application. I always learn a lot more by doing than by simply listening to a lecture.
   Everything was outlined.
   Evaluation was the best part. Allowed me to see a lot of mistakes I would have skipped over.
   It is good to get feedback about your performance.

3. What was the worst feature of the unit?
   Having to write in answers.
   Took a long time to read.
   The feeling of being unprepared.

Unit with Required Written Responses, Observer:

1. What suggestions would you make for improvements?
   Include more real-life situations. It's hard to imagine what to do in different circumstances.
   A scenario where the employee was not agreeable.
   Explain more before starting role-plays.

2. What was the best feature of the unit?
   Role-playing helped bring the information together and showed me where I needed more help.
   Open discussions after role-plays. Helped clarify the concepts.
   Role-playing--also writing in answers.

3. What was the worst feature of the unit?
   Pre-test--it confused me because I can't remember what I did.
   Questions in the unit came too soon after the information.

Unit with Required Written Responses, Directed Observer:

1. What suggestions would you make for improvements?
   More discussion about material in the unit.

2. What was the best feature of the unit?
   Role-playing--good use of concepts.
   Having to rely on ourselves to provide answers, comments, and reasons for them.
   I didn't have to listen to a lecture. I think I retained a lot of the information.

3. What was the worst feature of the unit?
   A lot of written material. Too many pages.

Senior Students

Unit with Written Responses, Role-Players:

1. What suggestions would you make for improvements?
   Answers to questions in the unit aren't reinforced enough in the material.
   Correct typos.
   Add more examples.
   Add more questions, answers.
   Show a model of the interview.

2. What was the best feature of the unit?
   Examples were good.
   Well-organized, self-explanatory.
   Instructor feedback while role-playing.

3. What was the worst feature of the unit?
   Lack of interaction with the instructor while learning the material.
   Prefer lecture-discussion type of session.
   Too lengthy.

Unit with Written Responses, Observer:

1. What suggestions would you make for improvements?
   Instructor clarify material before role-playing.
   Model of interview before role-playing.
   Give feedback on the pre-test interview.

2. What was the best feature of the unit?
   Instructor feedback on role-plays.
   Module was concise, read easily, implemented immediate feedback for more positive learning.
   Steps in the interview and what areas to emphasize.

3. What was the worst feature of the unit?
   More specific and more examples (cases).
   Too much information.

Unit with Written Responses, Directed Observer:

1. What suggestions would you make for improvements?
   Demonstrate interview before role-play.
   Give examples of key phrases.
   Give specific examples of problem-solving.
   Lecture rather than the unit.
   Want feedback on the VTR pre-test.

2. What was the best feature of the unit?
   Seeing role-play and getting feedback.
   Objectives stated and easily read.
Good scenarios. What was the worst feature of the unit? Not seeing an example of an interview. Some misspelled words. Lecture rather than unit. TOO long. Unit with Required Written Responses, Role-Player: 1. What suggestions would you make for improvements? A lot of material was repetitive-~condense some parts. Have everyone role-play. Clarify difference between general and specific objectives for the interview and which to discuss with employee. What was the best feature of the unit? Allowing practical application during the role-play. Role-playing helpful--you really must organize your thoughts before conducting this type of interview. Descriptions of the components of the model. Flowchart helped me the most to pull all the steps together. What was the worst feature of the unit? TOO long. Having to do an interview in front of the class and camera, but I realize it's helpful. Interview planning sheets don't contain as much information as I would like. 203 Unit with Required Written Responses, Observer: 1. What suggestions would you make for improvements? I really enjoyed learning from the module and feel I learned a lot from it. Possibly including more peeple in role-play because by the second time through, the four may have been too familiar with fresh input, and may have been more bene- ficial for discussion. Receive feedback after post-test. Would liked to have been an interviewer in role-play to get feedback. 2. What was the best feature of the unit? Having the answers in the back for reference. Discussion of role-plays was good--honest and help- ful. The module was good and not too time consuming. Test (objective One) was excellent. An objective evaluation of the information in the self-study guide. 3. What was the worst feature of the unit? The pre-test. Self-study module was too long. It's hard to read and write on the module especially without a table. Unit with Required Written Responses, Directed Observer: 1. What suggestions would you make for improvements? The unit was well-organized and Objectives were clear. It gets tiring seeing a lot of interviews, yet every- one should have a chance to practice them through role- playing. More introductory information before starting the unit--clarify purpose of the pre-tests and give feed- back on pre-test. 204 2. What was the best.feature of the unit? Ability to do on own time. Good to have for future reference. Case studies were good to have. Good to have answers to questions to refer to, to see how you're doing. Booklet was well-organized. 3. What was the worst feature of the unit? Difficult to determine exact wording of answers to questions. Tests--some questions didn't have clear-cut answers. BIBL IOGRAPHY BIBLIOGRAPHY Adams, C.H. and Fitz, P.A. Simulation exercises for inter- view training in dietetics: A module on listening skills. Journal of the American Dietetic Association, 1979, 14:50-52. Barrows, H.S., and Tamblyn, R.M. Self-assessment units. Journal of Medical Education, 1976, 51:334-336. Block, J.H. Student learning and the setting of mastery performance standards. Educational Horizons, 1972, 50:183-190. Breese, M.S., Welch, A.C. and Schinysfhouser, F. Computer- simulated encounters. Journal of the American Dietetic Association, 1977, 103382-388. Bruner, J.S. The Relevance of Education. New York: Norton -Publ. Co.,71972. Bruner, J.S. The act of discovery. Harvard Education Review, 1961, 31:21-32. Butcher, H.J. A note on the scale product and related methods of scoring attitude scales. 