A DESCRIPTIVE ANALYSIS OF THE CLINICAL REASONING OF COLLEGE CHEMISTRY TEACHING ASSISTANTS IN A TUTORIAL SETTING

By

Philip S. Heller

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Administration and Higher Education

1979

ABSTRACT

A DESCRIPTIVE ANALYSIS OF THE CLINICAL REASONING OF COLLEGE CHEMISTRY TEACHING ASSISTANTS IN A TUTORIAL SETTING

By Philip S. Heller

To better understand competent tutorial performance, this study sought to assess the applicability of a model of medical inquiry in analyzing and explaining tutoring. The medical inquiry model portrays a diagnostic intellectual strategy consisting of generating diagnostic hypotheses of disease states (learner deficiencies) and acquiring and interpreting information (cues) derived from the interaction to test the accuracy of these hypotheses. A basic assumption underlying the adaptation of this model was that thoughts lead to action. Therefore, understanding competent tutoring required an analysis of tutor mental processing and behavior. This research was guided by the belief that the tutor can be characterized as a clinical information-processor.

Within a self-paced, tape-tutorial, freshman chemistry course, two experienced tutors were tape recorded and observed while interacting individually with twenty-four students. The interactions focused upon four types of practice or exam ideal gas problems (tasks) for which the students sought help. The knowledge required to successfully solve these problems was represented either as procedural flow charts or as lists of propositional rules. These knowledge representations were used to characterize (1) the tutor's hypotheses of what the students knew or didn't know during an interaction (the student's state of knowledge), and (2) the chemistry knowledge the tutor and students dealt with explicitly. During a post-tutorial stimulated recall session, which was tape recorded, the original interaction was replayed in order to stimulate the tutors' memory and help them recall previous interactive thoughts. The interaction and stimulated recall tapes were transcribed, and inferences, based upon the medical inquiry model, were made concerning the tutor's mental processing. Specific instances of the constructs of the medical model and indicators of method validity were identified and counted in a second pass through the transcripts. Each interaction and the appropriate set of inferences were summarized in the form of a referenced flow chart and a cue x hypothesis matrix. Finally, these intermediate summaries were combined for each type of task and for each tutor into eight models of tutor behavior and diagnostic mental processing.

The findings from this study indicated that the diagnostic model of medical inquiry does have some applicability in that the tutors studied did attempt to determine the student's state of knowledge. They did this by generating and testing hypotheses about the student's pre-tutorial deficiency with the chemistry knowledge and the student's conception of the subject matter as the interaction evolved. These hypotheses were subject matter specific; they were defined in terms of the specific knowledge to be learned. The results of this
study also show that information (cues) obtained mostly from student behaviors is attended to and interpreted by the tutors to evaluate generated hypotheses. In terms of the number of generated hypotheses and acquired cues, greater variations were found across the instructional problems of focus during the interaction for any one tutor than between the tutors for any one problem. Thus, components of the hypothesis-testing strategy tended to be problem specific. However, the tutors did differ significantly in the proportion of cues that were tutor elicited rather than volunteered.

The tutor's ability to diagnose student deficiencies varied as a function of the specific subject matter that was the focus of the interaction. Rendering a correct diagnosis did not always eliminate the student's deficiency. Student opinion data showed that these tutors were well liked and appreciated, even by those students who continued to exhibit knowledge deficiencies.

The data suggest that valid representations of the tutors' models of course chemistry knowledge were developed and that the reports by the tutors during stimulated recall are probably a combination of accurate recall and post-hoc reconstructive explanations. Recommendations in this study have been made concerning future studies of tutoring using an information-processing perspective and introspective methodologies.

ACKNOWLEDGMENTS

The final substance and form of this dissertation are due to the assistance of several people. The encouragement and support provided by Professor Max Raines are much more than is manifested between these covers, and his efforts in making this learning experience worthwhile are deeply appreciated. Professor Lawrence Alexander is specially thanked for warmly sharing so much of his own experience as a learner, teacher, researcher, and consultant and for providing an opportunity to put theory into practice. The constructive criticism and preparation assistance by Dr. Howard Hickey are very much appreciated. A particular debt of gratitude is owed to Dr. Edward Smith, whose many suggestions have added significantly to the basic content and clarity of this work. His keen insights and perceptive questions always stimulated new ideas. For this and much more, he is thanked. Special appreciation is expressed to Dr. Lee Shulman for many delightful, unique, and thought-provoking perspectives. Dr. William Sweetland, who initially served as committee chairperson and who died before seeing this work completed, is thanked for his kind wisdom and leadership. The assistance of Dr. Robert Hammer is gratefully appreciated, for without his help this project could not have been completed. The tutors who served as subjects are also thanked for their extra effort and interest in educational research.

The many long hours of typing by Cheryl Hoerauf and Helen Weber with assistance from Marilyn Hebb are graciously acknowledged. Also, Phyllis Erickson is thanked for producing the fine graphic illustrations.

The available love and devotion of my immediate and surrogate family laid the foundation for this endeavor. Ms. Leslie Fadem deserves special mention. Though she had to relinquish much of our shared time together, her loving support and open and honest companionship were an important source of strength.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
DEFINITION OF TERMS

Chapter

I. THE PROBLEM
    Need
        Introduction
        Problem Statement
        Contributing Factors
        Summary
    Purpose
    Theoretical Orientation
        Definition of the Tutorial Setting
        Theoretical Conception of the Tutor
    Objectives
    Overview of the Study
        General Methodology
        Rationale
        Synopsis of Specific Procedures
    Limitations
        Theoretical Restrictions
        Procedural Restrictions
    Implications
    Overview

II. SELECTED LITERATURE REVIEW

    Introduction
    Tutoring: An Overview
        Conceptions of Tutoring
        Approaches of Research
        Theories of Tutoring
    An Intuitive Analysis of Remedial Tutoring
    Theoretical Approach to Clinical Reasoning
        A Comparison of Two Research Traditions
        Advantages of an Information-Processing Approach for the Present Study
        A Model of Medical Diagnosis and Its Applicability to Tutorial Reasoning
    A Diagnostic Model of Tutoring
        A Summary of Five Theoretical Models
        A Tutorial Diagnostic Model
        Summary
    Methodological Review
        Representation of the Problem Space
        Process Tracing
    Chapter Summary

III. METHODOLOGY

    Phase I
        Description of the Course
        Topic Selection
        Subject Matter Representation
        Task Analysis
        Knowledge Representation
    Phase II
        Subjects
        Data Collection
        Protocol Analysis
        Operational Definitions and Evidence Enumeration
        Development of Tutor Models
    Chapter Summary

IV. SUMMARY OF RESULTS

    Introduction
    Summary Flow Charts
        Format and Content of Summary Flow Charts
        Narrative Interpretation of Interaction 1-15
    Diagnostic Constructs
        Review of Conceptual Definitions
        Operational Definitions
        Concept Refinement
        Construct Relationships
        Cue x Hypothesis Matrices Summary
    Tutorial Models
        Tutor Decision Rules
        Flow Chart Task-Specific Tutor Models
        Tutor's Conception of Tutoring
        Summary
    Tutor Effectiveness
        Effectiveness Criteria
        Effectiveness Results
    Validity of Knowledge Base Representations
        Comparison of Operations
        Comparison of Flow
        Heuristic vs. Algorithmic Problem Solving
    Method Validity
    Chapter Summary

V. CONCLUSIONS, IMPLICATIONS, AND RECOMMENDATIONS

    Overview
    Conclusions
        Question 1
        Question 2
        Question 3
        Question 4
        Validity of Methodology
        Summary of Significant Conclusions
    Implications and Recommendations
        Implications for Research
        Implications for Practice
        Recommendations for Developing Skilled Diagnostic Tutoring
        Recommended Future Studies

APPENDICES

A. TASK CLASSIFICATION
B. KNOWLEDGE BASE REPRESENTATIONS
C. TUTORIAL INTERACTION PROCEDURES
D. TUTOR-STUDENT INTERVIEWS
E. STIMULATED RECALL MATERIALS
F. PROTOCOL ANALYSIS FOR INTERACTION 1-15
G. TUTOR PERFORMANCE MODELS

LIST OF REFERENCES

LIST OF TABLES

Comparison of an Information Processing and a Mathematical Modeling Approach to Clinical Judgment
A Summary of Five Theoretical Models From Which a Diagnostic Tutorial Model Will Be Developed
Chemistry Examples of Three Levels of Content as Defined by Smith (1972)
Chemistry Examples of Three Levels of Tasks as Defined by Smith (1972)
Criteria Used to Classify Verbal and Mathematical Questions
A Listing of the Eight Selected Chemistry Tasks
Tutor Statements Reflecting Hypothesis Generation
Tutor Statements Reflecting Hypothesis Evaluation
Types of Tutor Generated Hypotheses
Sources and Types of Cues Attended to by Tutors
Reasons Cited by Tutors for Soliciting Information From the Student
Relationship Between Cue Acquisition and Cue Interpretation
Cue x Hypothesis Matrices Summarized by Tutor and Task Group
Cue x Hypothesis Matrices Summarized by Tutor Across All Task Groups
Tutor 1 Verbalized Decision Rules
Tutor 2 Verbalized Decision Rules
Abbreviation Key of the Tutorial Models
4.12 Comparison of Tutor Diagnostic and Remediation Effectiveness by Task Group
4.13 Comparison of Tutor Diagnostic and Remediation Effectiveness Across All Task Groups
4.14 Prevalent Ways Tutors Teach Heuristics for Problem Solving
4.15 Comparison of the Ways Tutors Teach Two Problem-Solving Heuristics
4.16 Accuracy of Tutor Recall of Overt Interaction Events
4.17 Tutor Ability to Distinguish Between Prior Interactive Thoughts and Thinking During Stimulated Recall
A1 Mathematical Tasks (Type I)
A2 Verbal Tasks (Type II)

LIST OF FIGURES

Two Levels of the Diagnostic and Treatment Components of Tutoring
An Information-Processing, Diagnostic Model of Tutoring
A Cue x Hypothesis Matrix Representing the Problem Space of the Tutor
A Representation Showing the Relationship Between the Intended Knowledge State of a Learner and What a Specific Learner Can Verbalize About and What Remains Unknown
A Page From the Protocol Analysis of Tutorial Interaction 1-15
A Summary Flow Chart From Tutorial Interaction 1-15
A Cue x Hypothesis Matrix From Tutorial Interaction 1-15
Tutorial Model III-1
Tutorial Model II-2
B1 Knowledge Base Representation I
B2 Knowledge Base Representation II
B3 Knowledge Base Representation III
B4 Knowledge Base Representation V, VI, and VII
F1 Observation Notes for Interaction 1-15
F2 Student Interview Notes for Interaction 1-15
F3 Protocol Analysis for Interaction 1-15
G1 Tutorial Performance Model I-1
G2 Tutorial Performance Model II-1
G3 Tutorial Performance Model IV-1
G4 Tutorial Performance Model I-2
G5 Tutorial Performance Model III-2
G6 Tutorial Performance Model IV-2

DEFINITION OF TERMS

Analytic-Synthetic Approach: The systematic analysis of human performance combined with the synthesis of models of performance.

Aptitude Hypotheses: Tutor estimations of the student's overall intellectual capability, usually based upon a previous encounter with the student.

Audio-Tutorial Instruction: Audio tapes, used as instructional units within a learning center, are integrated with laboratory resources to help students master unit behavioral objectives. Students may also attend one large group administrative and one small group discussion session per unit. Tutors are available to aid students in the learning center.

Clinician: An expert practitioner, who informally and artistically makes observations, reaches diagnostic decisions, and provides treatment.

Content Diagnosis: A tutor diagnosis of that part of the subject matter that is misunderstood or unknown by the tutee.

Cue: An item of information that can be interpreted as evidence which supports or refutes a given hypothesis about the tutee's state of knowledge.

Cue Acquisition: The process of searching, gathering, and attending to units of information that are either volunteered or elicited.

Cue Interpretation: The process of evaluating the fit of a cue to a generated hypothesis.

Cue x Hypothesis Matrix: A matrix of cues acquired and interpreted by the tutor and the possible hypotheses generated.

Diagnostic Hypotheses: Relatively early tutor estimations of the student's pre-tutorial deficiency. These estimations were essentially an answer to the question, why can't the student successfully complete the problem.

Hypothesis Evaluation: The process of accepting or rejecting a hypothesis as being logically consistent with the information derivable from the current set of cues.

Hypothesis Generation: The process of generating a tentative estimate about the current state of the student's knowledge or ability with respect to that knowledge the course was designed to develop.

Information-Processor: A problem-solver who collects, selects, processes, and stores environmental and internally organized (memorial) information. These processes are inferred symbolic mental activities.

Keller Plan: A system of college instruction which is individually paced, mastery oriented, and student tutored. It may contain a few motivating lectures, but most information is disseminated via printed study guides. It is also known as the Personalized System of Instruction (PSI).

Knowledge Base Representation: A formal representation of the tutor's knowledge of the subject matter the course was designed to develop and that a tutee must use to successfully complete a specific instructional task.
Mastery Learning: A teaching strategy in which students complete a learning unit at their own rate and are not allowed to progress to a new unit until they can demonstrate competency at a set minimum achievement level. Instructional variables (time, unit organization, presentation mode, practice, feedback, evaluation, etc.) are manipulated so that most of the students can attain mastery.

Modular Instruction: Instruction which is either partly or entirely based on self-contained, independent units of a planned series of learning activities designed to help the student accomplish specific objectives.

Multi-Cue Hypotheses: Hypotheses which are generated and evaluated on the basis of acquiring and interpreting two or more cues.

One-Cue Hypotheses: Hypotheses which are generated and evaluated on the basis of acquiring and interpreting a single cue.

Peer Teaching: One-to-one or one-to-many instruction in which the tutor and tutee switch roles.

Peer Tutoring: One-to-one instruction in which the tutor and tutee are relatively close in age, status, and expertise.

Procedural Knowledge Base: A knowledge base required to successfully complete an instructional task, represented as a flow chart of sequential operations.

Process Diagnosis: A tutor diagnosis of that learning strategy that is incorrectly applied or unknown by the tutee.

Process-Tracing: Tracing the mental processes of human beings as they perform a task by observing performance and collecting introspective accounts.

Proctoring: Use of student proctors to mark and review quizzes, tutor other students, and provide personal contact through social interaction in the context of a Keller Plan course.

Propositional Knowledge Base: The knowledge required to successfully complete an instructional task, represented as sets of propositions or rules.

Remedial Tutoring: Tutors possess subject matter expertise and differential status due to their assigned role. They function intuitively to help remediate student misconceptions within the context of a systematic, individualized course.

Remediation Hypotheses: Tutor estimations of what the student knows and doesn't know as the interaction evolved.

State of Knowledge: That knowledge which an individual knows with respect to some content domain.

Stimulated Recall: The use of audio or video recordings of the subjects' performance to help jog their memory of covert thoughts and feelings which occurred during the original recorded performance.

Task Environment: An "objective" description of the actual problem, including all the given information and the possible solutions.

Task Group: One or more instructional tasks, defined at a general level, which required the use of the same course knowledge to be successfully completed.

Unevaluated Hypotheses: Generated hypotheses for which no direct evidence exists to suggest that the hypotheses were either accepted or rejected by the time the interaction ends.

CHAPTER I

THE PROBLEM

Need

Introduction

Systematized and individualized forms of instruction (Mastery Learning, Keller Plan, Audio-Tutorial, and Modular Instruction) are becoming keen competitors to the traditional lecture-discussion mode of teaching within our colleges and universities (Fisher & MacWhinney, 1976; Goldschmid, 1976; McKeachie & Kulik, 1975). In particular, the acceptance rate of these novel methods among instructors of lower level, undergraduate courses in the more structured disciplines (psychology and the natural sciences) appears high.
Adoption of individualized instruction necessitates, to some extent, a change in the traditional roles of the college instructor and the student. The instructor's function shifts from that of information giver and orator to systems designer and manager. More active, self-paced, and self-responsible behaviors characterize the student. These role changes are part of an instructional plan which incorporates more individualistic interaction between the learner and the teacher. A distinctive characteristic of these newer teaching methods is the extensive use of tutors who usually examine the student's level of mastery, provide and prescribe remedial assistance, and represent the principal human link between the discipline and the learner. Thus, tutoring is becoming a significant focal point of student-teacher interaction in higher education as the trend toward adoption of these innovative instructional systems continues.

Problem Statement

In most systematized service courses, the instructor usually delegates some, if not all, of the tutorial responsibility to undergraduate and graduate teaching assistants who often lack much teaching experience, but who are subject matter specialists relative to the course content. The use of these tutors is based on two implicit assumptions: (1) that one-to-one teaching has a beneficial impact on academic achievement and course attitudes, and (2) that the critical variables for these successful gains are the tutor's content knowledge and the degree of individual attention provided. The research literature provides ample evidence that these beliefs are unwarranted, since the effectiveness of only some tutoring has been demonstrated (Allen, Feldman, & Devin-Sheehan, 1976; Ellson, 1976; Harrison, 1972). The significant variable for achieving cognitive results is "what the tutors do" (Ellson, 1976, p. 137); that is, the skills and strategies that are used in the tutorial setting. Training and supervision are the critical conditions for creating effective tutors.

The tutorial method is naively conceived of among college instructors as simply the transfer of knowledge between the tutor and the learner. The exact nature of this process remains largely undefined, and the skills and strategies of successful college tutors have rarely been the object of empirical study. It is this problem that the research presented here addresses.

Contributing Factors

Neglect by both the research and practical traditions of higher education is partially responsible for the void in tutor training and its underlying knowledge base. Upon examining the research in higher education, it is difficult to find many studies which systematically analyze the skilled performance of effective college tutors or of instructors in general (Trent & Cohen, 1973). The purpose of most studies of teaching in higher education is to demonstrate the superiority of particular instructional methods in which the effects of the instructor are muted by randomization and dismissed to the error term of statistical designs (Baumgart, 1976).

In reviewing the accumulated evidence, Berliner and Gage (1976) conclude that different teaching methods, when correctly practiced, are equally effective in producing student achievement. Therefore, it seems appropriate now to begin to examine the "effectiveness within teaching methods . . . and to focus more sharply on ways in which each method can be optimally used" (Berliner & Gage, 1976, pp. 18-19).
In a parallel view, there has been a call for more research which accounts for other important variables that may interact with the treatment (method of instruction) of interest (Sullivan, 1975). Certainly, the actions and characteristics of the instructor are powerful variables which should not be overlooked. Those inquiries which have sought to develop criteria and techniques for evaluating teacher effectiveness are the closest any research paradigm has come to investigating the college instructor per se. A much finer analysis is needed to identify, explain, and control the variety of behaviors which constitute teaching within the tutorial mode.

In the realm of practice, teaching has not been a significant concern in either faculty training or in college personnel procedures (selection, retention, and promotion), in spite of the fact that most Ph.D.'s never publish (Heiss, 1968) and available college employment exists mostly in smaller institutions that emphasize teaching. Comprehensive training programs that go beyond informal administrative meetings with a group of teaching assistants are still a rarity (Stockdale & Wochok, 1974; Trent & Cohen, 1973). However, recently there has been growing interest in the improvement of instruction, brought about by increasing demands for accountability, decreasing enrollments, and a declining economy. This concern is manifested particularly by the rise in faculty development programs (Gaff, 1975). Thus, the attitude within higher education has changed such that the discovery of a theoretical base of tutoring is no longer extraneous to research on teaching, nor is the development of tutorial preparation programs outside the boundary of college practice.

Summary

Tutoring is quickly becoming an important teaching method in many undergraduate courses, due to the rapid expansion of the individualized instructional systems approach. However, very little is known about the one-to-one instructional process, nor does there exist a precise theory of tutoring to explain what is done and to prescribe what ought to be done. Research and development are required to explore the tutorial process more fully, to identify effective tutor behaviors and strategies which help organize the instructor's actions, and to plan tutor training programs. The favorable climate within higher education suggests that the time is ripe for proceeding with these investigatory and design efforts to amend those past areas of neglect.

Purpose

The formulation of a clearer conception of tutoring can serve as a guide to developing training programs and must, therefore, logically precede the design stage. This deeper understanding can be developed by examining the tutorial process from the perspective of one or more of the four commonplaces of educational research: the learner, the teacher, the milieu, and the subject matter (Schwab, 1973). In the chapter on methodology, it is demonstrated how each is represented in the present study, but, for the most part, a heavy emphasis is placed on the teacher component. The reason for this dominance is plain; if the ultimate goal of developing tutorial competence among neophytes is to be realized, then an initial research priority should be the analysis of what constitutes competent performance (Glaser, 1976).

The basic purpose of this inquiry is to develop a model of competent tutorial performance as it is displayed by teaching assistants in an individualized undergraduate chemistry course.
This is accomplished by an interplay between rational synthesis of theory and analytical description of tutoring. Initially, a normative model of the tutor's transactions is proposed to be used as a framework for understanding actual behaviors. Descriptions of what "good" tutors do and the heuristic rules and strategies they use are collected and compared to the suggested model. Using this information, the model is modified to fit hitherto unaccounted-for constraints of both the environment and the instructor and is thus grounded in observations of real interactions between tutors and students.

Theoretical Orientation

Definition of the Tutorial Setting

In an attempt to delineate the phenomenon of interest in a more exact way, the concept of tutoring requires more detailed exploration. Basically, tutoring is a form of teaching in which the number of students taught is reduced to a single individual. This simple definition belies the complexity that the concept actually connotes. Many terms and adjectives have been used to add precision to the array of tutorial situations that exist (e.g., structured, peer, and personal tutoring; independent study; proctoring; the learning cell; and coaching). This variety stems from variations in the degree of improvisation used by the tutor, the characteristics of the tutor and the tutee, and the context within which a tutorial occurs.

In the previous discussion, the type of tutoring studied here (to be referred to as remedial tutoring) has only been alluded to in a general way. The remedial tutorial type lies closest to the peer-proctor model. However, it also differs in some important ways. Hence, it is necessary to provide a more precise description which will also help to define the generalizability of the model. The remedial type of tutoring is characterized in terms of the situational context and the tutor's role (later use of the terms "tutoring," "tutorial," and "tutor" refers to only the remedial type of tutoring depicted below, unless noted otherwise).

Context. The course, within which a remedial tutorial is organized, is segmented into smaller units with objectives specified for each. The mode of original instruction is usually a combination of learning activities which might include reading texts and study guides, listening to audio tapes, viewing slides and films, and performing experiments and demonstrations. These activities occur in a self-paced "laboratory" or learning center. Lectures and recitations are usually motivational supplements, but not always (e.g., Mastery Learning). In many basic undergraduate courses that are so arranged, a tutor, present in the learning center, provides assistance through personal contact on a first-come, first-served basis.

It should be noted that for our purposes here, the similarities between particular individualized systems have been heightened and the differences slighted. More accurate definitions of specific individualized orientations can be found in Block (1974, Mastery Learning), Bloom (1968, Mastery Learning), Goldschmid and Goldschmid (1973, Modular Instruction), Keller (1968, Keller Plan), and Postlethwait, Novak, and Murray, Jr. (1969, Audio-Tutorial).

Tutor role. To enhance the description being presented here, it would be useful to briefly examine the Mann and others (1970) teacher-role typology. Mann et al.
(1970) have suggested the following categories to describe the college teacher's role: the teacher as expert, the teacher as formal authority, the teacher as socializing agent, the teacher as facilitator, the teacher as ego ideal, and the teacher as person. The principal role of the tutor under investigation here is that of the expert. However, these roles do overlap and combine in many ways to reflect the intricate relationships in a learning environment.

There are two reasons for the emphasis of one role element over all the others. First, this simplification of the complexity of the tutor's role will allow for more concentrated study. Second, the teacher-as-expert role is highly visible in many tutorial situations. The typical cognitive goals (the transmission of information to assist students in learning concepts and procedures), the limited interaction time, and the student self-selection norm of tutorials seem to reflect this role aspect. In some approaches, other conceptions may be more highly valued (teacher as person), more evident (teacher as authority, as in the proctor model of the Keller Plan), or more fully integrated with the expert role, but these elements will remain subordinate here. Even Mann et al. (1970) suggest that "in some educational environments the teacher as expert is by far the most legitimate part of the teacher role" (p. 13).

The tutors who are the focus of inquiry can be described as advanced undergraduates or graduate teaching assistants who possess subject matter expertise and differential status due to their defined role. Upon request, they function extemporaneously to help remediate misconceptions and defective academic performance after the student has been exposed to the primary form of instruction. Thus, they can be considered a secondary defense against misunderstanding and ambiguity.

Theoretical Conception of the Tutor

Gage (1963) recognized the fact that the concept of teaching is a deceptive generic term very much in need of analysis. So it is with tutoring. In meeting that need, the research approach taken here is guided by the following proposition: the instructor in a tutorial setting is conceived of as a clinical information-processor.

Tutor as clinician. The instructor's role as clinician parallels the medical meaning of an expert practitioner, connected with a clinic, who informally and artistically observes patients, reaches diagnostic decisions, and provides treatment. However, there are several important distinctions between the tutorial process and the medical metaphor adapted here. These will be explicated in Chapter II. For now, it will be assumed that tutoring, defined as rendering diagnosis of student learning difficulties and providing remediation, captures the essence of clinical judgment. Although the medical jargon seems a bit cumbersome in the tutorial setting, others have taken a similar view (Ellson, 1976; Shulman & Elstein, 1975).

Tutor as information-processor. It seems appropriate at this point to explore directly a critical distinction that has, as yet, only crept into the dialogue. The tutor's diagnostic judgments and decisions can be analyzed at two levels: the behavioral and the cognitive. Behavioral analysis is more common and considers the actual observed performances of the individual.
The cognitive domain emphasizes inferred, symbolic, and internally organized mental structures (memory) and psychological activities (also to be referred to as strategies or procedures) which are part of and act on the information stored in memory. Rather than view the instructor as a "black box" and limit the analysis of tutor performance to incoming stimuli and behavioral responses, the research reported here follows the suggestions of Bessemer and Smith (1972), Glaser (1976), and Greeno (1976) and begins to refine our understanding of tutor performance by examining the psychological processes and structures that are necessary to produce such behavior.

A theoretical framework that appears most useful for this examination is Newell, Shaw, and Simon's (1958; Simon & Newell, 1971) theory of human problem solving. An essential premise of this theory is that human behavior is uniquely determined by an interaction between the nature of the task for which the behavior evolved (to be referred to as the task environment) and hypothesized information-processing capabilities of the problem-solver. Briefly, these capabilities refer to a control system that links sensory awareness with motor activities and is composed of mental "programs" (analogous to computer routines and themselves composed of more primitive operations or "steps") held in definitive structures (e.g., memory) residing within the human brain. Recasting the tutor in these terms permits appropriate attention both to the structure of the tutorial task environment and to the manner in which the tutor collects, selects, and processes information when solving clinical problems. That human cognitive processes can be isolated, identified, and employed in explaining observed behavior is a fundamental assumption. Concepts derived from an information-processing theory of human problem solving will be used as analytical tools in thinking about the process of tutoring. This study supports the assertion that educational research should follow the lead our medical colleagues have taken in investigating the practitioner's clinical reasoning (Shulman, 1974; NIE, Report No. 6, 1975; Shulman & Elstein, 1975).

Objectives

Describing the task structure, behavioral performance, and intellectual processing of tutors as they develop diagnoses of what the student comprehends and misconceives are the focal points of this research. For the purposes of developing a more magnified view, the treatment component of tutoring is only examined in its relationship to diagnosis. The prime interest in the tutor's interpretation of the student's state of knowledge stems from the belief that diagnosis is a chief determinant of the specific treatment decisions made by the tutor.

Having established a basis for the proposed investigation, a more exact statement of objectives is presented hierarchically, in the form of questions rather than research hypotheses, because of the descriptive and exploratory nature of this research.

1. How do tutors conceptualize the task of assisting students who seek help because of a learning deficiency?
   a. Do tutors generate interpretations of the student's knowledge state with respect to what the student (1) doesn't know or understand? (Do tutors diagnose?) (2) does know or understand?
   b. Are there different levels (in terms of specificity) to these interpretations?
   c. How can these interpretations be represented?
2. How is environmental information dealt with?
   a. What information is attended to by tutors and why?
   b. What information is actively solicited by tutors and why?
3. What are the rules used by tutors that help them
   a. to collect information relevant to the tutorial task?
   b. to develop interpretations of the student's knowledge state?
   c. to combine information into an interpretation of the student's knowledge state?
   d. to evaluate those interpretations?
What information is actively solicited by tutors and why? What are the rules used by tutors that help them to collect information relevant to the tutorial task? to develop interpretations of the student's knowledge state? to combine information into an interpretation of the student's knowledge state? to evaluate those interpretations? 13 4. How accurate are tutors in making interpretations of the student's knowledge state? Overview of the Study Accepting the perspective of an information-processing psy- chology presumes the adoption of a methodology appropriate to that theory and useful for the purposes stated above. After reviewing the common elements of such a methodological approach, a rationale is given for the generic methods of this inquiry. A synopsis of specific procedures will follow. General Methodology The predominant features of "process-tracing" inquiries (those studies concerned with tracing the mental processes of human beings as they perform a task) can be outlined as follows: 1. The experimental setting usually simulates the natural task environment. 2. Protocols of what the participant said or did while per- forming the task are collected. 3. Verbal introspective reports by the participant are usually collected either during or after task performance and are part of the protocols. 4. The protocols are coded and interpreted. 5. Finally, these analyses are transformed into models of human performance. The specific techniques that characterize this genre of research include Total Task Simulation, In-Basket, 20 Questions, and 14 Tab Item. These formats recently have been reviewed and classified (according to the degree of fidelity between the research setting and the real task environment) elsewhere (Shulman & Elstein, 1975) and are only cited here to justify the method of choice. Rationale A high fidelity, total task, case study is the format chosen for the research presented here. More specifically, two tutors are observed as they deal with several different diagnostic cases in an actual tutorial environment. To support this decision, several argu- ments can be raised. Albeit that a naturalistic study lacks the flexibility, control, and reliability of other techniques, it does preserve the integrity of the natural setting. However, the main reason for this choice stems from the lack of knowledge of the tutor- ing process. The design of valid lower fidelity techniques requires an accurate assessment of the task environment, but this evaluation is precisely an objective of the present study. A naturalistic study may eliminate enough unknowns (how the tutor Operates, what the cues and diagnoses are, and whether the cue-diagnosis conception maintains any validity at all) to enable the future development of more objec- tive procedures. The rationale for a case study can likewise be succinctly asserted. The apparently small sample size is misleading in a number of ways. First, the unit of analysis is not simply the tutor. Four task groups (specific learning tasks that the student fails to accomp- 1ish and seeks help with) are crossed with each tutor. An attempt to 15 capture three replications of each task-tutor is made so that the variability within each task-tutor, between tasks, and between tutors can be considered during theory development. Considering the rich- ness of the data, the analysis of each protocol is extremely time consuming and the benefits to be gained from adding additional tutors does not seem to outweigh the costs in data collection and analysis. 
Also, it is important to remember that the prime goal of this research is the empirical validation of a model and not universal generalizability. Now that the results are forthcoming and the developed procedures demonstrated, the model might be tested in other settings and with other disciplines, tasks, and tutors.

Synopsis of Specific Procedures

Before one can analyze the cognitive processes and behaviors of tutors, a particular setting must be stipulated and explicitly defined. A division of procedures reflects this necessity. Phase I is concerned with the specification and characterization of an academic course and its content, and Phase II utilizes those definitions to explore the tutor's diagnostic and problem-solving strategies.

Phase I. After a particular academic course and a specific set of units were chosen, based upon prespecified criteria, the experimenter tutored in the course over the selected units to gain familiarity with the course structure, the content, the tasks of difficulty, and the students' deficiencies. Because a significant portion of the tutor's task environment can be characterized in terms of the subject matter to be learned, several types of study or exam questions (learning tasks), chosen from the selected course units, were designated for more detailed microanalysis. This analysis yielded a representation of the underlying knowledge required to answer each question type. These representations constituted a preliminary description of the way in which a tutor represents the task environment and were used in the next phase.

Phase II. Two highly regarded, experienced tutors were chosen for observation of their tutoring behaviors. Four types of data were recorded on audiotape: (1) a pre-tutorial interview with each tutor to discover the nature of the tutor's preparation for assisting students, (2) natural interactions between the selected tutors and students whose deficiencies involved one of the prespecified question types, (3) a post-tutorial interview of the students to determine their knowledge state both before and after the previous tutorial, and (4) a stimulated recall session in which the tutors, some time after the interaction, listened to the tutorial and attempted to verbally reproduce their own thoughts as they occurred during the interaction. These taping sessions occurred only during the period in which the selected units were presented to the students.

These protocols were coded and interpreted according to categories suggested by a tutorial diagnostic model (developed in Chapter II). These interpretations were then transformed into flow chart models of the tutor's behavior and psychological processing. Comparisons were made across interactions, question types, and tutors. Also, the experimenter's interpretation of the student's knowledge state before tutoring was compared with that of the tutor.

Limitations

There is little doubt that this study contains some obvious constraints. By promulgating these, it is hoped that the explicit awareness gained will signal a need for greater effort toward their amelioration. The nature of the limitations is both theoretical and procedural, but mostly of the latter type.

Theoretical Restrictions

An important constraint of this study is its narrowed scope; it seeks to develop a model of only one aspect of tutoring, content diagnosis.
The importance assigned to the tutor's expert role and to an information-processing theory also restricts the slice of reality that is attended to and the types of questions that are posed. The affective domain, although not completely ignored, is not of significant focus here. However, it is believed that this limited breadth is balanced by an expanded depth and that it is better to produce a restricted model which is validated than a more encompassing one which is not.

Procedural Restrictions

Methodological limitations involve the equipment used, the objectivity of the procedures, and the task validity of the design. The use of audiotape limits the analysis, in the main, to verbal information. However, some visual data were collected by using observation notes. The loss of visual stimuli may also hamper stimulated recall somewhat, but this problem can be minimized if the time lapse between the tutorial and the recall session is kept short. The disadvantages of videotape (increased costs of equipment and manpower requirements and the considerable intervention in the natural setting) seem to surpass the expected gains.

Since few well-designed procedures are available, the introspective process and protocol interpretation contain several potential difficulties; there may be a reactive effect of the stimulated recall session on the subsequent tutorials (Neisser, 1968), a blending of the tutor's introspection with a post-hoc analysis of the interaction (Elstein, Shulman, & Sprafka, 1976), or a high degree of subjectivity in the analysis of the protocols (Shulman & Elstein, 1975). The careful construction of rigorous procedures and the recording of any anomalies are the measures that were taken to counteract these difficulties. A final limitation exists in what Shulman and Elstein (1975) have referred to as the universal representativeness dimension of task validity. The selected question types represent only a handful of those which serve as stimuli for tutorial remediations. Therefore, this restricts, somewhat, the generalizability of the model.

Implications

The consequences of the approach outlined above are both theoretical and practical. First and foremost, this research yields a deeper conception of tutoring by generating a model of how college chemistry tutors select and combine information in judging the learner's state of knowledge. As an example of natural human judgment, this approach provides a further concrete base from which to evaluate and generalize theories of information-processing and diagnostic reasoning. The refinement and improvement of introspective techniques and protocol analysis are additional results.

Several future, practical by-products of this investigation can be envisioned. Making the normally intuitive judgments of tutors more formal and explicit can assist in the development of:

1. competency goals and instructional techniques for tutor training and for educating students in self-diagnosis, helping them to become independent learners;
2. process (Hammond, 1971) as well as outcome feedback to be provided to tutors to help them to integrate information when making diagnostic judgments; and
3. computer-assisted tutorials for diagnosing learning deficiencies and supplying correctives.
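To make the third by-product concrete, the sketch below illustrates one way the cue x hypothesis bookkeeping described in this study (cue acquisition, cue interpretation, hypothesis generation, and hypothesis evaluation) might be encoded in a simple computer-assisted diagnostic routine. The dissertation itself contains no program code; the class names, the +1/-1 scoring scheme, and the ideal gas example are hypothetical illustrations only, not the author's procedures.

```python
# A minimal, hypothetical sketch (not from the dissertation) of how the cue x
# hypothesis bookkeeping described in this study might be encoded in a
# computer-assisted diagnostic tutorial. All names and the scoring scheme are
# illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class Hypothesis:
    """A tutor's tentative estimate of a specific student deficiency."""
    description: str
    evidence: list = field(default_factory=list)   # (cue_index, +1 or -1) pairs

    def status(self) -> str:
        if not self.evidence:
            return "unevaluated"                    # no cue interpreted against it yet
        score = sum(sign for _, sign in self.evidence)
        return "accepted" if score > 0 else "rejected"


@dataclass
class CueHypothesisMatrix:
    """Rows are acquired cues, columns are generated hypotheses."""
    cues: list = field(default_factory=list)
    hypotheses: list = field(default_factory=list)

    def generate_hypothesis(self, description: str) -> Hypothesis:
        h = Hypothesis(description)
        self.hypotheses.append(h)
        return h

    def acquire_cue(self, cue: str, elicited: bool) -> int:
        """Store a cue, noting whether it was tutor elicited or student volunteered."""
        self.cues.append((cue, "elicited" if elicited else "volunteered"))
        return len(self.cues) - 1

    def interpret_cue(self, cue_index: int, hypothesis: Hypothesis, supports: bool) -> None:
        """Record that the cue at cue_index supports or refutes a hypothesis."""
        hypothesis.evidence.append((cue_index, 1 if supports else -1))


# Toy ideal-gas interaction: the student's computed volume is off by a constant factor.
matrix = CueHypothesisMatrix()
h1 = matrix.generate_hypothesis("substitutes Celsius for Kelvin in PV = nRT")
i = matrix.acquire_cue("student's computed volume is too small", elicited=False)
matrix.interpret_cue(i, h1, supports=True)
print(h1.status())   # -> accepted
```

The point of the sketch is only that the diagnostic constructs map naturally onto explicit data structures; an operational tutoring system would also require a much richer knowledge base representation of the course content.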
The procedures developed in this study for mapping out the knowledge state that a course was designed to develop can be applied toward the practical goals of determining the adequacy of instruction and developing cognitive objectives (Greeno, 1976) and student materials. Additionally, this inquiry demonstrates the value and adaptability of nonexperimental, inferential, and observational methods for research on teaching in higher education.

Overview

The following agenda, to which the remainder of this thesis will conform, is offered as a concise organizer of what is to come. Selected literature germane to the tutoring process in general, to a diagnostic model in particular, and to the general methodology of this study is reviewed in the next chapter. A specific model of the diagnostic tutorial process, derived from a synthesis of several lines of inquiry, also appears in Chapter II. In the third chapter, the complete design of the study, including the procedures for data collection and analysis, is described. Chapter IV contains samples of the basic data and a summary of the substantive and methodological findings organized within six sections.

Having established the need and purpose for this research, a review of the various literatures that bear relevance to the stated problem is presented next.

CHAPTER II

SELECTED LITERATURE REVIEW

Introduction

The review of literature is segmented into five major parts: (1) an overview of tutoring, (2) an intuitive analysis of remedial tutoring, (3) a theoretical approach to clinical reasoning, (4) the development of a diagnostic model of remedial tutoring, and (5) a methodological review. The first segment begins with a general critique of tutoring research and theory, focusing primarily on those types of tutoring involving remediation. The second section deals with a description of a college tutorial, organized as a basic component of an individualized-systems approach to instruction. In the third part, this description is linked to the research traditions of human problem solving, judgment, and decision making. After a brief comparison of these alternative schools of thought, the selection of an information-processing theory to undergird this research is explained. Under this subsuming theory, the salient features of a formal model of medical inquiry are described and related to the clinical information processing of a tutor, noting the model's metaphorical properties and limitations. A synthesis of the preceding theoretical and empirical findings, presented as a diagnostic model of remedial tutoring, is developed in the fourth section. Finally, this chapter ends with a review of literature relating to the methodology of this study.

Tutoring: An Overview

In reviewing the literature on tutoring, it becomes immediately apparent that the definition of tutoring research is not very cohesive. Schisms in the literature occur because there are several conceptions of tutoring to study and because there are two major research approaches to follow. The conceptions of tutoring that are currently of research focus are discussed below. Also, a comparison is made between an experimental and an analytic-synthetic approach to tutoring research. As an example of the former approach, research on proctoring is critiqued, because the Keller Plan context for that research lies closest to the instructional situation of the present study. This section ends with a summary of two theoretical perspectives of tutoring (cognitive and behavioral).
Those studies that are similar to or suggest implications for the research reported here will be analyzed in depth. Tangential literature will be reviewed by citing summary articles only.

Conceptions of Tutoring

As was suggested in Chapter I, the definition of tutoring as one-to-one instruction is a rudimentary idea which has been elaborated upon. The following types of tutoring are discussed here: peer tutoring, peer teaching, proctoring, remedial tutoring, structured tutoring, and programmed tutoring.

To demonstrate how convoluted the tutorial literature can be, one need only scrutinize the concept of peer teaching in higher education. Peer teaching is usually associated with peer tutoring, with some important distinctions. Peer tutoring (the tutor and learner are relatively close in age, status, and expertise) is on the rise at all levels of education and is consequently drawing the attention of education researchers (Allen, Feldman, & Devin-Sheehan, 1976; Goldschmid & Goldschmid, 1976). In contrast, in peer teaching, the students consciously switch expert and learner roles, and it is possible for the interactions to involve more than two individuals (Schermerhorn, Goldschmid, & Shore, 1976).

The picture really becomes complicated when the proctor model in Keller Plan courses is cited as an example of peer teaching (Goldschmid & Goldschmid, 1976). Some forms of proctoring exemplify directed teacher-to-student communication in which little if any role-reversal occurs. It therefore seems more appropriate to consider it both peer tutoring and peer teaching.

Remedial tutoring, as defined in Chapter I, is very similar to the proctor model. The context for remedial tutoring may be a Keller Plan course or other individualized instructional format (e.g., Mastery Learning). The tutors possess subject matter expertise and differential status due to their assigned role. They function intuitively to help remediate student misconceptions of the original instruction.

Two other forms of tutoring, developed in recent years, have been derived from principles of instructional psychology. Harrison (1972) has described structured tutoring as a managed system of tutors, trained in validated skills, who use instructional materials developed in accordance with Gagnéan principles. A major part of one issue of Improving Human Performance Quarterly has been devoted to research on structured tutoring. In contrast, programmed tutoring is a form of programmed instruction in which the teaching activities are programmed in detail (Ellson, 1976). The other essential features of programmed tutoring include clearly specified and limited objectives and empirical formative testing.

Approaches of Research

The approaches of research on tutoring can be described as having one of two orientations. The majority of research studies are designed from an experimental perspective. This type of research tends to ask two types of questions: questions of effectiveness (how effective was the tutorial or tutor training program) and questions of significant variables (what variables in the tutorial situation affect which tutoring outcomes). The methodology used is typically an experimental or quasi-experimental control-group design.

The second orientation is called the analytic-synthetic approach (Collins, Warnock, & Passafiume, 1975). The systematic analysis of tutorial performance combined with the synthesis of models of performance characterizes the methodology of this approach. The methods are designed to help answer more theoretical questions (what is tutoring and how is it accomplished). Shortly, this type of research will be explored in some depth because its goals and methods are exemplified by the present study.

Experimental approach. Recently several reviews of tutoring research have appeared in the literature (Allen et al., 1976; Ellson, 1976; Trent & Cohen, 1976). Their conclusions can be summarized as follows:

1. Many studies of tutoring have shown no significant gains in achievement or affective outcomes.
2. Program structure, rather than individual attention, may be an important variable in successful programs.
3. Very few broad generalizations can be made due to a very haphazard research effort.
4. Very few studies have raised issues grounded in theory.

The study of proctoring within Keller Plan courses is an example of this research type in higher education. Since the instructional context for the present study is a modified Keller Plan, this domain of research deserves special attention. A critique of proctoring research follows.

Research on proctoring. A critical feature of the Keller Plan is the use of proctors to mark and review quizzes, tutor students and remediate deficiencies, and provide personal contact through social interaction (Robin, 1976). It is the tutorial role that is of most interest here, and it tends to be highly structured (Lazar, Soares, Goncz, & Terman, 1977; Robin, 1977). A basic goal of this research has been to identify effective proctoring behaviors and variables. There are usually two, sometimes three, phases during a tutorial: (1) determining a student's baseline and area of difficulty, (2) prescribing appropriate remedial procedures and directly assisting the student, and (3) testing for mastery (Lazar et al., 1977; Robin, 1977).

Most of the tutorial skills (providing prompt feedback, reinforcement, etc.) which are believed to be effective and incorporated in training programs were derived from behavioral principles of psychology. Robin (1977) and Kozma, Kulik, and Smith (1977) conclude that successful techniques and procedures for training these skills have been identified. However, these authors suggest the need for additional validating studies that use achievement on course objectives as a criterion of effective proctor training.

Although great strides have been made in training proctors to use effective behaviors, there is a need to develop and test programs which train proctors to use effective strategies of decision-making and problem-solving. For example, Lazar et al. (1977) suggest that a behavioral component of determining the student's baseline is probing for the student's mastery of prerequisites. Several questions arise immediately: which prerequisites are probed, why, how, in what order, and how does the proctor interpret student responses in order to make decisions about the next teaching act? In contrast to this approach, the analytic-synthetic approach has just this focus.

Analytic-synthetic approach. Allan Collins and his colleagues have been pursuing a line of research on tutoring that is directly akin to this study (Brown & Burton, 1975, 1978; Collins, 1977; Collins, Warnock, Aiello, & Miller, 1975; Collins, Warnock, & Passafiume, 1975; Stevens & Collins, 1977). Using theoretical constructs and analytical techniques of cognitive science, they have developed representations of subject matter knowledge and constructed formal models of successful teaching strategies.

Brown and Burton (1975, 1978) have formally represented knowledge of electronics, addition, and subtraction; Collins (1977) and Collins, Warnock, and Passafiume (1975) have developed representations of geography. These representations, when programmed on a computer and combined with formal tutorial strategies, have generated several computer-assisted instructional systems that can tutor students.

In one particular study (Collins, Warnock, & Passafiume, 1975), actual tutorial dialogues on South American geography were recorded and analyzed to uncover the strategies that tutors use in interacting with students. The strategies were programmed on a computer, and comparisons were made between the human-led and computer-led dialogues.

Although some of the discovered tutor strategies are not relevant to remedial tutoring, the study does contain several important implications. The tutoring task under analysis was different from that in remedial tutoring in that it was the tutor and not the student who selected the topics of discussion. Hence, the strategies dealing with topic selection and presentation have limited applicability to the setting under investigation here. However, the error correction strategies have an important significance. Collins, Warnock, and Passafiume (1975) hypothesized that "questioning the student to determine the underlying misconceptions" (p. 72) would be the tutor's response to student error. They found no support for their hypothesis, and they explained this by suggesting that the errors were all obvious ones. Their interpretation appears plausible because paired-associate and discrimination learning formed the basis for the tutorial dialogues (naming the political and geographical features on a map). It seems that a tutor's error correction strategy may interact with the nature of the student's learning task, so that more difficult tasks may lead to misconceptions that are only made conspicuous by specific tutorial strategies. Therefore, the present study, following Collins, Warnock, and Passafiume's (1975) suggestion, involved observations of tutor strategies with more complex types of learning tasks. The analytic-synthetic approach was also adopted here.

There is one weakness in the Collins, Warnock, and Passafiume (1975) investigation that the present study sought to overcome. Good teachers were selected for the project based upon the authors' judgment. An attempt was made in this study to obtain some external indicators of tutor effectiveness.

Theories of Tutoring

A cognitive perspective. In pursuing this same line of analytic-synthetic research, Collins (1977) has developed a theory of Socratic tutoring. This type of tutoring involves helping the student to learn specific information about cases, to derive general principles from cases, and to develop reasoning skills. The tutoring strategy has been formalized as a set of twenty-four decision rules, each consisting of a condition and an action (e.g., "If in situation X, do Y"). They are expressed at a rather general level, independent of any particular subject matter. Resnick (1977), in critically analyzing Collins' (1977) theory, felt that it may have some descriptive power, but cannot function as prescriptive theory until the choices between tutorial acts for a particular set of conditions are specified by some set of higher-level decision rules or goals.

Stevens and Collins (1977) have attempted to fill this void by developing a theory of the goal structure of a tutor. The Goal-Structure Theory was based upon tutors' comments on their own tutorial dialogues as they proceeded. The comments involved: (1) what the tutors thought the student knew or didn't know based on the student's response, and (2) why they responded to the student in the way they did. Unfortunately, it is difficult to evaluate the influence of this methodology on theory development, since it was only described very generally.

The Stevens and Collins theory contains two levels of goals. The top-level goals of a Socratic tutor are essentially (1) to develop the student's understanding of the causal relationships among factors for a specific case, and (2) to develop the student's ability to apply the causal model learned to new cases. In order to achieve these goals, Stevens and Collins (1977) have postulated two types of subgoals which govern the selection of tutorial strategies: (1) diagnosis and (2) correction. The authors describe diagnosis as follows:

The purpose of diagnosis is to discover differences (either errors or omissions) between the student's knowledge and the tutor's knowledge. This generally requires that the tutor probe the student by asking for relevant factors, by requiring the student to make predictions about carefully selected cases, and by trying to entrap the student into making incorrect predictions. It is clear from our analysis of human dialogues that diagnosis cannot be characterized in terms of a simple mapping between student's errors and conceptual bugs. Rather the process involves sophisticated use of a student model and knowledge about common bugs in order to simulate the student's reasoning processes and pinpoint the underlying conceptual errors or missing information. In some cases, a single answer may reveal a whole set of bugs, while in other cases, the tutor must carefully probe the student, testing alternative hypothesized bugs to reveal the misconception (Stevens & Collins, 1977, p. 10).

Although the top-level goals of Socratic tutoring tend to distinguish it from remedial tutoring as defined in Chapter I, it will soon be demonstrated that these notions of diagnosis and correction have broader applicability than the top-level goals. In fact, this study provides additional evidence which suggests the existence of these subgoals. These tutorial subgoals seem to complement Ellson's (1976) more behavioral conceptions of tutoring.

A behavioral perspective. In a recent review article on tutoring, Ellson (1976) has presented a theory of tutoring which has several significant aspects. It is, first, a serious attempt at theory building in an area of teaching that contains few theories. Second, it is supported by the empirical research on programmed tutoring. Last, there are several features of this model which are particularly relevant to the present inquiry.

Ellson's (1976) theory emphasized student failure because he believes that most tutoring is remedial in nature. His theory is also psychologically grounded in Gagné's (1970) theory of the learning hierarchy (the decomposition of a task into simpler capabilities, which are themselves analyzed until a set of ordered prerequisites exists, from very basic types of learning to more complex types). A basic belief for both Gagné (1970) and Ellson (1976) is that the failure to perform a complex task stems from either an unlearned or mislearned prerequisite skill. This assumption underlies Ellson's (1976) proposal of a four-step model, useful in determining what is to be taught.
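To make the condition-action format concrete, the sketch below shows one way tutorial decision rules of the kind Collins (1977) formalizes could be expressed as executable predicates over an interaction state. The two rules and the state fields are hypothetical illustrations of the form only; they are not members of Collins' actual set of twenty-four rules.

```python
# Illustrative sketch of the condition-action ("If in situation X, do Y") form in
# which Collins (1977) expresses tutorial decision rules. The two rules below are
# invented examples of that form, not rules drawn from Collins' published set.

def rule_probe_prerequisite(state):
    """If the student's last answer is wrong and no deficiency hypothesis is active,
    probe a prerequisite skill."""
    if state["last_answer_correct"] is False and not state["active_hypotheses"]:
        return "ask the student to perform a prerequisite step"
    return None


def rule_confirm_mastery(state):
    """If every active hypothesis has been rejected, ask the student to rework the
    original task to confirm mastery."""
    hypotheses = state["active_hypotheses"]
    if hypotheses and all(h["rejected"] for h in hypotheses):
        return "ask the student to rework the original problem"
    return None


RULES = [rule_probe_prerequisite, rule_confirm_mastery]


def next_tutorial_act(state):
    """Fire the first rule whose condition matches the current interaction state."""
    for rule in RULES:
        action = rule(state)
        if action:
            return action
    return "continue the current line of questioning"


# Toy interaction state: the student just answered incorrectly, no hypotheses yet.
state = {"last_answer_correct": False, "active_hypotheses": []}
print(next_tutorial_act(state))   # -> ask the student to perform a prerequisite step
```

Resnick's (1977) criticism can be read directly against such a sketch: the rule list alone does not say which rule should fire when several conditions hold at once, so a prescriptive theory still requires higher-level goals, of the kind Stevens and Collins (1977) propose, to order or select among the rules.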
The methods are designed to help answer more theoretical questions (what is tutoring and how is it accomplished). Shortly, this type of 25 research will be explored in some depth because its goals and methods are exemplified by the present study. Experimental approach. Recently several reviews of tutoring research have appeared in the literature (Allen et al., 1976; Ellson, 1976; Trent & Cohen, 1976). Their conclusions can be summarized as follows: 1. Many studies of tutoring have shown no significant gains in achievement or affective outcomes. 2. Program structure rather than individual attention may be an important variable in successful programs. 3. Very few broad generalizations can be made due to a very haphazard research effort. 4. Very few studies have raised issues grounded in theory. The study of proctoring within Keller Plan courses is an example of this research type in higher education. Since the instruc- tional context for the present study is a modified Keller Plan, this domain of research deserves special attention. A critique of proctor- ing research follows. Research on proctoring. A critical feature of the Keller Plan is the use of proctors to mark and review quizzes, tutor students and remediate deficiencies, and provide personal contact through social interaction (Robin, 1976). It is the tutorial role that is of most interest here and tends to be highly structured (Lazar, Soares, Goncz, & Terman, 1977; Robin, 1977). A basic goal of this research has been to identify effective proctoring behaviors and variables. There are usually two, sometimes three phases, during a tutorial: (1) determining 26 a student's baseline and area of difficulty, (2) prescribing approp- riate remedial procedures and directly assisting the student, and (3) testing for mastery (Lazar et al., 1977; Robin, 1977). Most of the tutorial skills (providing prompt feedback, rein- forcement, etc.) which are believed to be effective and incorporated in training programs were derived from behavioral principles of psy— chology. Robin (1977) and Kozma, Kulik, and Smith (1977) conclude that successful techniques and procedures for training these skills have been identified. However, these authors suggest the need for additional validating studies that use achievement on course objec- tives as a criterion of effective proctor training. Although great strides have been made in training proctors to use effective behaviors, there is a need to develop and test pro- grams which train proctors to use effective strategies of decision- making and problem-solving. For example, Lazar et a1. (1977) suggest that a behavioral component of determining the student's baseline is probing for the student's mastery of prerequisites. Several questions arise immediately: which prerequisites are probed, why, how, in what order, and how does the proctor interpret student responses in order to make decisions about the next teaching act. In contrast to this approach, the analytic-synthetic approach has just this focus. Analytic-sypthetic approach. Allan Collins and his colleagues have been pursuing a line of research on tutoring that is directly akin to this study (Brown & Burton, 1975, 1978; Collins, 1977; Collins, Warnock, Aiello, & Miller, 1975; Collins, Warnock, & Passafiume, 1975; Stevens & Collins, 1977). Using theoretical 27 constructs and analytical techniques of cognitive science, they have developed representations of subject matter knowledge and constructed formal models of successful teaching strategies. 
Brown and Burton (1975, 1978) have formally represented knowl- edge of electronics, addition, and subtraction; Collins (1977) and Collins, Warnock, and Passafiume (1975) have developed representations of geography. These representations, when programmed on a computer and combined with formal tutorial strategies, have generated several computer-assisted instructional systems that can tutor students. In one particular study (Collins, Warnock, and Passafiume, 1975), actual tutorial dialogues on South American geography were recorded and analyzed to uncover the strategies that tutors use in interacting with students. The strategies were programmed on a com- puter and comparisons were made between the human-led and computer- led dialogues. Although some of the discovered tutor strategies are not relevant to remedial tutoring, the study does contain several impor- tant implications. The tutoring task under analysis was different from that in remedial tutoring in that it was the tutor and not the student who selected the topics of discussion. Hence, the strategies dealing with topic selection and presentation have limited applica- bility to the setting under investigation here. However, the error correction strategies have an important significance. Collins, Warnock, and Passafiume (1975) hypothesized that "questioning the student to determine the underlying misconceptions" (p. 72) would be the tutor's response to student error. They found no support for 28 their hypothesis and they explained this by suggesting that the errors were all obvious ones. Their interpretation appears plausible because paired-associate and discrimination learning formed the basis for the tutorial dialogues (naming the political and geographical features on a map). It seems that a tutor's error correction strategy may inter- act with the nature of the student's learning task so that more dif- ficult tasks may lead to misconceptions that are only made conspicuous by specific tutorial strategies. Therefore, the present study, fol- lowing Collins, Warnock, and Passafiume's (1975) suggestion, involved observations of tutor strategies with more complex types of learning tasks. The analytic-synthetic approach was also adopted here. There is one weakness in the Collins, Warnock, and Passafiume (1975) investigation that the present study sought to overcome. Good teachers were selected for the project based upon the author's judg- ment. An attempt was made in this study to obtain some external indicators of tutor effectiveness. Theories of Tutoring A cognitive perspective. In pursuing this same line of analytic-synthetic research, Collins (1977) has develOped a theory of Socratic tutoring. This type of tutoring involves helping the student to learn specific information about cases, to derive general prin- ciples from cases, and to develop reasoning skills. The tutoring strategy has been formalized as a set of twenty-four decision rules, each consisting of a condition and an action (e.g., "If in situation X, do Y"). They are expressed at a rather general level, independent 29 of any particular subject matter. Resnick (1977), in critically analyzing Collins' (1977) theory, felt that it may have some descrip- tive power, but can't function as prescriptive theory until the choices between tutorial acts for a particular set of conditions are specified by some set of higher-level decision rules or goals. Stevens and Collins (1977) have attempted to fill this void by developing a theory of the goal structure of a tutor. 
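Before turning to that goal-structure work, the form of such condition-action rules can be made concrete. The sketch below, in Python, shows one minimal way rules of the kind Collins (1977) describes might be encoded; the specific rule contents and the next_action helper are hypothetical illustrations introduced here, not Collins' actual rule set.

# Each rule pairs a condition (a predicate over the dialogue state) with an
# action (the tutor's next move). The rules below are invented examples of the
# "If in situation X, do Y" form; they are not the twenty-four rules themselves.
SOCRATIC_RULES = [
    (lambda state: state.get("last_answer") == "incorrect",
     "pose a counterexample case and ask for a new prediction"),
    (lambda state: state.get("factor_named") is True,
     "ask whether the named factor is necessary or merely sufficient"),
    (lambda state: True,  # default rule when nothing more specific applies
     "ask the student to state the general principle in his or her own words"),
]

def next_action(state):
    """Return the action of the first rule whose condition holds for this state."""
    for condition, action in SOCRATIC_RULES:
        if condition(state):
            return action

# Example: the student has just made an incorrect prediction.
print(next_action({"last_answer": "incorrect"}))
# -> pose a counterexample case and ask for a new prediction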
The Goal- Structure Theory was based upon tutors' comments of their own tutorial dialogues as they proceeded. The comments involved: (1) what the tutors thought the student knew or didn't know based on the student's response, and (2) why they responded to the student in the way they did. Unfortunately, it is difficult to evaluate the influence of this methodology on theory development, since it was only described very generally. The Stevens and Collins theory contains two levels of goals. The top-level goals of a Socratic tutor are essentially (l) to develop the student's understanding of the causal relationships among factors for a specific case, and (2) to develop the student's ability to apply the causal model learned to new cases. In order to achieve these goals, Stevens and Collins (1977) have postulated two types of sub- goals which govern the selection of tutorial strategies: (1) diagnosis and (2) correction. The authors describe diagnosis as the following: The purpose of diagnosis is to discover differences (either errors or omissions) between the student's knowledge and the tutor's knowledge. This generally requires that the tutor probe the student by asking for relevant factors, by requiring the student to make predictions about carefully selected cases, and 30 by trying to entrap the student into making incorrect predic- tions. It is clear from our analysis of human dialogues that diagnosis cannot be characterized in terms of a simple mapping between student's errors and conceptual bugs. Rather the pro- cess involves sophisticated use of a student model and knowledge about common bugs in order to simulate the student's reasoning processes and pinpoint the underlying conceptual errors or missing information. In some cases, a single answer may reveal a whole set of bugs, while in other cases, the tutor must care- fully probe the student, testing alternative hypothesized bugs to reveal the misconception (Stevens & Collins, 1977, p. 10). Although top-level goals of Socratic tutoring tend to dis- tinguish it from the remedial tutoring as defined in Chapter I, it will soon be demonstrated that these notions of diagnosis and correc- tion have broader applicability than the top-level goals. In fact, this study provides additional evidence which suggests the existence of these subgoals. These tutorial subgoals seem to complement Ellson's (1976) more behavioral conceptions of tutoring. A behavioral_perspective. In a recent review article on tutor- ing, Ellson (1976) has presented a theory of tutoring which has several significant aspects. It is, first, a serious attempt at theory build- ing in an area of teaching that contains few theories. Second, it is supported by the empirical research on programmed tutoring. Last, there are several features of this model which are particularly rele- vant to the present inquiry. Ellson's (1976) theory emphasized student failure because he believes that most tutoring is remedial in nature. His theory is also psychologically grounded in Gagné's (1970) theory of learning hier- archy (the decomposition of a task into simpler capabilities, which are themselves analyzed until a set of ordered prerequisites exists from very basic types of learning to more complex types). A basic belief 31 for both Gagné (1970) and Ellson (1976) is that the failure to perform a complex task stems from either an unlearned or mislearned pre- requisite skill. This assumption underlies Ellson's (1976) proposal of a four-step model, useful in determining what is to be taught. 
The steps are: 1. Analysis of the subject matter to determine the hierarchy of prerequisites for each required task. 2. Diagnosis of the student's performance in terms of those prerequisites the student knows or doesn't know. 3. Prescription of what needs to be taught based upon the diagnosis made both prior to and during treatment. 4. Treatment is provided as a form of face-to-face teaching. Several practical problems in performing these processes were cited by Ellson (1976). The development of a complete learning hier- archy is a difficult and tedious job (particularly if one considers, for example, the terminal task of understanding chemistry). There are also two kinds of hierarchies that might develop from subject matter analysis that must be distinguished. One would represent the logical structure of the subject matter as organized by scholars and the other would represent an optimal learning hierarchy useful for teaching. The issue is that "good theory for the scholar may not be good peda- gogical theory" (Glaser, 1976, p. 18). A final problem involves diag- nosis. Ellson (1976) argues that the enormous number of elements in a hierarchy imposes a huge information-processing task upon the tutor during diagnosis. 32 Of the eight successful tutoring programs reviewed by Ellson (1976), six were examples of programmed or structured tutoring. Although Ellson's (1976) model maintains some validity for highly structured situations, it remains to be demonstrated with more improvi- sational tutoring. Some aspects of Ellson's (1976) model are reflected in the analysis of remedial tutoring, to be presented next. An Intuitive Analysis of Remedial Tutoripg The present inquiry focuses on the remedial conception of tutoring. In order to provide a common point of reference for later theory building, an intuitive analysis of remedial tutoring is pre- sented here. It is an attempt to explain what effective tutors do behaviorally and cognitively during a remedial tutorial session. Informal observation, personal experience, and a review of related literature form the basis for this intuitive analysis. In a tutorial situation, two types of problems are prominent, one for each party of the interaction. The problem for the learner is to understand the relationship between a study or test question and a known answer. That is, the student must comprehend the manner in which the answer was obtained. For that, the student seeks assist- ance from a tutor. Infrequently, the miscomprehension of instructional information (i.e., "I didn't understand the tape's explanation of pressure, could you explain it to me?") becomes the basis for a tutorial. The reason most tutorials center upon specific instruc- tional questions is probably some combination of the following factors: the students' inability to assess their own misunderstandings (they 33 believe they understand the material when in fact they don't), the failure of instruction to demonstrate how the content is to be applied to specific questions (the students may understand the presented material, but are unsure how to apply it), and the students' failure to have mastered previous material (the students may understand the current material being presented, but lack the necessary background knowledge to successfully answer the question). Whatever the reasons, most tutorial dialogues begin when stu- dents request help and state their own difficulty at a very general level (e.g., "I don't understand question eight, would you show me how to do it?" 
or "How do you do question eight?"). Since an exten- sive knowledge base provides the means to answer a specific instruc- tional question, an inability to do so suggests that one or more misconceptions or nonlearnings have occurred. By making these defi- ciencies explicit, to themselves, the tutors are able to make more effective teaching decisions, which help the student to answer parallel questions successfully and to learn the subject matter more meaning- fully. Therefore, the problematical tasks for tutors are to (l) deter- mine what the student knows or doesn't know in relation to the instructional question of focus and (2) to teach the student accord- ingly. Using interpersonal skills, a conception of the subject matter, and a knowledge of possible and probable learner deficiencies, the tutor elicits and attends to information from the student in order to render one or more diagnostic judgments of the learner's state of knowledge. Decisions about prescriptive treatment follow from the 34 diagnosis. A correct diagnosis is a necessary but not sufficient condition for helping the student to learn. Tutors must exhibit skilled performance when remediating and teaching on the basis of a diagnosis. This analysis seems to reflect some practitioners' view of tutoring (Gibbs & Durbridge, 1976; Holley, 1977). In most cases, diagnosis precedes treatment initially, but during the tutoring process, there is a reciprocal interaction between these two components. Diagnosis suggests the treatment to be con- sidered and, during treatment, new and perhaps more specific diagnoses appear. Therefore, these components may not be sequentially ordered in a simple manner. There may also be a level dimension (content and process) to each component as shown in Figure 2.1. The content level refers to the subject matter to be learned and the process level refers to how the material is learned. Before linking this analysis to several psychological theories, an important delimitation mentioned in the first section will be clari- fied here. From this point forward, the developing theoretical frame- work will center upon the diagnostic aspect of tutoring because the tutor's diagnosis of the student's knowledge state is a significant antecedent of the specific treatments that the tutor provides. Treat- ment is considered only as a vehicle for further understanding the diagnostic process. Furthermore, greater interest will be shown toward the content level because (1) the constructs that are necessary to diagnose process problems (e.g., problems in attending, storing, coding, organizing, searching for, and recalling information) require 35 TUTORIAL COMPONENT DIAGNOSIS TREATMENT That Part of the That Part of Sub- Subject Matter That Is ject Matter That Tutor Misunderstood or Un- Engages in Teaching known CONTENT (e.g., Learner doesn't (e.g., Tutor deals with remember the relation- the concepts of: con- ship between pressure stant, gas, ideal, pres- and volume for an ideal sure, volume, inverse, gas)6 molecules, etc.)a LEVEL That Learning That Learning Strategy That Is Miss- Strategy That the Tutor ing or Ineffective or Attempts to Teach to Incorrectly Applied the Student PROCESS (e.g., Learner attempted (e.g., Tutor explains to rotely memorize as how to derive the rela- the pressure increases, tionship between pres- the volume increases, sure and volume from without reference to the appropriate chemi- other important infor- cal models)a mation)a Figure 2.1. 
Before linking this analysis to several psychological theories, an important delimitation mentioned in the first section will be clarified here. From this point forward, the developing theoretical framework will center upon the diagnostic aspect of tutoring because the tutor's diagnosis of the student's knowledge state is a significant antecedent of the specific treatments that the tutor provides. Treatment is considered only as a vehicle for further understanding the diagnostic process. Furthermore, greater interest will be shown toward the content level because (1) the constructs that are necessary to diagnose process problems (e.g., problems in attending, storing, coding, organizing, searching for, and recalling information) require much more elaboration than presently exists (Gagné, 1976; Glaser, 1976); (2) it is highly unlikely that tutors possess any explicit awareness that these process problems exist, having had little or no training in learning theory (in at least one PSI course reported by Robin [1977], process diagnosis occurs in a highly structured and general way); and (3) pilot data suggested that very little process diagnosis occurs. Unless specified to the contrary, the term "diagnosis" will refer only to the content level.

Theoretical Approach to Clinical Reasoning

A Comparison of Two Research Traditions

From the foregoing description of a tutorial, the relevancy of research on problem-solving and judgment and decision-making is evident. A brief comparison of these areas as viewed from two distinct traditions is presented in order to explain the selection of the theoretical structure for the present study.

Two distinct traditions investigating human behavior associated with these higher mental functions can be readily identified. One school of thought, aligned with Newell, Shaw, and Simon's (1958) information-processing approach, seeks to explain the underlying mechanisms and processes that humans use in solving problems by analyzing the protocols of subjects' actions and verbal introspections as they perform various tasks. A theory of behavior is developed by operationally transforming these tracings of human intellectual processes into specific programs that are coded to run on digital computers. The program produced is viewed as an analogy of human behavior, duplicating the steps and solutions that humans produce when solving problems.

The mission of the second theoretical tradition has been to mathematically represent the relationship between collected information (weighted cues) and judgments (diagnoses), focusing on the practical question of judgmental accuracy. Regression (Dawes & Corrigan, 1974; Hammond, 1971; Hammond & Summers, 1972) and Bayesian decision analysis (Edwards, 1954, 1961; Raiffa, 1968) have been the dominant mathematical models employed for reproducing and enhancing human judgments. The salient characteristics of these two major paradigms are summarized in Table 2.1.

Table 2.1. Comparison of an Information Processing and a Mathematical Modeling Approach to Clinical Judgment.

Purpose. Information processing: understanding, explanation, description. Mathematical modeling: prediction, control, prescription.
Focus. Information processing: cognitive processes. Mathematical modeling: cues, weights, probability, judgments.
Aspects of cognition captured. Information processing: sequential selection, accumulation and integration of information. Mathematical modeling: integration of information, decision rules.
Methodology. Information processing: use of introspection to trace the problem-solving process; nonstatistical. Mathematical modeling: distrust of introspection; use of formal mathematical models.
Relationship to other psychological concepts. Information processing: related to research on memory, cognition, and language. Mathematical modeling: more self-contained.

Provision of these comparative distinctions is aimed solely toward improved clarity and understanding. In fact, combination of their respective advantages and issues might be capitalized upon, yielding synergistic results. Indications of convergence and possible points of connection have already been noted (Shulman & Elstein, 1975). Comprehensive reviews of this genre of research are available in Shulman and Elstein (1975), Slovic, Fischoff, and Lichtenstein (1977), and Slovic and Lichtenstein (1971).

Advantages of an Information-Processing Approach for the Present Study

Although both theoretical positions appear useful, it is from the information-processing approach that most direction for this study has been sought. There are several reasons for this choice. First, the following conditions must be present before regression
models are relevant: (1) there must exist several levels of a single diagnostic judgment, (2) the important predictor variables or cues must at least be grossly identified, and (3) the actual criterion values must be known. Whether these constructs are applicable to tutorials are issues that this research sought to address and therefore, regression models may hold greater significance at some later time as the scope of this inquiry deepens. Second, the mathematical representations of judgment, being abstract and far removed from a clinical orientation, focus solely on a single operation of judgment, data synthesis. Third, the kinds of processes represented by the Bayesian Decision Model require sufficient time, often unavailable in interactive teaching, to carry out the necessary analysis prior to choosing among alternative teaching acts (Shavelson, 1976b).

Shavelson (1976b) defines teaching "as a process by which teachers consciously make rational decisions with the intention of optimizing student outcomes" (pp. 411-412). He goes on to describe five features (borrowed from classical decision theory) which describe teachers' decision making. Although the mathematical aspects of Shavelson's (1976b) Decision Model are not descriptive of practice, the model does have heuristic value in that it complements the description of tutoring presented earlier. For Shavelson, the most important information used by the teacher to make decisions is the teacher's estimate of the student's "state of mind" (cognitive, emotional, or motivational; Shavelson, 1976a). These notions directly reflect the treatment and diagnostic processes of remedial tutoring. Specifically, Shavelson's (1976a) conceptions of teacher estimates and student cognitive states of mind have been labeled here as tutor interpretations or diagnoses and student states of knowledge, respectively. There is one further connection to Shavelson's research. The objectives and methods of the research presented here parallel those which Shavelson (1976a) suggests for conducting clinical classroom studies of teachers' estimates.

In summary, an information-processing approach is more appropriate for the initial phase of this research because the mathematical orientation has a limited focus and assumes the applicability of constructs that may not be valid for the present study.

To better understand the theoretical structure which undergirds the present research, a brief examination of the elements of an information-processing theory of human problem solving follows. The theory postulates two basic interrelated components of human problem solving which determine observed behavior, the problem solver and the task.
The problem solver is assumed to possess certain gross characteristics (serial, sequential information processing; 7 i 2 chunk capacity short-term memory with a transfer time on the order of mil- liseconds; and infinite capacity long-term memory with a storage time on the order of seconds) that are constant across tasks and problem solvers. The task structure can be defined as a task environment, an "objective“ description of the actual problem by an observer (experi- menter) and as a problem space, a subjective representation of the task by a particular problem solver for the purposes of discovering a solution. It is proposed that the structure of the task environ- ment determines the possible limits of the problem space. The problem space, in turn, determines the possible programs of primitive infor- mation processes which are used in problem solving by operating on information stored in memory. A more detailed account of these con- structs and their relationships can be found in a summary paper by Simon and Newell (1971). In summary, the prime mental activity that human beings exercise is considered to be the processing of informa- tion sensed in the environment or stored in memory and which results in human behavior. 41 A specific application of this paradigm, a model of medical diagnostic reasoning, and its relationship to tutorial instruction, is elucidated next. A Model of Medical Diagnosis and Its Applicability to Tutorial Reasoning Until recently, very few psychological theories have been sug- gested which deepen our understanding of medical diagnostic reasoning. Insight gleaned from an analysis of one such theory will be used to develop a rudimentary theory of a tutorial. It is toward this ideal that the forthcoming discussion is directed. Two important segments form the body of this argument. They include: (1) a concise over- view of a theory of medical inquiry, and (2) an analysis of the simi- larities and restrictions of this theory to tutoring. An extrapola- tion of this model to tutorial instruction in an attempt to construct a preliminary theory, is presented in the following major section. Using a variety of simulation techniques, process tracing, paper medical problems, questionnaires and test batteries, Elstein, Shulman, and Sprafka (1976) have investigated the medical inquiry process and developed a theory of diagnostic reasoning. Their study of the diagnostic process leans heavily toward the information- processing paradigm, both in terms of its methodology and in its con- ception of the physician as a problem solver. The major tenets of Elstein et al.'s (1976) model presented below reflect this relation- ship. 42 1. Diagnostic problems are believed to be solved by a hypothetica- deductive method that is characterized by four major processes. a. Cue Acquisition: searching, gathering and attending to units of information that are either volunteered or elicited. b. Hypothesis Generation: generating diagnostic hypotheses early on and during the case work-up which direct cue acquisition and interpretation. c. Cue Interpretation: evaluating the fit of a cue to a generated hypothesis. d. Hypothesis Evaluation: selecting one hypothesis as a diag- nosis based on a judgment rule for combining evidence. 2. At any one time, a set of working hypotheses defines the physician's internal representation of the problem (the problem space). In addition to theory generation, the research produced sev- eral important findings. 
The number of hypotheses being considered at any one time was estimated to rarely exceed five, which agrees with other reported estimates of the capacity of short-term memory. Intel- lectual strategies of working from general to specific hypotheses or vice versa seemed to depend upon an interaction between the physician's knowledge and the content of the problem. Surprisingly, there was found neither a general trait of clinical competence which could be used to separate good and poor clinical problem solvers, nor consis- tent intra-individual problem-solving approaches utilized across cases. This discovery supports the premise advanced by Simon and Newell (1971) that the task environment acts as a determinant of possible cognitive strategies. Finally, the diagnostic accuracy of the physicians was determined to be directly related to the thorough- ness of cue acquisition and the accuracy of cue interpretation. 43 There were several admitted weaknesses in the study which should serve as warning signals to future inquiries. The criteria that the researchers employed in determining the accuracy of cue interpretation were established by one and sometimes two experts for each problem. Weighing the interpretation of cues by several raters might be a more reliable strategy. If clinical competence is not a general trait, but is problem specific as was noted above, then a more extensive sampling of clinical problems should be utilized and the issue of problem taxonomy addressed. In failing to discriminate between the cognitive processing of peer-nominated high vote and low vote physicians, several reasons were forwarded. There were not enough problem cases, subjects, nor perhaps subject variability, but the authors felt that the crucial factor was "the phenomenon of case specificity of performance" (Elstein et al., 1976, p. 163). That is, a global definition of an "expert“ physician is inadequate and must be constrained by designating the domain of problems over which the expert has competence. It was planned that this admonition take root in the methodology of the current research. To provide a basis for constructing a diagnostic model of tutoring, this medical model's congruence with a tutorial situation is explored. The generalized process of a one-to-one tutorial inter- action presented at the onset of this chapter demonstrates the high degree of parallelism between clinical medicine and clinical teaching. However, several striking differences should be noted. Standardiza- tion of medical training and curricula leads to a common conception of human functioning, disease, and treatments among practicing 44 physicians. Although many graduate students are taught a common basic structure of their discipline, their use of that knowledge in approaching a given learning task might be quite different and could lead to differential recognition of learning misconceptions. The clinical power of the physician is built upon knowledge of specific procedures which aid in diagnosis and treatment and a highly techni- cal vocabulary with hierarchical categories of domain-specific diseases and their associated cues. These elements are almost entirely lacking in education. The tutor's knowledge of possible knowledge states in a learner is based upon his (her) conception of the task the learner was asked to perform and his personal experience with similar learners and tasks. 
Low inference evidence (e.g., medical lab analysis) is unavailable to the tutor, and written, verbal, and nonverbal communication are the sole arenas from which cues might be obtained to aid in the judgment of the type and severity of a learning misconception. There is also a fundamental difference in the ultimate goal of each clinician. Physicians attempt to restore the patient to good health; it is not important for the patients to have a precise characterization and understanding of their disease or healthy state. This insignificance in medicine is very important in tutoring. The student must understand his (her) own deficiency and ultimately possess a meaningful conception of a discipline. The direct responsibility of the tutor is to help the student attain these understandings.

In spite of these differences, a similar inquiry model has served as part of a theoretical framework for examining the teaching process (Snow, 1968). As Snow (1968) explains the cognitive events that are involved in heuristic teaching behavior (a teaching style which emphasizes an inquiring role for the student), the essence of the medical inquiry model is evident:

One can assume, for example, that at some given instant in an ongoing group discussion a teacher attends to significant cues regarding the course of discussion, makes inferences about the state of confusion in some problem faced by students, decides on a kind of question or comment designed to open up a new aspect of the problem, and skillfully inserts the question or comment into the stream of discussion (p. 78).

Having explored several theoretical positions which provide some direction for understanding the tutorial process, a tutorial diagnostic model, based upon a synthesis of these theories, is developed in the next section.

A Diagnostic Model of Tutoring

A Summary of Five Theoretical Models

Before describing the diagnostic model, the different theories that were reviewed in the previous discussions will be briefly summarized and linked together. Table 2.2 contains five theoretical positions that were drawn upon to develop the tutorial model.

Table 2.2. A Summary of Five Theoretical Models from Which a Tutorial Diagnostic Model Will Be Developed.

The two basic components of a tutorial, diagnosis and treatment, are drawn directly from Ellson's (1976) theory of tutoring. An important tutorial function which is implicit in other models, but which Ellson explores directly, is subject matter analysis and organization. For Ellson (1976), the specific diagnosis and treatment are derived from a hierarchy of prerequisites developed from the analysis of a terminal task. The attention given to the organization of subject matter and the influence of Gagné's (1970) notions of learning tasks sets Ellson's (1976) model apart from the rest. The research on programmed tutoring provides some empirical support for his ideas.

Shavelson's (1976a, 1976b) Decision Theory, although formal in design, maintains several qualitative concepts (teachers' estimates of students' states of mind and alternative teaching acts) which link it closely to Ellson's (1976) notions of diagnosis and treatment. Shavelson (1976a, 1976b), in dealing with the cognitive processes of teachers, shares a common orientation with the three remaining positions. This relatively new area of research is just beginning to be investigated (Clark & Yinger, 1978). Snow's (1968) model connects Shavelson's (1976b) theory with a more explicit strategy for collecting and interpreting information to be used in decision making. A similar hypothetico-deductive strategy has been expanded and shown to be the basis of physicians' diagnostic judgments (Elstein et al., 1976).

The theory of information processing is more encompassing and relates to the concepts of the other models in only a general way. As discussed in a previous section, a human being is believed to possess particular mental characteristics (serial information processing, a short-term memory, etc.). These characteristics provide a framework within which most of the concepts of other models might be subsumed. Whether the construct is subject matter organization or estimates and hypotheses of the student's state of knowledge, it can be represented in a tutor's memory system, be processed internally (e.g., evaluated, acted upon), and related to other information. Thus information processing can encompass, within its perspective, all the other models.

A Tutorial Diagnostic Model

Being most comprehensive, the information-processing theory is used as a framework for organizing the other theoretical constructs into a tutorial diagnostic model. Such a model is presented in Figure 2.2. The model contains three components: tutor behavior, the task environment, and tutor information processing.

TUTOR INFORMATION PROCESSING
  Intellectual Strategy: Cue Acquisition, Hypothesis Generation, Cue Interpretation, Hypothesis Evaluation
  Problem Space: Knowledge of Subject Matter, Knowledge of Course Treatment of Subject Matter, Cue x Hypothesis Matrix

TASK ENVIRONMENT
  Learner Provided Information, Learner's Knowledge State, Environmental Setting, Instructional Question of Focus, Diagnosis and Treatment

TUTOR BEHAVIOR
  Questioning, Explaining, Listening, Writing

Figure 2.2. An information-processing, diagnostic model of tutoring.

Tutor behavior. This component consists of the observable teaching acts the tutor performs during any given tutorial. They include questioning, explaining, listening, writing, and others. The sequence of acts and the choice of a particular behavior results from an interaction between the task environment and the mental operations of the tutor.

Task environment. The tutor is asked to perform a task within an environmental situation that suggests certain goals, constraints, and expectations. A description of the actual tutorial problem was presented earlier: The tutor must determine what the student knows or doesn't know with respect to a given instructional question and then teach the student accordingly.
This goal statement is represented in the model as diagnosis and treatment. Another two important components of the task environment are the environmental conditions (tutorial time limitations, noise levels, etc.) and the learners. The learners are central features of the task environment. They possess the states of knowledge that the tutor attempts to characterize, they provide behavioral information which might serve as cues, and they indirectly determine the course knowledge to be dealt with during the tutorial by asking about a particular instructional study or exam question.

Tutor information processing. The problem, outlined above, is very difficult to manage because the goal state is so generally defined. One way to constrain the problem definition is for the tutor to make estimates or hypotheses of the student's knowledge state and to acquire and interpret information (cues) in relation to these estimates. These hypotheses could be evaluated using information collected and processed and one or more hypotheses could be selected in order to make decisions concerning teaching.

The tutor's representation of the task in memory (the problem space) determines which hypotheses are generated and how incoming information is interpreted. The problem space of a particular diagnostic case can be conceptualized as a set of learner emitted cues (either volunteered or elicited) that are related to a specific learner misconception. Diagrammatically, the problem space can be defined by a cue x hypothesis matrix (Elstein et al., 1976), as shown in Figure 2.3. Cues can be defined verbally ("I can't do problem two") and nonverbally (as a tutor explains something, the learner maintains a quizzical look). The hypotheses can be generated from what the tutor believes are the possible and probable (developed on the basis of tutorial experience) learner knowledge states for a particular instructional task.

CUE x HYPOTHESIS MATRIXa

        Hypothesis1   Hypothesis2   Hypothesis3
Cue1        Xb                          X
Cue2                      X
Cue3        X             X             X
Cue4        X             X

Figure 2.3. A cue x hypothesis matrix representing the problem space of the tutor.

aThis has been developed for illustration purposes only. Reality may and probably does reflect a greater complexity. For example, cues may be partially present and unequally weighted.

bA certain set of cues (X represents the presence of a cue) suggests a particular diagnosis to be hypothesized (i.e., the set {Cue1, Cue3, Cue4} defines Hypothesis1).
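A minimal sketch, in Python, of how the illustrative matrix in Figure 2.3 and a simple judgment rule for combining evidence might be expressed; the cue and hypothesis labels follow the figure, and the tally-style scoring rule is an assumption made here for illustration, not a claim about how tutors actually weigh cues.

# Each cue is associated with the hypotheses it is consistent with,
# following the illustrative cue x hypothesis matrix in Figure 2.3.
MATRIX = {
    "Cue1": {"Hypothesis1", "Hypothesis3"},
    "Cue2": {"Hypothesis2"},
    "Cue3": {"Hypothesis1", "Hypothesis2", "Hypothesis3"},
    "Cue4": {"Hypothesis1", "Hypothesis2"},
}

def evaluate_hypotheses(observed_cues):
    """Tally, for each hypothesis, how many observed cues are consistent with it."""
    scores = {}
    for cue in observed_cues:
        for hypothesis in MATRIX.get(cue, set()):
            scores[hypothesis] = scores.get(hypothesis, 0) + 1
    return scores

# The cue set {Cue1, Cue3, Cue4} points most strongly to Hypothesis1.
print(evaluate_hypotheses({"Cue1", "Cue3", "Cue4"}))
# e.g. {'Hypothesis1': 3, 'Hypothesis3': 2, 'Hypothesis2': 2} (key order may vary)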
Yet this matrix fails to address the issue of how hypotheses are derived and defined. It is the tutor's representation of the instructional presentation of the subject matter that gives rise to specific hypotheses. The use of a metaphor may help to clarify this point. Let us assume that the knowledge of a discipline that is stored in a tutor's memory is represented by a world road map that depicts only the principal routes (relationships) and cities (concepts). That part of a discipline that a specific course deals with is represented by a single country on the larger map. When a student is unable to complete an instructional task, the deficiency lies somewhere in the knowledge base for that task. The knowledge base of a task is analogous to a regional map which covers less area but contains more detail. Rather than search a whole country map to uncover that specific city or road that the student failed to understand, the tutor only has to search a regional map. As smaller and smaller maps are scanned, the problem becomes more and more defined. The hypotheses of the student's state of knowledge can be defined by several levels, metaphorically, at the regional, local, city, or street level.

To reiterate the point being made, the cues and hypothesized diagnoses can be characterized in terms of the subject matter to be learned. What the learner intentionally conveys about his/her own knowledge state and what that knowledge state actually is can be defined in terms of the knowledge state that the course was designed to develop (see Figure 2.4).

Intended Knowledge State
  That Part of the Knowledge State That the Student Can Verbalize About
  What Remains Unknown to the Student (Correct Diagnosis)

Figure 2.4. A representation showing the relationship between the intended knowledge state of a learner and what a specific learner can verbalize about and what remains unknown.

That is, what the student knows is some subset of that knowledge the course was intended to develop and when logically added to that which the student doesn't know, a definition of the intended knowledge state should result. The tendency for the student to verbalize very little about what remains unknown to him/her makes the process of diagnosis nontrivial and therefore, the tutor must have some mechanism or strategy for developing an accurate estimate of the student's knowledge state.

The final component of the problem space depicted in the diagnostic model is the tutor's knowledge of each specific learner. This knowledge might include how the learner studies, the general intellectual ability of the learner, and the learner's motivations.

Summary

A synthesis of several theoretical conceptions believed relevant to a model of tutoring was presented. It was argued that an information-processing theory of human problem solving was an effective vantage point from which to build a diagnostic tutorial model. In describing the model, the tutor was viewed as an information processor who interacts with and acquires information about the learner and who represents the tutorial task as a subjective problem space. This problem space was hypothesized to consist of information stored in memory (information about the subject matter to be learned, about the possible and probable learner states of knowledge, about the specific student, and about cues and hypotheses from past interactions) which is processed and combined with environmentally acquired information. The tutor's purpose is to discern the learner's knowledge state and ultimately, to decide and skillfully perform the next teaching act. The model is useful in that it suggests a set of constructs and relationships that directed the initial observations of this research.

Methodological Review

Representation of the Problem Space

A significant portion of the tutor's problem space is the tutor's conceptualization of that part of the discipline that the student is required to learn. The characterization of the tutor's
Correspondingly, there are two principal ways that knowledge is represented: declaratively, as sets of propositions which are rep- resented by a network Of concepts linked by relationships, and pro- cedurally, as directed graphs and flow charts (Greeno, 1976; Lewis, 1976). A current controversy among cognitive psychologists concerns the issue of whether knowledge is stored propositionally or proce- durally or both ways. This psychological issue will not be dealt with here. Follow- ing Greeno's (1976) example of practical applicability, that repre- sentation which seemed most appropriate for and easy to apply to a given knowledge base was adopted. This methodology is elaborated in Chapter III. Process Tracing The conception of tutor as information-processor focuses attention on the cognitive processes that produce tutor performance. It also implies the adoption of process tracing methods that are 55 identified with information-processing theory, specifically, intro- spection and stimulated recall. Exactly what these methodologies are, their significant issues and assumptions, and the practical and theoretical implications of their use are discussed next. Methodology definition. Process tracing involves the collec- tion, coding, and interpretation Of the verbalized mental processes of human beings as they perform a task. Behavioral observations Of the task performance are also recorded and these data serve as anchor- ing points for judging the accuracy and credibility of the verbalized mental processes. That is, the actual recorded performance is the primary data from which inferences are made. Introspection is an important general technique for Obtaining the basic data of process tracing inquiries. It generally connotes a subject's verbal report Of covert thoughts and feelings. One might think of introspection as self-Observation, self-perception, and thinking aloud (Radford, 1974). Stimulated recall, a subset of this general methodology, uses audio or video recordings of the subjects' performance to help jog their memory of covert mental processing which occurred during some original situation. An interviewer or inquirer provides a trusting and nonthreatening psychological environment and facilitates the subject's recall and verbalization of higher order processes by using carefully constructed probes and by continually asking the subject to focus upon what had previously occurred. Think- ing aloud is different from stimulated recall in that the introspec- tion occurs during task performance rather than sometime later. In this study, students were asked to think aloud as they solved practice 56 problems and stimulated recall was employed with the tutors because it permits the discovery of aspects of their interactive thinking without being intrusive. Thinking aloud techniques have been used in psychological studies of the mental processes involved with problem-solving, judg- ment, and decision making (Shulman & Elstein, 1975). Research on teacher thinking, studies Of medical diagnostic reasoning, and coun- selor, physician, and teacher training have been the contexts in which stimulated recall has been used (Marland, 1977). Video-taped stimulated recall has been exclusively studied by Norman Kagan and his colleagues (Kagan, 1975; Kagan, Schauble, Resnikoff, Danish, & Krathwohl, 1969). They have called the technique Interpersonal Process Recall (IPR), reflecting the counselor training context in which it was developed. The validity of introspection. 
Since most of the data for the present research relies on introspective techniques, it seems impor- tant to examine the issues surrounding this methodology. Marland (1977) has provided a recent summary of the criticisms of introspec- tion. If philOSOphical arguments are ignored, then a major problem with verbal reports Of cognitive processes is their validity and reliability. The questions of concern are: Do subjects accurately peppy; covert processes? DO subjects remember their actual thoughts? DO subjects pepp§t_all their actual thoughts? Do subjects embellish their reports? (Connors, 1978). Researchers have dealt with the issue by taking recommended precautions against inaccurate reports and by making one of two assumptions. The possible assumptions are: 57 l. The validity and reliability of introspective reports is assumed but not proven (Connors, 1978; Marland, 1977). 2. The verbal reports are a combination of true recall of actual mental processing and post-hoc reconstructive thoughts which occurred during the recall sessions (Peterson & Clark, 1978). Evidence relating to the accuracy of self-reports of higher mental processes has been comprehensively reviewed by Nisbett and Wilson (1977). Their summary of the field is a persuasive argument, supporting the conclusion that subjective reports are inaccurate and that there is little or no introspective access to higher cognitive processes. Practical and theoretical implications. In light Of the power- ful conclusions of Nisbett and Wilson (1977), two questions arise: What has been done to enhance data validity and how have the data been interpreted? Several researchers have made recommendations that might enhance the accuracy of recalled verbal reports (Bloom, 1954; Conners, 1978; Elstein, Shulman, & Sprafka, 1976; Gaier, 1954; Krutetskii, 1977; Marland, 1977; Nisbett & Wilson, 1977). These recommendations include: 1. Brief the subjects peeeew -wwe e we eewe> e we eewueeww -wemem esp cw mewemee .eweewee> wee we eewe> e on eewweee ems: .eewsz Seaweemwe ge ewem .ueesewe use on mewweee sowez eweewee> meweeeemoeeee we“ we eewe> ecu we :ewueewwweeem we“ cw mewemee .peeeewe em eu eewweee ems: .eewcz Seaweemwe Le owem .eweewee> ee>wm e ee Homemec new: eegmwemcwu -mwe meewpereepeeceee weese_e Leweowueee mewucmmeeeew macaw .eewu mmeeee peeeewe eee seew ee geesepe ea geesewe Eeew Leeuwe >Le> was sews: maceEewe we Homeme :e we mEez .eeweepm mcwee wee cows: mewpwuee mew ewem Feeewueweeeeu eceeeoeee peeEecemeez :ewue>cemeo meewe> esez awemwem> peeeewm we>ee Leweewueee we>ee ewEeumxm cowewewwae peeeeeu we>ee ewuzwee< .ANKmPV geese we eacwwao mm ecaecoe we mwe>ee emcee we meweeexm xeemwsego ._.m eweew 67 Levels of learning tasks. In an analogous way, tasks can be specified at these levels. Table 3.2 provides two examples of this. Thus, we have come full circle to answer how a task might be defined; it might be defined at the analytic, the systemic, and the particular levels. The subject matter knowledge was represented for tasks defined at the systemic level. There were several reasons for this. First, there was no explicit course instruction at the analytic level. Second, each systemic task required a distinct knowledge base which was not applicable to other tasks. Also, characterizing the knowledge of systemic level tasks reflected the goals of the course, to have students develop generalized capabilities to be used across parallel particular level tasks. 
Finally, it would have been psychologically, educationally, and practically illogical to have developed knowledge representations for every specific study or exam question.

There are two final notes that require attention. In addition to clarifying the task levels, the analytic level was introduced so that selected analytic concepts could be meaningfully used hereafter. Also, henceforth the term "task" is used to refer to a systemic level task and the term "question" will be used to refer to a particular level task.

The methods to be described below deal with: (1) how the particular study and exam questions were categorized into systemic level tasks, (2) which tasks were selected for this study and why, and (3) how the required knowledge base for each selected task was represented.

Table 3.2. Chemistry Examples of Three Levels of Tasks.

Task categorization. All study and exam questions that included content from the selected units and that were found in the Study Guide, in a sample Exam 6, and in all forms of Exam 6 given in the Fall of 1976 and the Winter of 1977 were analyzed in terms of (1) the input given in the question (the variables, their values and units, explicit or implicit assumptions, the equations required for solution, and other relevant information) and (2) the required output (e.g., solve for a specific variable or choose a correct response). Each question was then sorted into two major divisions, depending upon whether the question was mathematical (Division I) or verbal (Division II). Within each division, all questions that were similar in their input and output (at the systemic level) were grouped together. A rough taxonomy was attempted using categories suggested in the Study Guide. Finer distinctions were made using the criteria shown in Table 3.3. These categories were mutually exclusive except for a few ambiguous cases which were placed in that category suggested by the instructional materials. A listing of all tasks and their categories is included in Appendix A.

Task selection. In order to develop criteria for selecting target tasks to focus upon, the experimenter became a tutor in the freshman chemistry course for two weeks during Winter Term of 1977. The tutorial interactions dealing with the two target units were tape recorded to determine (1) the number of times each task served as the topic of the interaction, and (2) the frequency and type of difficulty that students had with these tasks. In addition, the experimenter gained personal experience with the course structure, content, and students.
Table 3.3. Criteria Used to Classify Verbal and Mathematical Questions.

Based upon the following criteria, eight tasks were initially chosen for further analysis.

1. That in the past, tasks have demonstrated students' substantive misconceptions that are not immediately evident.

2. That in the past, the tasks have had a relatively high frequency of student misconceptions associated with them.

3. That the tasks require auxiliary representations since this may make the task more difficult and create more student misconceptions.

4. That the tasks have a relatively high frequency of occurrence on exams so that there will be a greater chance of replication of student misconceptions.

The following is a listing of the eight selected tasks (as represented in Appendix A): Division I: 1.40, 1.41, 2.1, 2.2, 4.2; Division II: 1.1, 1.2, 1.3. A description of each can be found in Table 3.4.

Table 3.4. Descriptions of the Eight Selected Tasks (task category, relevant equations, given input, required output, and an example of each).

Knowledge Representation

As was suggested earlier, there are two types of knowledge that enable a student to complete a chemistry task: (1) chemistry concepts and their relations (chemistry rules or propositions) and (2) the psychological procedures that manipulate the concepts and principles to produce the required output.
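To make the distinction concrete, the sketch below shows, under simplifying assumptions, a few propositional rules and a small flow-chart-style procedure for an initial-final ideal gas question; the rules, steps, and numbers are illustrative only and are not the knowledge bases reproduced in Appendix B.

# Propositional (declarative) representation: pairs of concepts linked by a relation.
IDEAL_GAS_RULES = [
    ("pressure", "is inversely proportional to", "volume (fixed amount and temperature)"),
    ("volume", "is directly proportional to", "absolute temperature (fixed amount and pressure)"),
    ("PV = nRT", "relates", "pressure, volume, amount of gas, and absolute temperature"),
]

# Procedural representation: an ordered, flow-chart-style routine for an
# initial-final question, i.e., solve P1*V1/T1 = P2*V2/T2 for the one unknown.
def initial_final(p1, v1, t1, p2=None, v2=None, t2=None):
    """Return the single missing final-state value; temperatures must be in kelvin."""
    k = p1 * v1 / t1          # the quantity that stays constant for a fixed amount of gas
    if p2 is None:
        return k * t2 / v2    # unknown final pressure
    if v2 is None:
        return k * t2 / p2    # unknown final volume
    return p2 * v2 / k        # unknown final temperature

# Example: a gas at 1.0 atm, 2.0 L, and 300 K is heated to 600 K at constant pressure.
print(initial_final(1.0, 2.0, 300.0, p2=1.0, t2=600.0))  # -> 4.0 (liters)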
[Table 3.4. Descriptions of the eight selected tasks (Division I: Mathematical Tasks; Division II: Verbal Tasks), giving for each task the relevant conditions, the given input, the required output, and an example. (The table itself is not legible in this reproduction.)]

First, the ordered, algorithmic structure of the mathematical tasks and the course's procedurally based instructional treatment of these tasks suggested that due recognition be given to the procedural knowledge component. Thus, a flow chart containing those concepts and principles, and the operations performed on them, required for the task solution was the representation chosen to depict the knowledge structure of the mathematical tasks (Greeno, 1976). The procedural component was less important for the verbal tasks because the strategies for performing the tasks were not specifically taught in the course and were relatively trivial for most students to learn. The propositional knowledge, being most important and largely invariant across different strategies for the verbal tasks, was represented (1) by rules (two variables linked by a relation) which were grouped by an organizing concept, and (2) by the observation-measurement procedures that were used to determine the values of the variables within the rules. Specific examples of both types of representations can be found in Appendix B.

The knowledge representing each task was gleaned from the instructional audiotapes, the Study Guide, and the textbooks used in the course during the 1977-78 school year (Brown & LeMay, 1977; Nebergall, Schmidt, & Holtzclaw, 1976) in the following way:

1. The tapes and block notes of the selected units were reviewed and those sentences that were thought to be relevant to the eight tasks were recorded.
2. These sentences were condensed into propositions consisting of key concepts and their relations.
3. These propositions were then either incorporated into flow charts for the mathematical tasks or logically arranged together for the verbal tasks.
4. Those procedures that were the object of explicit instruction were transformed into flow diagrams.
5. The appropriate assigned readings in the textbook were reviewed in the same manner as above, except that only the relevant knowledge that was not found in the tapes or block notes was added to that already recorded.

There were several minor revisions made in this analytic process. The verbal tasks required the analysis of several additional course units in order to complete the knowledge base sufficient for successful task completion. In addition, those portions of the flow diagrams that were, in the researcher's judgment, necessary for task solution, but that were omitted from the course instruction, were added. On the basis of the experimenter's past observations and knowledge of chemistry, several alternative, logical strategies that had a reasonable probability of being learned and used by students, but were omitted from instruction, were also added. Highly similar procedures were combined together to form more generalized routines, with adaptations noted to account for the differences. Since some of the developed knowledge representations were useful for more than one task, it seemed inappropriate to treat all eight tasks as distinct.
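To make the two representation forms concrete, the sketch below shows one way they could be written down in a machine-readable form. It is an illustration only, not part of the original study: the rule wording, variable names, and field names are invented, and the actual representations appear as drawn flow charts and rule lists in Appendix B.

```python
# Illustrative only: hypothetical encodings of the two knowledge forms
# described above. The specific rules and steps are invented examples,
# not the study's Appendix B content.

# (1) Propositional rules for a verbal task: pairs of variables linked by
#     a relation, grouped under an organizing concept, together with the
#     observation-measurement procedures for determining variable values.
propositional_representation = {
    "organizing_concept": "ideal gas behavior",
    "rules": [
        ("pressure", "varies inversely with", "volume (constant T and n)"),
        ("volume", "varies directly with", "absolute temperature (constant P and n)"),
    ],
    "measurement_procedures": {
        "pressure": "read from a manometer or barometer",
        "temperature": "read from a thermometer, converted to kelvins",
    },
}

# (2) One step of a procedural flow chart for a mathematical task: a single
#     question (or action) with branches to the steps that follow.
procedural_step = {
    "step": 1,
    "operation": "Is the final volume larger than the initial volume?",
    "if_yes": 2,   # continue to the step that writes a pressure-lowering factor
    "if_no": 3,
}
```

The contrast is simply that the verbal tasks are carried mainly by the rules themselves, while the mathematical tasks also depend on an ordered, branching procedure.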
Therefore, these tasks were classified into four task groups: Task Group 1: Initial-Final Questions, Type I 2.1, 2.2 Task Group 2: Density Questions, Type I 1.40, 1.41 79 Task Group 3: Partial Pressure Questions, Type I 4.2 Task Group 4: Ideal Concept Questions, Type II 1.1, 1.2, 1.3 Each knowledge representation corresponded to one of the four task groups and contained the knowledge necessary to complete all tasks in the corresponding task group. For easy referencing during protocol analysis, a numerical and alphabetical nomenclature was developed for specifying each knowl- edge base and its component parts. This nomenclature is presented in Appendix B, along with all the knowledge base representations that were developed. Three heuristics determined the depth to which the knowledge representations were developed. 1. The detail of the analysis should be sufficient so that it represents those propositions and procedures that are nontrivial for the population required to perform the task. 2. Each step in the flow chart should contain only one ques— tion or action verb (operation). 3. Those parts of the knowledge base that typically produce few difficulties among students can be represented at more general levels (e.g., "Solve Equation for Unknown" was not broken out in a separate algorithm because most students don't have difficulty with this step). 80 Phase II Subjects The tutors of this study were chosen from a population Of teaching assistants who had been tutors in the freshman chemistry course. Since the goal of this research was to develop models Of competent tutorial performance, only those assistants who were iden- tified as "effective" by the course coordinators were considered as possible participants. All the considered tutors were informally observed by the researcher to Obtain some subjective notion Of their teaching style and to determine which tutors were consulted by their tutor colleagues for assistance in solving tutorial questions. Other criteria that were used in choosing participants included the number of terms Of tutoring experience, the number Of chemistry courses com- pleted, and the tutor's willingness to participate. More objective evidence of tutor effectiveness (student achievement and opinion data) was collected during the study and is discussed shortly. Two tutors of different sexes, meeting all the set criteria, were picked as research participants. Both tutors were in their last year of graduate study, majoring in analytical chemistry. Each had taught several basic and advanced chemistry laboratory courses, led discussion groups for unprepared chemistry students in a specially designed tutorial assistance program, tutored several terms in the freshman chemistry course, and participated in two one-week teaching seminars for beginning graduate students. One of the tutors was a certified high school chemistry teacher and had taken several graduate courses in education. Both stated that they enjoyed teaching. 81 Henceforth, to facilitate tutor comparison, the tutors will be identified as Tutor l (the male) and Tutor 2 (the female). Data Collection After subject selection, an informal meeting was held with both participating tutors. 
The purpose of this meeting was to intro- duce the overall goals and design of the research, to explain the nonevaluative nature of the study and data confidentiality, to hand out procedural instructions and a reference list of those question numbers and their locations (either in the Study Guide or in any version Exam 6 given during Fall, 1977 and Winter, 1978) that exemplify the selected chemistry tasks (these specific questions will be referred to as target questions), to review specific procedures for typical interactions and extenuating conditions, to test out recording equip- ment, and to have the tutors sign consent forms permitting their interactions to be taped (Appendix C contains the tutor handouts). The session also gave the tutors an Opportunity to propose uncon- sidered circumstances which would require special accommodations, to express their feelings about the research, and to ask questions. This study had four data collection points: The Pretutorial Preparation Interview, the Tutorial Interaction, the Student Inter- view, and the Tutor Stimulated Recall. Pretutorial preparation interview. A meeting between each tutor and the experimenter was scheduled before any tutorials were subjected to data collection. During this meeting, the tutor was asked a series of questions about the nature of the tutor's (l) formal 82 and informal educational experience, (2) preparation for tutoring in the chemistry course, and (3) preparation for tutoring the target questions (see Appendix D for a copy of the protocol used for this interview). The purpose of this session was to Obtain a more complete characterization of the participants and to determine whether the tutors preconceived any possible student difficulties by extrapolat- ing their own discovered misconceptions or by uncovering bits of unclear or incomplete instruction as they prepared for tutoring. Tutorial interactions. Each tutor assisted students in the chemistry help room at different intervals for two to three hours per session during the last five days when the selected chemistry tasks were being tested. Except for one case, at least three tutorial interactions per task group per tutor were tape-recorded; a total of fifteen to seventeen interactions per tutor were captured on tape. Both tutors, having participated in a pilot project that occurred the previous term, had already experienced several taped tutorials deal- ing with the target questions and a few stimulated recall sessions, and were therefore familiar with the procedures. Before any taping began, the tutor checked the student's chemistry question with the reference list of target questions to discriminate those interactions which should be taped from those that shouldn't. If the question checked with the list, the tutor explained the nature and purpose of the research to the student, obtained the student's signature on a consent form, and signaled the experimenter, who was in the help room getting ready for the next stage (the tutor's procedures can be found in Appendix C). 83 To maintain the natural setting, the tutor was equipped with a small condensor microphone attached to an Edcor Personal Mini- Transmitter. The interaction was remotely recorded by being trans- mitted to an Edcor Personal Mini-Receiver patched to a portable Sony tape recorder. 
These instruments were located on the experimenter so that the conversation could be monitored at a distance, the equip- ment could be concealed from the student, and the tutor was freed from carrying or Operating the recorder. The experimenter also recorded observations of nonverbal expressions and written communi- cations. The tape position number on the recorder was noted next to each Observation so that a chronicle of the nonverbal communication could be reconstructed (an example of recorded observations can be found in Appendix F). To assist the experimenter in recording black- board information, the tutors were asked to write systematically over the blackboard space and to move over when the interaction had ended. Student interview. The student was asked to participate in a five-minute tape-recorded interview in another part of the help room as the student walked away from the completed interaction. After a brief introduction to the experimenter and the research project, the student was asked to respond to questions concerning his or her thoughts or feelings about the previous tutorial and the particular point of difficulty that required assistance. Finally, given the opportunity to solve a practice problem, the student was requested to think out loud to record the solution process on audiotape (the stu- dent interview schedule and a listing of the practice problems can be found in Appendix 0). Scrap paper was supplied to record the 84 student's written notes. The interview session was designed to determine the student's description of those points during the inter- action that helped or hindered learning, the student's judgment of the effectiveness Of the tutor, and the student's pretutorial and posttutorial knowledge state with respect to the specific target ques- tion dealt with during the interaction. Tutor stimulated recall. Once the tutor had completed a session of tutoring, the experimenter met separately with each one to begin the stimulated recall session. The Objective Of this session was to stimulate the tutor's recall processes and allow the verbali- zation of thoughts or feelings as they had occurred during the origi- nal interaction by replaying each taped tutorial (Kagan, Schauble, Resnikoff, Danish, & Krathwohl, 1969). The materials required for this procedure included: (1) the instructions to the tutor (see Appendix E), (2) a covered copy of the tutorial blackboard informa- tion arranged down a vertical time axis on the page and situated in front Of the tutor, (3) a tape recorder to run continuously and tape the session, and (4) another tape recorder set to play back the original interaction and connected to two remote control on-off switches. Before playback began, the tutor read the instructions in order to understand the purpose Of the session and the role they were to play. Then, the experimenter prompted the tutor's attention and memory by having the tutor read the specific target question of the interaction and by supplying information about the student (some very general physical characteristics) and the interaction (e.g., others who might have been present, time of day). As the recording was 85 played, the tutor was able to stop and start it at will, allowing time for reflection and verbalization on the previous content. The experi- menter was also able to stop the tape to probe the tutor as to his or her original thoughts or feelings (see Appendix E for a listing of the probes). 
At each point in the interaction when writing occurred, the experimenter uncovered the appropriate written communication. Helping the tutor to relive the experience by focusing on and pointing to the playback unit, avoiding significant eye contact, and reminding the tutor to concentrate on the content of the past tutorial was an additional responsibility of the experimenter.

In order to increase the probability of accurate verbal reports, several recommendations mentioned in the research literature were heeded. Each tutor and the researcher participated in eight pilot stimulated recall sessions the previous term; each interview lasted approximately fourteen minutes. The experimenter reviewed two of these sessions with two experienced stimulated recall interviewers to obtain feedback on the interview strategy. An attempt was made to begin probing at a stopped point with more open-ended questions so that the instructor's comments were not specifically directed. If the tutor failed to recall after a general probe, the question was remolded more specifically (the probing strategy and the specific questions used can be found in Appendix E). Whenever the tutor provided a clearly post-hoc rationale to justify what had occurred earlier, the researcher asked if those thoughts had occurred originally and gently restated the purpose of stimulated recall.

The time lapse between the interactive and recall phases of the study and the length of a recall session were held within the recommended bounds. The recall sessions followed from two to four hours (an average of 2.9 hours) after the original interactions. To avoid fatigue, the length of any particular recall session was kept to within one to two hours (an average of 1.5 hours).

Protocol Analysis

Several transformations of the raw recordings were made and were based on a technique developed by Smith and Sendlebach (1977). First, the experimenter listened to the stimulated recall tape (which also contains the original interaction) while simultaneously viewing the tutorial observation notes. Then, either concise summaries or verbatim quotes with verbal affects (e.g., voice tone; laughter) and tutorial observations were recorded as they occurred (see Figure 3.1). The specific site of the original data source (tape position number and the original observation notes) was recorded as a reference. Also, for the stimulated recall comments, account was kept of who stopped the tutorial tape. The student interview tape was treated in the same way. All recorded observations were written on numbered lines for later referencing and became the basic written transformation of the raw information.

[Figure 3.1. A page from the protocol analysis of tutorial interaction 1-15. (The reproduced page, which shows the transcription, stimulated recall comments, inferences, and protocol references in parallel columns, is not legible in this copy.)]

As each summary or verbatim note was extracted from the taped interaction and stimulated recall, inferential interpretations were made in terms of the diagnostic tutorial model and the model knowledge base of the focal target question. These inferences concerned those segments of the interaction which indicated:

1. where hypotheses about the student's knowledge state were generated and what those hypotheses were;
2. where those hypotheses were evaluated and what those evaluations were;
3. what cues were attended to or elicited and why;
4. where cues were interpreted and what those interpretations were;
5. what cues were missed and why;
6. what specific parts of the model knowledge base were dealt with and where the models were lacking;
7. where a decision rule was used and what that rule was;
8. what the tutors' intentions were;
9. what the tutors' feelings were about themselves, the students, and the materials;
10. what the tutors learned from the recall session; and
11. what were the ambiguities in the data and where did they occur.

Other inferences which supported, refuted, or modified the diagnostic model were also noted (e.g., the tutor differentiates between what students know and what they can recall).

Likewise, four kinds of inferences were made from segments of the student interview tapes. These inferences included:

1. summaries of the student's answers to interview questions,
2. interpretations of what the student's difficulties were as defined by the student and the experimenter relative to the model knowledge base,
3. those specific parts of the model knowledge base the student used in solving the practice problems, and
4. those specific parts of the model knowledge base that the student was deficient in.

Each inference was numbered consecutively for later referencing. Supporting (or refuting) evidence for each inference was documented by citing the specific reference point of the written data (protocol number, page number, and line number). That specific part of the model knowledge base (either a specific pathway, operation, or rule) that a particular inference made reference to was also cited. The entire analysis for an exemplar protocol can be viewed in Appendix F.

Since the number of interactions per task group per tutor varied from the number originally planned for analysis (three), some adjustments had to be made. In those cases where the tutor continued to interact with the same student while focusing on different questions within the same task group, the interactions were subclassified by letter (e.g., 2-22a, 2-22b, 2-22c) and the protocols were analyzed together, but treated separately when the tutorial models were later developed.
When the number of interactions per task group per tutor exceeded three, then the following criteria were used to select the interactions to be analyzed: 90 1. only one student interacted with the tutor, 2. the time lapse between the interaction and the student interview was relatively short, 3. the same exact question had not been previously analyzed, and 4. the interaction was relatively short. The only Task 1, Tutor 2 interaction from the pilot study was analyzed since only two interactions of this type were taped during the actual investigation. In summary, except for those interactions in which the same student was tutored several consecutive times (Tutor 2 had seven interactions with three students involving Task Group 3 questions), three interactions per task group per tutor were analyzed. Operational Definitions and Evidence Enumeration After the initial transformations and protocol analysis were completed, all transformed protocols were reviewed again to operation- alize the concepts (cue, hypothesis, diagnosis, etc.) and processes (hypothesis generation, hypothesis evaluation, etc.) Of the tutorial diagnostic model, to determine what modifications of the model were suggested by the inferences generated, and to Obtain evidence Of the validity of stimulated recall. As the inferences were generated in the initial pass through the written transcriptions Of the tapes, several categories Of findings became sporadically evident in one or more of the protocols. The findings were grouped under the following labels: 91 Process Categories: (1) tutor hypothesis generation and (2) tutor hypothesis evaluation. Taxonomic Categories: (1) types of hypotheses, (2) types of cues, and (3) reasons for tutor question generation. Relationship Categories: (1) tutor decision rules, (2) cue- hypothesis relationships, and (3) heuristic rules. Measures of Method Validity: (1) tutors' ability to accurately recall events as they had originally occurred, (2) tutors' ability to differentiate between accurate recall and post- hoc analysis of the situation, and (3) locations of data ambiguity. Unplanned Benefits: (1) tutors' increased understanding of their own teaching process. To cohesively and systematically document these findings, a second pass through the protocols was made. Each segment Of the proto- cols which exemplified one of the numbered categories above was dupli- cated under the appropriate category label along with a reference to the original location (protocol number and either inference number or page and line number). In general, this analysis resulted in lengthy lists Of concrete instances for each numbered category. Instances Of validity measures and unplanned benefits were then simply enumerated. Data ambiguity was categorized according to the type of inference that the ambiguity involved (e.g., hypothesis evaluation). Similar instances within each respective process, taxonomic, and rela- tionship category (except cue-hypothesis relationships) were grouped 92 according to their similarities. Then, labels and definitions char- acterizing those groupings were developed. As a means of summarizing the cues, hypotheses, and their relationships, a cue x hypothesis matrix was formed for each inter- action. Hypotheses that were generated during the interaction were listed chronologically across the top and coded as to type. A chrono- logical list of the cues attended to during the interaction was placed vertically along the side and also coded as to their type. 
The cells of the matrix summarized how the cues were interpreted by the tutor in relation to generated hypotheses. An example of such a matrix is provided in Figure 4.2 on page 128. Development of Tutor Models The final transformation of the taped protocols involved the combination of specific interactions to form models of tutor behavior and psychological processing. As a preliminary step to model develop- ment, linear flow charts summarizing the action of each protocol and covert mental operations of the tutor were created. Each unit of the flow chart summarized a chunk of interaction that either was self- contained and homogenous as to its content or was a segment from which an inference was derived. Unit boundaries were defined as those loca- tions where succeeding inferences were evident or where the tutor or student changed the thrust of what was occurring. The amount of interaction that a unit outlined varied from a paragraph to a sen- tence in length; details were ignored. Each unit was consecutively numbered and referenced to the original protocol. A flow chart Of interaction 1-15 is provided in Figure 4.1 on page 100. 93 Using these summary flow charts, the written transcripts of the tapes, and the cue x hypothesis matrices, a model of tutor behav- ior and mental processing was developed for each tutor-task group in the following way: 1. Segments of the apprOpriate transcripts and summary flow charts were read parallel to each other. 2. Reading continued until points of similarity (primarily in terms of student and tutor behaviors; secondarily in terms Of tutor mental Operations) between two or more specific interactions were noticed. 3. Flow charts were useful as a summary guide; actual transcripts were studied and details noted to aid in the determination of similarity points. 4. If the points of similarity were significantly similar (that is, the differences in the events were inconsequential), then a common operation was posited. This operation depicted either the behavior of the tutor, the mental processing of the tutor, or both as it had occurred in the specific interactions and was referenced to the summary flow charts from which it was derived. 5. Since in many cases the mental processing of the tutor was generally alike (the tutor was generating a hypothesis in each case), although specifically different (the tutor's specific hypothesis generated in each case was different) in similar locations across several interactions, the following rules were developed to deal with the inclusion of the tutor's psychological processes in the models: 94 a. If the psychological process Operation was derived from only the psychological process of a segment from one inter- action, then only the general process was included (Evaluate Hypothesis [H]; Generate H, etc.). b. If the psychological process Operation was derived from the psychological processes of segments from two or more interactions that were generally different (Generate H in one case, Acquire Cue in the other case), then no process was included. c. If the psychological process Operation was derived from the psychological process of segments from two or more inter- actions that were only generally, but not specifically similar (Evaluate H was occurring in two or more cases, but the hypothe- ses were specifically different), then only the general process was included. d. 
If the psychological process Operation was derived from the psychological process of segments from two or more inter- actions that were specifically similar (Evaluate H: -IIB4 Info was occurring in two or more cases), then the specific process was included. e. If several different psychological processes were occurring at nearly the same time, then these rules were applied to each process separately. 6. Between each pair of sequential similarity points (common operations), the intervening differences in the interactions were studied to determine their significance. 95 7. Significant differences between interactions were judged when an event (either student or tutor behavior) occurred which: a. led to completely separate events which were not common across interactions, b. led to a different order (flow) of events which were common across interactions, c. led to identified tutor psychological processing, d. was directly related to the specific question of focus. Some examples of insignificant differences were: (i) the student said "yes," (ii) the tutor's specific wording or diagram was different but the meaning was similar across cases, (iii) the student asked if the units for R are always the ones given. 8. If differences were determined to be significant, then the specific interactions were closely studied to determine why and how the interactions were different. 9. Answers to the why question were incorporated into the model as decision rule operations (in the form of yes-no questions, the answer to which would determine the flow). Clues which helped determine why the events were different in different interactions included: the student's immediate behavior, patterns in the interac- tion, the related stimulated recall of the tutor, the nature of the target question, and links between separated segments of the inter- action. 96 10. If no answer to first question was found, but the differ- ences were significant, then a split flow (in the form of an inverted "Y") was used with the Operation "OR" placed at the junction. 11. Answers to the how question were incorporated into the model as subroutines. Chapter Summary The context for the study was described as a freshman chem- istry, self-paced, tape-tutorial instructional course. The ideal gas laws were the specific content within which tutorial interactions were Observed. The course examination questions that tested the con- tent of interest were categorized into mutually exclusive task groups, of which four were selected for further analysis. The knowledge required to successfully complete the selected tasks was represented either as procedural flow charts (for the mathematical tasks) or as lists of prOpositional rules (for the verbal tasks). Two experienced tutors were selected as subjects for the study. For each tutor, approximately three tutorial interactions involving each of the selected tasks were tape recorded. Post- tutorial student interviews were used to assess tutor effectiveness and post-tutorial stimulated recall sessions were tape recorded to Obtain information potentially relevant to the tutor's thoughts and feelings as they had originally occurred during the interactions. The tape recorded interactions and stimulated recall sessions were tran- scribed and inferences, based upon the tutorial diagnostic model, were made concerning the tutor's mental processing. Concrete instances of 97 the constructs of a tutorial diagnostic model and measures Of method validity were enumerated in a second pass through the transcripts. 
Each interaction and the complementary set of inferences were summarized in the form of coded flow charts and cue x hypothesis matrices. Finally, these intermediate summaries were combined for each task group and tutor into tutorial performance models.

CHAPTER IV

SUMMARY OF RESULTS

Introduction

The purpose of this chapter is to organize and summarize the results of this study. The results are organized into six sections: (1) summary flow charts, (2) diagnostic constructs, (3) tutorial models, (4) tutor effectiveness data, (5) validity of tutor knowledge representations, and (6) validity of methodology. The first section begins with an explanation of an example summary flow chart model of a tutorial interaction which represents part of the basic data. In section two, the concepts (hypothesis, cue) and processes (hypothesis generation, hypothesis evaluation, cue interpretation, and cue acquisition) of the tutorial diagnostic model are refined and operationalized and concrete examples are provided. The tutor's psychological processing relating to the tutorial diagnosis is summarized for each interaction in cue x hypothesis matrices which are also part of the basic data for this study. An example matrix is presented and discussed along with summary data from other matrices. In the next section, the tutor's rules which serve to guide their interactive behavior are presented first. Of the eight general tutorial models depicting tutor behavior and diagnostic psychological processing, and developed from specific interactions, two are displayed and discussed. The other six are included in Appendix G. This section ends with a review of each tutor's own conception of a tutorial. Several measures of the effectiveness of each tutor for each task group are presented and explained in the fourth section. The degree of correspondence between the developed representations of that knowledge the course was designed to develop and the tutor's structuring of that knowledge is summarized in section five. Finally, this chapter ends with the presentation of data related to the accuracy of each tutor's recall ability as it bears on the issue of method validity.

Summary Flow Charts

The protocol analysis of each interaction was transformed into a linear flow chart. These flow charts represent a summary of the flow of the tutorial interaction and depict the covert mental, diagnostic processing of the tutor. Figure 4.1 depicts a flow chart for interaction 1-15. Reviewing the transcription and protocol analysis for interaction 1-15 (found in Appendix F) may aid in understanding the forthcoming explanations. Before presenting a detailed narrative of flow chart 1-15, a brief explanation of the format and content of the flow charts is necessary to clarify this basic type of data.

Format and Content of Summary Flow Charts

Flow chart units. Each flow chart is divided into boxed units which are numbered and separated by arrows. These units summarize a chunk of the transcribed protocol that either was self-contained and homogenous as to its content or was a segment from which an inference was derived. There are three types of units.
."iGJ'iP P'AGEM - ' S: EXPLAINS PROOLEN “ DYNAMICS ORAws Aoo. aox :1: wISNEo 3 USED ORIG- : I. INAI. ORAwINC l _____ I ----.. szExPLAINS HALFED "' PRESSURE I AOOS EACH CAS To OBTAIN ANSIIIER {CUE INTERPRETATION I EVAL : H I REJECT H2 L C 100 1 : EXPLAINS CORRECTION ' FACTOR I I :OEAL:161,5 IIIICS) : 2 7 EEXCLAIUS, SHE SEES 1 'CUE INTERPRETATION I : OENtREN H I I N3z-IO4INPOIPV=III,T32| ,““:l:“’7 s: REVIEwS PROCEDURE ° ORAws New PROSLEN S 1 MAKES TOTAL V HISTAKE 1:CORRECTS :DEAtleAi I -----1 ...... S: STATES INCORRECT '° FACTOR , VA/VB CUE INTERPRETATION EVAL'. H REJECT H3 I I I I GEN: RE” H I I I I J H4z-IIIA‘Ib EVAL: N ACCEPT H4 ' I I 1 EXPLAINS VGOES FROM VA TO VT S:SAYS OK I END 3 A summary flow chart from tutorial interaction 1-15. 3: 30-34 4: I, I4-IS 7,3,9 4: 19 IO 4:32-34 5: MI 525-16 II 5: 17-20 I2 13 6:5-8 101 aReference to protocol inference (1) numbers. bReference to protocol page (3) and line (7-12) numbers. CKey to symbols used: APT: CUE ACQUISITION: CUE INTERPRETATION: DEAL: EVAL: GEN: REM: S: -IIIAlb: Aptitude type of hypothesis. Taking in information relevant to evaluating a hypothesis. Judging whether a cue supports or refutes a given hypothesis. That part Of knowledge base representations which were dealt with during the interaction. Diagnostic type of hypothesis. The process of accepting or rejecting a particu- lar hypothesis. The generation of a particular hypothesis. A tutor hypothesis of the student's knowledge state referenced to the knowledge base repre- sentations. Instructor or tutor. Remediation type of hypothesis. Student Denotes that step lb of route A of knowledge base representation III is unknown by the student. 102 summarize the behaviors of the student and the tutor as they had originally occurred in the interaction. This unit is Shown sur- rounded by solid lines and is referenced to the protocol page and line numbers which it summarizes (see Figure 4.1, unit 11). Occa- sionally a unit summarized only the inferences from the protocol analysis and stimulated recall reports and is surrounded by dashed lines (see Figure 4.1, unit 1). These units are referenced to the inference numbers located in the protocol analysis. Many of the units contain both interactive behaviors and inferences that are related to those behaviors (see Figure 4.1, unit 2). Summaries of tutor thoughts and research inferences. There are four kinds Of information that are Shown within the dashed lines of a unit: (1) summaries of stimulated recall reports, (2) inferences concerning the tutor's approach to problem solving, (3) inferences concerning which item of knowledge is focused upon during the inter- action, and (4) inferences concerning the tutor's diagnostic mental processing. The summaries of stimulated recall reports usually con- tain those feelings, goals, and knowledge deficiencies that are commu- nicated by the tutor. For example, in Figure 4.1, unit 3, during stimulated recall, the tutor reported being pleased that the student (denoted by "5") was doing the writing and that the student used a diagram. Occasionally, certain interactive behaviors and stimulated recall reports led to inferences that the tutors were using and teaching heuristic rules for problem-solving. Since this type of inference relates to the validity of the knowledge base representations 103 and is not found in interaction l-15, a discussion of it is postponed until the validity issue is addressed. Relationship to knowledge base representations. 
Relationship to knowledge base representations. Since the remaining two types of information relate directly to the knowledge underlying each study or exam question (upon which the interaction is focused), it seems necessary to discuss this underlying knowledge first in order to clarify these two types of information contained in dashed-line flow chart units. It should be remembered that similar specific instructional questions were grouped into task groups. For each task group, the knowledge base required for solving a specific problem was developed as procedural flow charts or as lists of propositional rules. These proposed knowledge base representations occur in almost every dashed unit of every summary flow chart in the following ways. The observable content of each interaction consists of the behaviors of the student and tutor as they deal with the appropriate underlying knowledge. Each component of the underlying knowledge dealt with during the interaction was referenced to a coded item of the proposed knowledge base representation. This permitted (1) the explicit, uniform labeling of the knowledge dealt with during the interaction and (2) the validation of the proposed knowledge representations. These references to the chemistry knowledge of the interaction are the third type of information included in dashed-line units, and this is denoted in the unit by the concept DEAL followed by a coded reference to an item of the appropriate proposed knowledge base.

For example, in Figure 4.1, unit 6, the tutor (denoted by I for instructor) explains the correction factor to be used. Using the appropriate knowledge base representations for the question of focus in interaction 1-15 (which can be found in Figures B1 and B3 of Appendix B), an inference was made that the tutor is dealing with operations 1 (Is Y2 > Y1?) and 5 (Write X2 = X1 x factor) of route G of knowledge base I, which is a subcomponent of operation 5 (Go to Knowledge Base I) of route C of knowledge base III.

Inferences concerning the tutor's diagnostic mental processing (the fourth type of information included in the dashed-line unit) also reference the knowledge base representations. It was proposed in Chapter II (and is reviewed in the following major section) that the tutor develops a conception of the student's state of knowledge by generating hypotheses about what the student knows and doesn't know (denoted in the unit as GEN: H; the Dx, APT, and REM preceding the H refer to diagnostic, aptitude, and remediation types of hypotheses, respectively, and are discussed in the next major section). The tutor also acquires and interprets information (denoted in the unit as CUE ACQUISITION and CUE INTERPRETATION) relevant to those generated hypotheses in order to either reject or accept the hypotheses (denoted in the unit as EVAL: H). These hypotheses are characterized in terms of the specific knowledge base being focused upon during the interaction. For example, unit 7 of Figure 4.1 denotes that the tutor generated a third hypothesis (GEN: REM H). His hypothesis was that the information item (PV = k) of operation 4 (PV = k?) of route B of knowledge base I and operation 2 (X Decreases, Use a Factor < 1) of route G of knowledge base I were unknown by the student (H3: -IB4INFO (PV = k), -IG2; the negative sign denotes that this knowledge is unknown).

Therefore, the data displayed in the flow charts can only be interpreted with reference to the knowledge base representations that were developed.
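The route G operations just cited encode the familiar correction-factor way of working gas-law problems: decide whether the quantity of interest must increase or decrease when the other variable changes, and then multiply its initial value by whichever ratio of the two given values is greater or less than one accordingly. The sketch below is a minimal illustration of that reasoning; the function name, its arguments, and the numbers in the usage line are assumptions made for illustration, not a reproduction of the study's knowledge base I.

```python
# Illustrative only: correction-factor reasoning of the kind route G appears
# to encode. Decide whether the unknown quantity X increases or decreases,
# then multiply the initial value by the ratio of the two given Y values
# that is greater than or less than one accordingly.

def correction_factor_solution(x1, y1, y2, inverse=True):
    """Return X2 given X1 and a change in Y from y1 to y2.

    inverse=True  -> X and Y inversely related (e.g., P and V at constant
                     T and n, Boyle's law).
    inverse=False -> X and Y directly related (e.g., V and absolute T at
                     constant P and n, Charles's law).
    """
    y_increases = y2 > y1                      # an "Is Y2 > Y1?" style question
    x_decreases = y_increases if inverse else not y_increases
    # Pick the ratio of the Y values that is < 1 if X must decrease, > 1 otherwise.
    factor = min(y1, y2) / max(y1, y2) if x_decreases else max(y1, y2) / min(y1, y2)
    return x1 * factor                          # "Write X2 = X1 x factor"

# Hypothetical numbers: a gas at 760 torr allowed to occupy twice its volume.
print(correction_factor_solution(760.0, 1.0, 2.0, inverse=True))  # 380.0
```

In interaction 1-15 this is the kind of reasoning behind the student's factor of 1/2: when a gas comes to occupy a larger total volume, its partial pressure falls in proportion.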
Specifically, the hypotheses generated by the tutor are characterized in terms of a specific relevant knowledge base. The influence of a specific knowledge base on the tutor's diagnostic processing is also confirmed by the data on tutor effectiveness to be presented shortly. Having reviewed the basic symbolic and structural character- istics of the flow charts in general, a discussion of a particular interpretation is now presented. Narrative Interpretation Of Interaction 1-15 Before the interaction began, the tutor, having acquired cues from a previous interaction with the same student, hypothesized that this student is smart (Unit 1). The flow chart shows that this hypothesis was generated at the beginning of the interaction because this was the location that the hypothesis was reported by the tutor during stimulated recall. A more likely alternative is (and there is some evidence in the protocol to suggest) that the tutor generated this hypothesis during the prior interaction and simply retrieves it from memory at the beginning of this interaction. The tutor also feels that the taping procedure is an imposition on the student's time. 106 The actual tutorial was initiated by the student who suggested that she guessed at the right answer and wished to know why it was right. The tutor processed and interpreted this information and generated the diagnostic hypothesis that the student's deficiency is significant and that the entire knowledge base for this type of task may not be known by the student (Unit 2). In unit three, the student demonstrated her solution of the exam problem and drew a diagram. In relation to the knowledge repre- sentation, the student was dealing with step 1 of route A of knowledge representation III. The tutor interpreted the student as being cor- rect and was pleased that she was doing the writing and was using a diagram (Unit 3). The student continued to explain how the answer was arrived at (Units 4 and 5). The tutor evaluated his original hypothesis and rejected the notion that the student's deficiency is a major one; his decision was based upon the cues the student had provided which comprise the student's drawings and explanation. Unfortunately, the reason why the student initiated this explanation was not tape recorded. During the beginning of the student's explanation, the tutor provides some positive reinforcement ("right") in response to several questions from the student asking if what she was doing was correct. Initially, the tutor had not read the problem and it was difficult to respond with positive and accurate judgments. After the student describes how she arrived at the correct answer, the tutor remediated by providing several steps of the knowl- edge base that the student fails to mention during her explanation 107 (Unit 6). Five data points exist that may account for why the tutor launches into this particular remediation. Three of these were cues which are logically related to the remediation: (l) the tutor reports that the student said in the beginning, "I guessed at the right answer, maybe you can tell me why it's right?", (2) the student seems to understand the initial pathway of the knowledge representa- tion as evidenced by her drawings, and (3) the student says, "I just halfed it." All this may have suggested the generation and evalua- tion of a hypothesis that the student lacked several steps Of the second pathway. How the tutor begins the remediation is an addi- tional point of evidence. The tutor says, “The reason you halfed it was. . . 
." Last, when solving a similar practice problem, the stu- dent demonstrates a deficiency with one of these same steps. All this evidence suggests that the tutor may have generated, evaluated, and acted on the basis of a hypothesis that the student lacked these par- ticular steps. In spite of this evidence, no inference was made about the generation of a hypothesis. The reason for this stems from the objec- tives of the present study. One of these concerned the validation of a diagnostic model and therefore it seemed faulty to assume a model, interpret reality in terms Of inferences that are derived §plely_from the model, and then conclude that the model was validated. Therefore, inferences of hypothesis generation were adopted only when more direct evidence existed (e.g., when the tutor verbalizes something repre- sentative Of hypothesis generation during the stimulated recall 108 session). This more conservative approach was believed necessary in order to establish some validity to the model. This suggests that there may be instances where hypotheses were generated, but for some particular reason (inaccurate recall perhaps) they did not Show up in the analysis because they were never mentioned by the tutor during stimulated recall. During the tutor's remediation some important steps were ignored, but the tutor does remind the student of the way they had done the problem in a prior interaction. The student cut in with the required piece of information (the correct correction factor, 1/2) and although she suggests she understood, the tutor hypothesized that she doesn't know the neces- sary pressure-volume relationship nor how to develop the correct cor- rection factor for the apprOpriate equation (Unit 7). This hypothesis was generated because, according to the tutor, "she just jumped to 1/2." The student repeated the generalized equation necessary for solving the problem and generated a new problem, but she made an error in determining the total volume of the mixture of gases (Units 8 and 9). There was no stimulated recall at this point, but from the Short correction that the tutor provides, the student's error was probably not believed to be serious. The student continued to explain the problem but she stated the wrong correction factor. Although this cue seems to logically support the tutor's prior notion (H3) of the student not knowing pressure-volume relationship or the develOpment of the correction factor, the tutor rejected this 109 hypothesis and instead generated and accepted the hypothesis that the student was not understanding the volume change of one of the gases (Unit 10). It is interesting to note that part of the third hypothe- sis that was rejected by the tutor was found to be an item of deficiency for the student during the solution of a practice problem. The tutor Simply explains what happens to the volume of the gas and the interaction essentially ends (Unit 11). Again the shortness of the remediation suggests that the student's error in solving the second example was believed to be just "an oversight" on the student's part. Of the twenty-four flow charts, this is one example of the basic data resulting from an analysis of the taped tutorial interac— tions and thestimulated recall sessions. After presenting the refine- ment Of the diagnostic constructs alluded to here, the diagnostic processing results of each flow chart are summarized and discussed in the subsequent section. 
Diagnostic Constructs In the process of developing these summary flow charts, operational definitions of several diagnostic constructs evolved. Although one might expect these operational definitions to appear in the methodology section, they are presented below because they were develOped during the analysis and therefore are viewed as one Of the products of this research. Before presenting these Operational defi- nitions, a brief summary of the diagnostic model developed in Chap- ter II is reviewed, in order to clarify the link between the conceptual and Operational levels. 110 Review of Conceptual Definitions The intellectual strategy for tutorial diagnosis, as presented in Chapter II, was proposed to encompass four basic processes. These processes are defined as: 1. Cue Acquisition: the process Of searching, gathering, and attending to units of information that are either volunteered or elicited. 2. Hypothesis Generation: the process of generating a ten- tative estimate about the current state of the student's knowledge or ability with respect to that knowledge the course was designed to develop. 3. Cue Interpretation: the process of evaluating the fit of a cue to a generated hypothesis. 4. Hypothesis Evaluation: the process of accepting or reject- ing a hypothesis as being logically consistent with the information derivable from the current set of cues. With these conceptions as reference points, the indicators which served as concrete referents for these concepts and processes can now be presented. Operational Definitions Hypothesis generation. Basic to a diagnostic conception Of tutoring is the tutor generation of hypotheses about the student's knowledge state. Since this process is covert, direct indications for this process were limited to the tutor's recall Of prior inter- active thoughts during the stimulated recall sessions. However, 111 references to prior and subsequent behavioral data served to confirm these inferences of hypothesis generation. Eight types of generic statements served to indicate hypothesis generation. These are pre- sented in Table 4.1 along with a specific example of each. For type one statements, in most cases, the tutor's expecta- tion of a particular response to a question must be accompanied by a belief about the student's state of knowledge. Statements Two and Three reflect direct estimations about what the student knows or doesn't know. If the stated goal of the tutor is to check the state of knowledge (Statement Four), then this suggests that the tutor believes the item being checked is either known or unknown by the student or the tutor iS uncertain about the student's state Of knowl- edge with respect to the item. A plus (+), a minus (-), or a question mark (?) were used to indicate each respective case. Statements One through Four were labeled primary indicator statements Since they more directly reflect hypothesis generation. Statements Five through Eight are secondaryindicators because they only indirectly imply the generation Of hypotheses about student knowledge states. For example, stating She was trying to realize a procedure or rule (Statement Eight) only implies that the tutor thinks the stu- dent doesn't know it; the tutor never stated her belief directly. The inferences of hypothesis generation based upon the primary indicators are more certain than those derived from secondary indi- cators. Only about 20% of the hypothesis generation inferences were made on the basis Of a single secondary type statement. Table 4.1. 
112 Tutor Statements Reflecting Hypothesis Generation. Generic Statements From Stimulated Recall (SR) Example . Tutors' SR statements of what they expected the student to respond with when asked a question. "I guess, I expected her to be able to give me the answer, so I just waited. " (2-12b/19: 34-36)C Tutors' SR statements of what they believed the student's state of knowl- edge or ability to be. "I thought this concept was com- pletely handled by her." (1-12/9: 27—29) Tutors' SR statements of what they believed the student's difficulties with the task to be. "That her main problem was decid- ing which problem it was. (1-20/2: 25-26) Tutors' SR statements which indicate that their goal was to test the student's state of knowledge. "The reason I, (uhm) I said that was I just wanted her to go back and (uhm) make sure that that was right. Again, as kind of a check." (2-22a/1l: 34-38) . Tutors' SR statements of what they hoped the student would respond with when asked a question. "I guess I was hoping that she could tell me that the sum of the partial pressure is equal to the total." (2-15a/3: 6-9) Tutors' SR statements of what they believed was either the student's understanding of a par- ticular segment of tutor explanation or the stu- dent's meaning of a particular student com- munication. "I guess I kind Of figured he was understanding this part down there." (1-19/14z26-28) "When She said similar polarity, I thought she meant . . . about dipole moments going this way and that way, if they're both the same, they pull apart, nothing happens they cancel, that's what I thought she meant." (1-12/6: 27-35) 113 Table 4.1. Continued. Generic Statements From Stimulated Recall (SR) Examp'e 7. Tutors' SR statements of "It bothered me, meaning that it what they believed the was a wrong concept." (1-19/4: correctness Of a stu- 30-31) dent's communication to be. 8. Tutors' statements which "I was trying to get her to realize indicate that their goal that she wanted to use PV = nRT." was to have the student (2-9/1: 41) understand some procedure or rule. aGeneric statements 1 through 4 are primary indicator state- ments which directly reflect the generation of hypotheses about the student's knowledge states. bGeneric statements 5 through 8 are secondary indicator state- ments which more indirectly reflect the generation of hypotheses about the student's knowledge states. cThese are coded sequential references (2-12b/l9: 34-36) to the instructor (Tutor 2), the interaction (12b), and the protocol analysis page (19) and line numbers (34-36). Before discussing hypothesis evaluation, it is interesting to note the interpretative processes of the tutor in the second example of Statement Six. The tutor interpreted the meaning of a student com- munication in a much richer and deeper way than was actually verbal- ized by the student. In this case, other evidence suggests that the tutor's inference was probably valid. Hypothesis evaluation. Basically, the evaluation of hypotheses was Operationalized in one of two ways. First, if the tutors made a direct statement of what they believed the student's knowledge state 114 to be, then this was assumed to indicate the acceptance or rejection of a Specific hypothesis already generated (whether previously identi- fied or not). This would also tend to confirm prior inferences of hypothesis generation. 
Second, it was assumed that the tutor would act on the basis of an evaluated hypothesis and therefore some evi— dence other than self-report (interactive behavior) was used to con- firm inferences of hypothesis evaluation. If the tutor remediated a part of the knowledge base that was previously hypothesized to be known, then it was assumed that the tutor rejected the hypothesis. Conversely, if the tutor didn't remediate a part Of the knowledge that was previously hypothesized to be known, then it was assumed that the tutor accepted the hypothesis. Evidence to support these latter assumptions was found on several occasions. The tutors, during stimulated recall, suggested that a link exists between what they believed the student did or didn't know and what they would or would not teach. Specific examples of this along with the generic statements reflecting hypothesis evalua- tion can be found in Table 4.2. Over half Of the instances Of hypothe- sis acceptance Or rejection were based upon direct tutor statements of what the tutor thought the student did or didn't know (SR Statement Three). About 25% of the time, hypothesis evaluation inferences were suggested by tutor statements linking their decision to remediate with their beliefs about the student's knowledge state (SR Statements One and Two). Table 4.2. Tutor Statements Reflecting Hypothesis Evaluation. Generic Statements From the Interaction l. Tutors' interactive state- ments which suggest they are remediating a rule or procedure (that was assumed to form the basis for an immediately preceding hypothesis). Generic Statements From Stimulated Recall (SR) Example "Let me explain that, I'll ex- plain it down to where we were before." (1-12/10: 29-30)a Example l. Tutors' SR statements which indicate that their goal was to begin to remediate because the student lacked the pro- cedure or rule to be reme- diated. 2. Tutors' SR statements which indicate that their goal was not to remediate because the student knew or understood the appropriate procedure or rule. 3. Tutors' SR statements of what they believed the student's state of knowledge to be. "Apparently she didn't understand and, uhm, so it's like, remedia- tion. We're just going to go back and go through about how you got the fraction part." (2-12b/18: 9-12) "She gives me the correct one, so we can just go, go forward without having to spend time on how to get the number of moles." (2-9/2: 36-39) "When she said, 'Oh, multiply by,‘ I took thatbto mean, Oh, she's getting it, she knows what She's doing." (1-17/7: 5-8) aSee Table 4.1, footnote c, for the key to these coded references. bThe referent for "it" was not clear although it is likely the tutor is referring to that part of the knowledge base dealt with in the immediately preceding part of the interaction. 116 Concept Refinement Hypothesis typology. Three types of hypotheses surfaced dur- ing the protocol analysis (see Table 4.3). Diagnostic hypotheses refer to hypotheses that the tutor generates about what the student's original difficulty was. It is essentially an answer to the question, why can't the student successfully complete the task. Very specific evidence was found which supports the existence of diagnostic type hypotheses. During stimulated recall, the tutors would make refer- ences to "the problem" that the student was having. For example, Tutor 1, on one occasion, stopped the tape to cite the time when the student's "problem" was realized. 
During several different interactions, Tutor 2 would ask either directly what the difficulty was or how the student expected to begin the solution. The reason for the latter question, cited during the SR session, was to determine if it was a "small problem."

This last example hints at a dichotomy of diagnostic hypotheses. The tutors tended to differentiate between those "problems" that were major and would require much time for remediation and those that were trivial, like a substitution or math error. Examples of those subtypes can be found in Table 4.3. In general, the diagnostic type of hypothesis was usually one of the first few hypotheses to be generated during an interaction. In some cases, no diagnostic hypotheses were generated, while in other interactions several were proposed and, of these, generally only one was accepted as the diagnosis of the student's problem.

Table 4.3. Types of Tutor Generated Hypotheses. [The body of this table is not legible in the scanned source; it defined the diagnostic (major versus trivial problem), remediation (general versus specific), and aptitude hypothesis types and gave coded examples of each.]
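The typology can also be restated as a simple coding scheme. The sketch below is purely illustrative (the record structure, field names, and example are hypothetical and are not the coding instrument used in this study); it logs one hypothesis by its type, its subtype, the knowledge-base elements it refers to, and its eventual evaluation status.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical coding record for one tutor-generated hypothesis.
# The three types and their subtypes follow the typology described above;
# everything else (field names, the example) is illustrative only.
@dataclass
class Hypothesis:
    htype: str                        # "diagnostic", "remediation", or "aptitude"
    subtype: Optional[str] = None     # diagnostic: "major"/"trivial"; remediation: "general"/"specific"
    kb_refs: List[str] = field(default_factory=list)  # knowledge-base steps or propositions referred to
    evaluation: Optional[str] = None  # "accepted", "rejected", or None if left unevaluated

# Example: a specific remediation hypothesis that the student does not know
# step A1 of Knowledge Base III (the "-IIIA1" notation follows Table 4.11),
# later accepted by the tutor.
example = Hypothesis(htype="remediation", subtype="specific",
                     kb_refs=["-IIIA1"], evaluation="accepted")
print(example)
```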
During remediation, as more information about the student was gained, the tutors tended to develop notions about what the student knows or doesn't know in relation to the knowledge required for task solution. These generated notions about the student's knowledge state were considered another major hypothesis type and, because they arose intermittently during remediation, were called remediation hypotheses. This type of hypothesis contained two subclasses, General and Specific, defined by the level of specificity with which they were expressed (see Table 4.3 for examples of these subclasses).

There are some significant differences between diagnostic and remediation type hypotheses. Diagnostic hypotheses are estimates of only the student's deficiency. Remediation hypotheses are generated in terms of what the student knows and doesn't know and allow the tutor to develop a more detailed conception of the student's state of knowledge. Also, diagnostic hypotheses tend to be generated early on, while remediation hypotheses are generated throughout the interaction and usually outnumber the diagnostic hypotheses for any given interaction.

Aptitude hypotheses, the last type of hypothesis discovered, dealt more with tutor estimations of the student's overall intellectual abilities than with the student's specific state of knowledge. They tended to be generated at the beginning of the interaction and were only rarely changed as the interaction proceeded.

Two other significant findings related to the hypothesis concept are worth mentioning. First, on several occasions, the tutors either stated or implied that past experiences influenced their choice of generated hypothesis. Almost all aptitude hypotheses were generated on the basis of a prior interaction with the same student. Common errors among other students or past self-deficiencies were cited as the basis for generating some of the diagnostic and remediation hypotheses (examples of this can be found in Table 4.4, Source 3a and b).

Another important finding concerned how the hypotheses were characterized. Originally, it was believed that the tutors characterized the student's knowledge state as either containing (the student knows) or lacking (the student doesn't know) a specific procedure or proposition. However, it was found that the tutors described the student's knowledge state using such verbs as "understand" and "remember" in addition to "know." The implications of these descriptions are discussed in Chapter V.

Cue typology. A cue is defined as an item of information that can be interpreted as evidence and that supports or refutes a specific hypothesis. The following sources of cues attended to by the tutors were discovered during the analysis: the student's verbal or nonverbal communication, the tutor's memory, the nature of the target question, and the answer arrived at during the interaction.
These sources, the specific types of cues, and examples are presented in Table 4.4. It should be noted that not all student behaviors were found to be cues. A large proportion of the cues were derived from the student's verbal behavior, and a high percentage of those behaviors concerned descriptions and explanations of the student's chemistry knowledge.

Table 4.4. Sources and Types of Cues Attended to by Tutors. [The body of this table is not legible in the scanned source; it listed, for each of the four cue sources named above, the specific types of cues and coded examples of each.]
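The cue sources named above lend themselves to a similar coding scheme. The following sketch is again hypothetical (invented field names and example, not the study's instrument); it records a single cue by its source, whether it was tutor elicited or volunteered, and the hypotheses it was interpreted against.

```python
from dataclasses import dataclass, field
from typing import List

# The four cue sources named in the text; the record itself is illustrative.
CUE_SOURCES = (
    "student verbal or nonverbal communication",
    "tutor memory (e.g., a prior interaction with the same student)",
    "nature of the target question",
    "answer arrived at during the interaction",
)

@dataclass
class Cue:
    source: str                 # one of CUE_SOURCES
    elicited: bool              # True if tutor elicited (e.g., by a question), False if volunteered
    content: str                # what the tutor perceived
    bears_on: List[str] = field(default_factory=list)  # hypotheses this cue was interpreted against

example = Cue(source=CUE_SOURCES[0], elicited=True,
              content="student states the correct equation needed for the problem",
              bears_on=["remediation hypothesis about the density formula"])
print(example)
```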
Cue elicitation. Sometimes cues were volunteered, and other times they were tutor elicited. Since asking questions was the primary vehicle for eliciting cues from students, it was of interest to determine whether the purpose of every question was to elicit cues and, if not, to discover what other reasons there might be. Table 4.5 summarizes the reasons cited by the tutors for asking questions during the interaction. Generic reason number 8 was cited only by Tutor 1; generic reason numbers 4 through 7 were cited only by Tutor 2. Generic reason number 1 reflects the eliciting and gathering of cues to evaluate generated hypotheses. It was found that cue elicitation was the rationale most often cited for directing questions at the student. Occasionally, either several reasons or no reason was given by the tutor.

Before leaving Table 4.5, it is important to observe that two of the cited examples provide some additional information about the educational sophistication of Tutor 2. The first example provided for generic reason number 1 suggests that Tutor 2 realizes the importance of testing for prerequisite knowledge. Tutor 2 also has some idea of the role of verbal (thought) rehearsal in learning, as evidenced by the example supplied for generic reason number 5.

Table 4.5. Reasons Cited by Tutors for Soliciting Information From the Student.

1. To determine the knowledge state of the student with respect to the chemistry content that is currently or about to be the focus of discussion. Examples: I: "What is the density by definition?" I (SR): "I have to make sure that they first know what density is defined as; if they don't know that, it doesn't do any good what I show them." (2-9/1: 17-18, 23-26)(b) I: "Is that polar or non-polar by the way?" I (SR): "I've been talking for a long time, maybe I better ask him something just to make sure he's with me."
(2-21/4: 7-8, 34-37)

2. To clarify what the student had just communicated. Example: I (SR): "She said divide by molecular weight, and I wasn't exactly sure what she said. I guess I was expecting her to say multiply.... I say something like 'did you say multiply by' or something like that, coming up, to clarify it." I: "So did you say divide or multiply?" (1-17/6: 27-33, 35)

3. To have the student elaborate and be more specific. Example: S: "It's getting larger." I: "What's getting larger?" I (SR): "I guess I'm just helping her verbalize it, nothing else." (2-22a/14: 10-11, 16-17)

4. To have the student attend to a particular part of the chemistry content. Example: I: "So, we expect, if the volume didn't change, the temperature didn't change, can the pressure change?" I (SR): "I guess I'm just emphasizing, trying to show her why...she should have figured that you had to add them up." (2-22b/18: 16-18, 23-26)

5. To have the student practice. Example: I: "OK, tell me why you did that." I (SR): "Having him repeat the thought processes again and say it out loud and listen to it...I was hoping it would convince him a little bit more and he['s] more likely to get the right answer next time." (2-20/4: 42/5: 6-14)

6. To provide, out of habit, an opportunity for the student to ask questions. Example: I: "Does it make sense how to do it?" I (SR): "It's sort of just giving them an opportunity to ask any questions rather than just saying 'OK, that's it'...that too I think is a habit, that I just always ask, giving them that option just in case." (2-16/12: 42/13: 8-14)

7. To determine if the solution to the target question obtained during the interaction agrees with the correct answer. Example: I: "Let's see, is that right?" I (SR): "That was only for me...I was kind of saying to myself, 'am I doing it right and did I do something wrong ...or are you [referring to the student] sure that you got the right answer copied down.'" (2-16/12: 16, 26-34)

8. To determine which procedure the student would like to learn. Example: I: "OK, you want to do it that way?" (1-13/2: 1)

a. S refers to the student, I refers to the tutor-instructor during the interaction, and I (SR) refers to the tutor-instructor during the stimulated recall session.
b. See Table 4.1, footnote c, for the key to these coded references.

Construct Relationships

Cue acquisition and elicitation. Cue acquisition is the mental process of perceiving the item of information or cue (e.g., a student states the correct equation needed to solve the problem focused on during the tutorial) and storing it in memory. These cues can either be volunteered by the student or elicited by the tutor. Elicited cues are those cues that are revealed because the tutor asked a question, stopped in the middle of a sentence to allow the student to complete it, or requested the student to do something (e.g., the tutor tells the student to "Work it out for me"). Cue elicitation is one way the tutor can influence the interaction in order to increase the possibility of perceiving certain cues; logically, therefore, it precedes cue acquisition.

Cue acquisition and interpretation. To evaluate and generate hypotheses, the tutor must first acquire cues (information) and then make some sense of them in relation to a hypothesis (to determine the degree to which a specific cue denies or supports a particular hypothesis). Cue acquisition (the selective attention, awareness, and perception of a cue) is a necessary but not sufficient condition of the latter process, cue interpretation. Table 4.6 displays the combinations of the states of these two psychological processes that were found by protocol analysis.

Cue x hypothesis matrix. For each interaction, a cue x hypothesis matrix was developed to summarize the diagnostic psychological processing of the tutor.
A cue x hypothesis matrix for interaction 1-15 is shown in Figure 4.2. Basically, this diagram contains the tutor-generated hypotheses, listed sequentially in the order they appear in the analysis, across the horizontal axis, and the cues that were attended to, listed sequentially in the order they appear in the analysis, down the vertical axis. Each of the hypotheses and cues was typed according to the classifications presented in Tables 4.3 and 4.4, respectively, and these classifications were included in this and other matrices.

Table 4.6. Relationship Between Cue Acquisition and Cue Interpretation.

Accurate acquisition, accurate interpretation: The tutor accurately perceives a cue and accurately interprets it with respect to a specific hypothesis.

Accurate acquisition, inaccurate interpretation: The tutor accurately perceives a cue but interprets it inaccurately (e.g., the tutor believes the student to be correct when in fact the student is not) because the tutor does not have the appropriate knowledge base in mind (e.g., the tutor hasn't read the target question).

Accurate acquisition, no interpretation: The tutor accurately perceives a cue but waits with interpretation until more information is acquired, because the cue is ambiguous (e.g., the student is, in a general way, correct, but may not be specifically) or because the tutor lacks the necessary knowledge for interpretation (e.g., the tutor does not know the necessary procedures or propositions with which to judge the valence of a cue with respect to a specific hypothesis).

No acquisition, no interpretation: The tutor does not perceive the cue and therefore does not have anything to interpret (e.g., the tutor ignores what the student was saying).

Inaccurate acquisition, inaccurate interpretation: The tutor does not accurately perceive the cue and therefore interprets the cue inaccurately (e.g., the tutor perceives something different than what actually occurred and believes the student to be correct when in fact the student is not).

Figure 4.2. A cue x hypothesis matrix from tutorial interaction 1-15. [Figure not reproducible from the scanned source.]

For example, in Figure 4.2, the hypothesis generated second represents the tutor diagnostically hypothesizing that the student doesn't know anything about the knowledge required for solution and that this was a major deficiency. The cells of the matrix contain the interpretations of the cues with respect to the hypotheses. For example, the first and last two cues that were acquired during this interaction each suggested a hypothesis to be generated. The fifth and last cues were interpreted in such a way that certain hypotheses were evaluated. Cues may also confirm, maintain, or disconfirm a particular hypothesis even though the tutor may not accept or reject the hypothesis being considered. Three other characteristics of this matrix are typical of most other matrices.
First, a cue can be interpreted with respect to more than one hypothesis at the same time. The last cue of Figure 4.2 is an example of this; the student provides an item of information which is interpreted by the tutor to be negatively related to Hypothesis 3 (Hypothesis 3 was rejected) and positively related to Hypothesis 4 (Hypothesis 4 was generated and accepted). This very last interpretation demonstrates another interesting characteristic: a single cue can be the basis for generating and evaluating the same hypothesis. A hypothesis related in this way to a single cue was called a one-cue hypothesis. In contrast, a multi-cue hypothesis is one which is generated and evaluated on the basis of the interpretation of two or more cues (e.g., see Hypotheses 2 and 3). The last characteristic is exemplified by Hypothesis 1 in Figure 4.2. This hypothesis, like many others in other matrices, was generated but may not have been accepted or rejected by the time the interaction ends (or at least no evidence exists to suggest the hypothesis was evaluated). Plausible alternative explanations for this finding are discussed in Chapter V.

Cue x Hypothesis Matrices Summary

Now that the concrete referents for the diagnostic model have been discussed and an example flow chart and cue x hypothesis matrix presented, numerical counts of hypotheses and cues for each tutor across each task group are provided in Table 4.7. The following discussion is limited to the more significant comparisons of the data in an attempt to characterize a "typical" interaction with reference to the diagnostic concepts.

Table 4.7 displays the average number (the range is in parentheses) of hypotheses and cues per interaction for each tutor across each task group. These numbers are based upon three specific interactions per task group, except for Tutor 2, Task Group 3, which was based upon seven. Since the interactions were of different lengths, it is difficult to make exact comparisons between task groups. More hypotheses generated for the interactions in a particular task group may simply reflect the fact that these interactions were longer and therefore there was greater opportunity for hypothesis generation. For this reason, and to provide specific data on the time duration of a tutorial, the average time of an interaction for each task group (based upon interaction times rounded off to whole minutes) is also displayed. Average frequencies and times across all task groups are presented in Table 4.8.

Table 4.7. Cue x Hypothesis Matrices Summarized by Tutor and Task Group.

TUTOR I                    Task Group I    Task Group II   Task Group III  Task Group IV
                           Mean (Range)    Mean (Range)    Mean (Range)    Mean (Range)
Interaction time
  Approx. minutes(a)       2.7 (2-4)       5.3 (4-6)       2.3 (2-3)       5.0 (5-8)
Types of hypotheses(b)
  Total                    3.0 (2-4)       6.7 (4-7)       3.7 (3-4)       6.3 (6-7)
  Diagnostic               1.0 (1-1)       2.0 (0-3)       1.0 (1-1)       1.0 (0-3)
  Remediation              0.7 (0-2)       4.3 (4-5)       2.0 (1-3)       5.0 (3-6)
  Aptitude                 1.3 (0-3)       0.3 (0-1)       0.7 (0-1)       0.3 (0-1)
  One-cue                  1.0 (1-1)       1.3 (1-2)       1.0 (1-1)       1.7 (0-4)
  Multi-cue                0.7 (0-2)       1.7 (1-2)       1.7 (0-3)       3.0 (1-5)
  Unevaluated              1.3 (1-2)       3.7 (1-6)       1.0 (0-2)       1.7 (1-3)
Cues
  Total                    3.3 (2-4)       8.7 (6-10)      4.3 (4-5)       9.0 (7-13)
  Elicited                 0.3 (0-1)       2.3 (1-4)       1.0 (0-2)       1.3 (0-3)
  Unelicited               3.0 (2-4)       6.3 (4-9)       3.3 (3-4)       7.7 (4-13)
TUTOR II                   Task Group I    Task Group II   Task Group III(c)  Task Group IV
                           Mean (Range)    Mean (Range)    Mean (Range)       Mean (Range)
Interaction time
  Approx. minutes(a)       4.3 (4-5)       8.7 (5-13)      3.9 (2-7)          4.0 (3-5)
Types of hypotheses(b)
  Total                    3.3 (3-4)       8.3 (7-9)       3.3 (1-6)          6.0 (4-8)
  Diagnostic               0.7 (0-1)       1.7 (0-3)       0.6 (0-2)          1.3 (1-2)
  Remediation              2.3 (2-3)       6.3 (5-8)       2.6 (1-3)          4.3 (3-6)
  Aptitude                 0.3 (0-1)       0.3 (0-1)       0.1 (0-1)          0.3 (0-1)
  One-cue                  1.7 (0-3)       1.3 (0-3)       0.7 (0-1)          1.0 (1-1)
  Multi-cue                0.0             3.0 (2-3)       1.4 (0-3)          2.3 (1-4)
  Unevaluated              1.7 (0-3)       4.0 (3-4)       1.1 (0-3)          2.7 (2-3)
Cues
  Total                    4.0 (2-6)       10.7 (8-14)     4.6 (3-8)          7.3 (5-10)
  Elicited                 2.0 (0-4)       7.0 (5-9)       2.9 (1-6)          2.3 (0-4)
  Unelicited               2.0 (2-2)       3.7 (3-5)       1.7 (1-3)          5.0 (4-6)

a. This row represents the time in minutes of an average interaction.
b. The cell numbers outside the parentheses represent the average frequency of hypotheses or cues per interaction.
c. Tutor 2 had seven interactions with three students with Task Group III problems.

Table 4.8. Cue x Hypothesis Matrices Summarized by Tutor Across All Task Groups.

                           Tutor 1         Tutor 2
                           Mean (Range)    Mean (Range)
Interaction time
  Approx. minutes          4.1 (2-8)       4.9 (2-13)
Types of hypotheses
  Total                    4.9 (2-7)       4.8 (1-9)
  Diagnostic               1.3 (0-3)       0.9 (0-3)
  Remediation              3.0 (0-6)       3.6 (1-8)
  Aptitude                 0.7 (0-3)       0.3 (0-1)
  One-cue                  1.3 (0-4)       1.1 (0-3)
  Multi-cue                1.8 (0-5)       1.6 (0-4)
  Unevaluated              1.9 (0-6)       2.1 (0-4)
Cues
  Total                    6.3 (2-13)      6.1 (2-14)
  Elicited                 1.2 (0-4)       3.4 (0-9)
  Unelicited               5.1 (2-13)      2.8 (1-6)

Typically, Tutor 2's interaction time tends to run longer than Tutor 1's, except for problems from Task Group IV. The longer average interaction time for Tutor 2 with Task Group II may have been due to the fact that one of the three interactions was abnormally long because the tutor had some difficulty in obtaining the correct answer. This interaction is further explained in the next section, as the tutorial model II-2 is described.

Although Tutor 2 has an additional minute of interaction, both tutors for Task Groups I and III generated an average of three hypotheses for an interaction time of three or four minutes. Generally, for the other task groups, both tutors increased the number of hypotheses with increased interaction time. Except for Tutor 2, Task Group IV, one hypothesis is generated for every minute of interaction time, but these hypotheses can occur anytime within the interaction. Across all task groups, both tutors typically generate a total of five hypotheses (see Table 4.8).

Reviewing the types of hypotheses generated, the ratio of diagnostic to remediation hypotheses mostly favors the remediation type by a factor of three or more (see Table 4.8). For both tutors, the ratio is highest for Task Group I and lowest for Task Group IV. The data in Table 4.8 suggest that of five hypotheses generated during a typical interaction for Tutor 1, one is diagnostic, one is aptitude, and three are remediation. Tutor 2 interactions show very similar data, except that the aptitude hypothesis is replaced by a remediation type. Tutor 1 tended to generate diagnostic hypotheses that suggested a major or a minor deficiency. Tutor 2 tended to generate more major deficiency diagnoses. Specific remediation hypotheses were usually generated more often than general remediation hypotheses for both tutors.

The one-cue and multi-cue categories represent the average number of hypotheses that were generated and evaluated on the basis of one cue or of several cues, respectively.
The ratio of one-cue to multi-cue hypotheses ranged from 2:0 to 1:3 (depending upon the task group) but was generally found to be 1:2. Two out of the five hypotheses generated per interaction were those hypotheses which were believed to be left unevaluated by the tutors (although generated, they were neither accepted nor rejected). These three categories were found to cross with the diagnostic, remediation, and aptitude hypothesis types in a haphazard manner across the task groups for both tutors.

The total number of cues acquired by the tutors varied with each task group, but in general, longer interaction times were associated with a higher number of cues acquired. Six was the average total of cues acquired per interaction across all task groups for both tutors. When average interactions of equal time and task group were compared across tutors, the total number of cues acquired was very similar for each tutor. However, the ratio of elicited to unelicited cues is 1:5 for Tutor 1 and 1:1 for Tutor 2. Again, although both tutors tend to acquire the same total number of cues per interaction, they differ in their proportions of elicited and unelicited cue types; Tutor 2 elicited a higher proportion of the total cues acquired.

What is particularly striking about the data presented is the relationship between the hypotheses and cues and the task groups. Each tutor, in a parallel fashion, seems to vary the number of hypotheses and total cues as a function of the specific task of focus during the interaction. There seems to be greater variation between task groups for any one tutor than between the tutors for any one task group. For example, the total number of hypotheses generated by Tutor 1 for Task Groups I through IV was found to be 3, 7, 4, and 6, respectively. For Tutor 2, the total number of hypotheses for each task group was found to be 3, 8, 3, and 6. Thus, the specific content seems to affect diagnostic reasoning more than individual differences do. This agrees with similar findings for physicians (Elstein et al., 1976).

In summary, a typical interaction for either tutor contained five generated hypotheses and six acquired cues and lasted four to five minutes. One hypothesis is a diagnostic type, and the others are differentially split between remediation and aptitude hypothesis types depending upon the tutor. Also, of the five total hypotheses, one is a one-cue type and the remainder are equally split between multi-cue and unevaluated hypotheses. The proportion of acquired cues that were elicited rather than volunteered tended to distinctly identify the specific tutor that was interacting. Variations in numbers of hypotheses and cues were found to be greater between task groups than between tutors.

The summary flow charts and cue x hypothesis matrices, with their embedded diagnostic constructs, served as the basis for developing models of task-specific tutorials. These models, being part of the substantive findings of this study, are presented next.

Tutorial Models

In an attempt to develop realistic conceptions of a tutorial for each tutor, three types of evidence are presented. During the interactions, the tutors would occasionally verbalize rules which they reported as serving to guide their interactive behaviors. These decision rules are discussed first. Next, selected examples of the task-specific models of tutor behavior and diagnostic information processing are reviewed in some detail.
Some of the verbalized decision rules were incorporated in these models. Finally, each tutor's own conception of the tutorial situation is summarized from interview protocols.

Tutor Decision Rules

The first notion of how the tutors structure the tutorial interaction came from the tutors' stimulated recall statements. On several occasions, each tutor suggested that they applied certain principles during the interactions. These principles or decision rules, transformed into an if-then format, are presented along with the number of interactions in which they were found to have been applied (see Tables 4.9 and 4.10).

Table 4.9. Tutor 1 Verbalized Decision Rules.(a)

1. If the student provides the correct steps for solution, then assume the student to be intelligent.
2. If the student knows the procedure for determining molecular geometry, then skip drawing the Lewis dot structures.(b)
3. If the student has trouble understanding the ideal gas assumption of a very small volume, then present a mathematical proof of that assumption.(b)
4. If the student has learned a specific procedure of solution, then teach that procedure. (Evidence found in 4 interactions.)(c)

[The number of interactions indicating evidence for Rules 1 through 3 is not legible in the scanned source.]

a. Decision Rules 1-3 relate to the diagnostic aspect of tutoring. Decision Rule 4 relates to the treatment (teaching) aspect of tutoring.
b. Decision Rules 2 and 3 apply to only one task group.
c. A violation of this rule was found in one interaction.

Table 4.10. Tutor 2 Verbalized Decision Rules.(a) (The number in parentheses after each rule is the number of interactions indicating evidence of its use.)

1. If the student has already attempted the solution, allow the student to solve the problem in order to diagnose the student's deficiency and determine where to initiate instruction. (4)
2. If the student's response is interpreted as correct, then go on to the next instructional point. (4)
3. If the student provides several correct responses, then that student is smart. (2)
4. If at the end of a tutorial an incorrect answer is generated, suspect the problem to be with the student-provided information. (1)
5. If the solution may present possible diagnoses already encountered in other interactions, emphasize those steps represented by the diagnoses in the current interaction. (1)
6. If the student is smart or knows most of the solution, then move quickly through the interaction. (6)
7. If the student doesn't walk away, teach the entire problem solution even if the student verbalizes his/her own error. (3)
8. If the student provides some indication of knowing the solution, then guide him/her to get the right answer. (3)
9. If the student lacks confidence to attempt the solution, help the student to realize how easy it is. (1)
10. If no other students are waiting for help, allow the student to practice verbalizing the solution. (1)
11. If the student can't understand the factor procedure, then teach the ratio procedure. (1)

a. Decision Rules 1-6 relate to the diagnostic aspect of tutoring. Decision Rules 7-11 relate to the treatment (teaching) aspect of tutoring.
b. This decision rule applies to only a maximum of two task groups.

Certain of these rules seem to relate to the diagnostic aspect of tutoring. For example, Tutor 2's Rule 1 functions to aid in the generation of hypotheses (diagnostic type). Rule 1 for Tutor 1 and Rule 3 for Tutor 2 are highly similar and also help in the generation of hypotheses (aptitude type).
Another similarity between the tutors can be found by comparing Rules 2 and 3 for Tutor 1 with Rule 2 for Tutor 2. All these rules exemplify the hypothesis evaluation process. However, the rules for Tutor 1 are much more specific, and Rule 3 (if the student doesn't know, then teach) is actually the inverse of Tutor 2's Rule 1 (if the student does know, then don't teach). Rule 5 for Tutor 2 is important since it suggests the role played by prior diagnoses held in the tutor's memory.

There are particular rules for both tutors which help them make interactive decisions that are not diagnostically oriented. For example, Tutor 1 tends to teach specific routes of procedures that the student has already attempted to learn. In contrast, Tutor 2 tends not to be as flexible. Rule 10 for Tutor 2 serves as another example of a rule which aids treatment and remediation decisions. For this tutor, the number of opportunities provided for practice is a function of the number of students waiting for tutorial assistance.

Many of these decision rules (particularly those relating to diagnosis) that were identified in the interactions were built into the tutorial models. Rule 1 for Tutor 2, for example, logically fit into several of the tutorial models.

Flow Chart Task-Specific Tutor Models

Except in one case, three separate interactions for each tutor and for each task group were analyzed, and summary flow charts and cue x hypothesis matrices were developed for each interaction. The basic data from each group of three replications (they were not exact replications since the student was different in each case) were combined to yield a general flow chart model characterizing the tutor's sequential actions and diagnostic mental processing. Occasionally, student behaviors were included, since so much of what the tutor does or thinks is a function of student action. Four of these tutorial models were developed for each tutor; each tutorial model was specific to a particular task group which defined the chemistry knowledge dealt with during the interaction. Each model was labeled according to the specific task group and tutor it represented. For example, Model IV-1 represents a generalized model of a tutorial for Tutor 1 dealing with Task Group IV types of chemistry problems (ideal gas concept questions).

The number and length of the tutorial models prohibit a detailed discussion and explanation of each. Therefore, only two models, one for each tutor, are presented here. Each model is explained in some detail; the significant aspects and the points of comparison to the other models for the same tutor and to the other tutor for the same task are highlighted. All other tutorial models can be found in Appendix G. Before discussing the content of the models, the format is briefly explained.

Tutorial model format. There were four basic symbols used to enclose the specific operations of the models. The diamond and the rectangle are the significant figures. The diamond reflects decision points, where what the tutor does or thinks is a function of the verity of the condition stated within the symbol. With few exceptions, decision points are dichotomous. Complex flow patterns occur when several decision points are linked sequentially. These decision points reflect the significant differences between the particular interactions from which the models were built. Several types of decision points were found to be common across tutorial models. These varieties are discussed as each model is presented.
Rectangles represent the operations of tutor behavior and diagnostic thinking that are executed. Student behaviors are also depicted within rectangles and diamonds, which represented elicited or volunteered cues.

Two types of ancillary shapes were also used. Ovals depict the beginning and end of each model. Linkages between pathways of the model are represented by circles. This symbol allows the models to be presented more efficiently and to remain unobstructed by numerous lines. Arrows specify the sequence of operations to be executed.

Description of tutorial model III-1. Since the model implicitly reflects the chemistry knowledge underlying the task of focus, a brief summary of the task and its solution is presented first. A detailed description of the task and the underlying knowledge base can be found in Table 3.4 and Appendix B, respectively. Generally, the following information is representative of that provided in Task Group 3 questions: (1) two ideal gases, each of a specified volume and pressure, are located in separate containers, (2) these gases are then mixed together, and (3) the student is asked for the pressure of the mixture. Essentially, the solution requires the recognition that: (1) when the gases are mixed, their volume changes, (2) this volume change causes a change in the initial pressures, and (3) each gas is independent in terms of its volume and pressure. The procedure for solution involves: (1) drawing a diagram to recognize the conditions listed above, (2) determining the pressure of one gas after it is mixed by multiplying the initial pressure by a factor which "corrects" for the change in volume, (3) performing the same operation for the other gas, and (4) adding the resulting pressures for each gas to find the pressure of the mixture of gases.
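As a concrete illustration of this four-step procedure, the sketch below works through a hypothetical Task Group 3 problem. The numbers, and the assumption that the mixture occupies the combined volume of the two containers, are invented for illustration and are not taken from the course materials.

```python
# Hypothetical Task Group 3 example (invented values, for illustration only).
# Gas A: 2.0 L at 3.0 atm.  Gas B: 4.0 L at 1.5 atm.  The gases are mixed,
# at constant temperature, into the combined volume of 6.0 L.

V_A, P_A = 2.0, 3.0        # initial volume (L) and pressure (atm) of gas A
V_B, P_B = 4.0, 1.5        # initial volume (L) and pressure (atm) of gas B
V_total = V_A + V_B        # volume available to each gas after mixing

# Each gas now fills V_total, so its pressure is "corrected" by the factor
# V_initial / V_total (Boyle's law: P_initial * V_initial = P_final * V_total).
P_A_final = P_A * (V_A / V_total)   # 3.0 * (2.0 / 6.0) = 1.0 atm
P_B_final = P_B * (V_B / V_total)   # 1.5 * (4.0 / 6.0) = 1.0 atm

# Because each gas is independent, the pressure of the mixture is the sum
# of the corrected (partial) pressures.
P_total = P_A_final + P_B_final     # 2.0 atm
print(P_A_final, P_B_final, P_total)
```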
Tutorial model III-1 is shown in Figure 4.3. Table 4.11 contains a key to the abbreviations used in these models. This model begins with a decision point (1) (numbers in parentheses refer to the specific operation being discussed). The condition involves whether the tutor had interacted previously with the same student. If he had, then interpretation of the cues from that prior interaction suggests the appropriateness of generating an aptitude hypothesis that the student is relatively smart (2). This type of decision point (the condition of a previous interaction) appears in almost all other tutorial models. It is unclear exactly when this hypothesis was generated; it occurred either during the prior or the observed interaction. Since it was discussed at the beginning of the observed interaction, this process is included there.

The next decision point (3) is a student behavior and is significant in that it divides the model into two separate pathways. The information contained in each pathway is roughly equivalent. The difference is due to who performs the operations. The tutor executes operations 4 through 6; the student executes operations 9 and 10. Many times what the student says or does determines the order in which an item of knowledge is dealt with by the tutor. For example, the tutor will explain that the volumes of gases change after mixing, either after the student brings it up (7-8) or later on (16).

Decision point 13 simply depicts a possible past step of the interaction. It asks whether the volume change of the gases had been previously dealt with.

Table 4.11. Abbreviation Key of the Tutorial Models.

ANS: Answer
ASSUM: Assumption
ATT FORCE: Attractive forces between molecules
BET: Between
CUE ACQ: Active acquisition and solicitation of cues
CUE INTERP: Interpretation of cues attended to
DEF: Definition
Dx: Diagnosis
ELAST COLL: Elastic collisions between molecules
ELECTRONEG: Electronegativity
EQ: Equation
EVAL: Evaluation of some hypothesis
GEN: Generation of some hypothesis
HA: Aptitude hypothesis
HD: Diagnostic hypothesis
HR: Remediation hypothesis
HYBRID: Bond hybridization
I: Instructor or tutor
INTRO: Introduce
KNOW: Knowledge base
MOL: Molecule
PROB: Problem or target question
P-V RATN: Pressure-volume rationale, the two-step procedure for developing a correction factor
QUEST: Question
RELAT: Relationship
S: Student
STEP: Step or operation of knowledge base procedure
SUB: Substitute
UNDERST: Understanding

Subscripts
A: Referring to gas A
B: Referring to gas B
I: Initial value of variable at Time 1
F: Final value of variable at Time 2
T: Total

Variables
D: Density
MW: Molecular weight
n: Moles
P: Pressure
R: Gas constant
T: Temperature
V: Volume

-IIIA1: The student doesn't know or remember step A1 of Knowledge Base III
IIIA1: The student does know or remember step A1 of Knowledge Base III

Figure 4.3. Tutorial model III-1. [Flow chart not reproducible from the scanned source.]

This type of decision point (asking about a past state) was used in many models to accommodate differences in sequence between the specific interactions.

In this model, as well as in others, there are certain operations that are common to all three specific interactions from which the model was derived. These common operations are distinguished in that, no matter which pathway is activated, they must be executed (see operations 12, 16, 21, 25, and 26).

Decision point 17 is again a condition based upon student behavior. If the student does provide a correction factor, the tutor acquires and interprets the cue and generates a remediation hypothesis related to the knowledge underlying the generation of the correction factor (20). If no correction factor is forthcoming, the tutor takes the initiative to ask for some information (Does the pressure go up or down?) in the hope of acquiring some cues to evaluate a highly specific generated hypothesis that the student knows that pressure and volume are inversely related and that the pressure increases (18). This same specific hypothesis is generated by Tutor 2 during interactions involving the same task group.

After the tutor completes the equation for determining the pressure of one mixed gas (21), either the next gas is dealt with (24) or the student generates a new example after suggesting his/her own difficulty (22, 23). A wrong correction factor provided by the student is interpreted by the tutor.
He generates a diagnostic hypothesis that the student has failed to understand that the volume of the gases has changed, and he reexplains that part (25, 26). Either the student (27, 28) or the tutor (27, 29) provides the correction factor for the other gas, and either the interaction ends there if this was a student-generated example (28, End), or the tutor continues to explain the second correction factor equation (29, 30). Finally, the tutor suggests adding the two determined pressures to obtain the answer (31). The last decision point (28) is typical of others where the condition relates to the present state of the interaction (e.g., is this the second Task Group 3 problem being dealt with?).

There are several important differences between this model and Tutor 2's complement. Tutor 2's model is much longer and more complex. Tutor 2 asks many more questions, processes more information diagnostically, and deals directly with the rationale for deriving the correction factor. Tutor 1 never deals with this rationale and misinterprets cues which suggest that the students lack these knowledge procedures and propositions. Also, this is expected, since Tutor 2 interacted with each of three students at least twice over Task Group 3 problems. Each student (n=3) that Tutor 1 dealt with solved the practice problem incorrectly and demonstrated the lack of this rationale. In contrast, Tutor 2's students (n=3) correctly solved the practice problem.

Description of tutorial model II-2. Task Group 2 questions concern the determination of the density of an ideal gas, given the values for its pressure and temperature and sometimes the volume. The solution involves: (1) recalling the three appropriate equations, (2) combining them algebraically into a simple equation, and (3) substituting and mathematically solving for the correct variables. An alternative solution uses the same information but involves additional assumptions and intermediate equation solutions and substitutions. Again, more complete descriptions of the task and its underlying knowledge can be found in Table 3.4 and Appendix B, respectively.
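To make the algebra concrete, the sketch below combines the three equations (the ideal gas law, the definition of moles, and the definition of density) into a single working formula and evaluates it for a hypothetical case; the choice of gas and the numerical values are invented for illustration and are not taken from the course problems.

```python
# Hypothetical Task Group 2 example (invented values, for illustration only).
# The three equations:  PV = nRT,   n = m / MW,   D = m / V.
# Combined algebraically:  D = m / V = (n * MW) / V = P * MW / (R * T).

R = 0.08206            # gas constant in L·atm/(mol·K)
P = 1.0                # pressure in atm
T = 273.15 + 25.0      # temperature, 25 °C converted to kelvin
MW = 44.0              # molecular weight of CO2 in g/mol, derived from the formula of the gas

D = P * MW / (R * T)   # density in g/L
print(round(D, 2))     # approximately 1.80 g/L
```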
Figure 4.4. Tutorial model II-2. [Flow chart not reproducible from the scanned source.]

This model, like the previous one, begins with a decision point dealing with the possibility of a previous interaction (1). As before, hypotheses are generated in or carried from this prior interaction (2). If no prior interaction had occurred, then the tutor asks for the student's difficulty (7). This same question is asked in interactions over other task groups that Tutor 2 experiences. The student responds with a basic question (8), which leads the tutor to initiate the solution (9).

A pathway division, akin to that found in the beginning of Model III-1, exists because the student provided some information (3). The tutor's response (4) exemplifies the first verbalized decision rule presented in Table 4.10 (if the student has already attempted the solution, allow the student to solve the problem). These operations (3, 4) and number 7 seem to have a strong diagnostic function. The operations of route 4-6 parallel those the tutor takes (9-11), except that the student provides an incorrect formula, which leads the tutor to generate a diagnostic hypothesis (5). The student relating his difficulty also helps the tutor to evaluate that hypothesis (6).

Decision point 10 is a flow determinant which asks about the past state of the interaction. If the student previously provided part of the knowledge base, then the tutor guides the student through that part (10, 11). If no knowledge was previously displayed, then the tutor provides a test of prerequisite knowledge (the formula for density, 15).

Operations 13 through 22 essentially deal with the tutor asking the student for the three required equations. The tutor is attempting to acquire cues to evaluate hypotheses that were generated (15, 19, 22). For example, an operation common to all interactions is asking for the definition of moles to determine whether that mathematical formula is part of the student's knowledge state (22). Within that same sequence, decision points 13, 14, 18, and 24 are required to account for the possible sequences with which the tutor deals with the formulae. Also, in this and other models, decision points deal with the correctness of a student's response (16, 23).

The next common operation involves the combination of two formulae and is followed by algebraic rearrangement (26). The subsequent decision point represents another type of condition. This condition concerns the past generation of an aptitude hypothesis (27). If the hypothesis has not been generated, the tutor breaks down the algebraic rearrangement into substeps and asks about one of these substeps (28, 29). Otherwise, the substep is skipped (27, 30). This decision point reflects another of the verbalized decision rules (Rule 6).

From operation 31 through 44, the variables of the rearranged equation (30) are substituted for their correct numerical values. Molecular weight is an important substitution since it is only implicitly provided and must be derived from the name or formula of the gas. This is dealt with in one of three ways. If the student cites that his own deficiency involved algebraic rearrangement, then the tutor provides the molecular weight (31-33). Otherwise, either the student provides the value (38) or the tutor asks for it and generates and evaluates a remediation hypothesis on the basis of the response (36, 37).

Two decision points suggest the importance of units for this task group. Decision point 39 is a conditional statement about the current state of the interaction, and the other point (40) concerns the state of the exam problem.
The tutor asks the student how to convert the units of the variable requiring conversion, depending upon the state of those decision points (39-43). Appropriate hypotheses are generated and evaluated (41-44). In the next few operations, the tutor deals with generating and cancelling the units for the R constant (45-50). Then either the equation is solved mathematically and the interaction ends, or those steps are skipped (51-54). The connector was used between operations 47 and 51 and 53 because no condition was evident that explained why the tutor proceeded one way during one interaction and another way during a different interaction.

The pathway 55-65 is linear because it represents only one interaction, in which the answer obtained was incorrect and the tutor was forced to diagnose the problem. The problem was that the units didn't correspond with the alternatives provided and the answer obtained required conversion. Tutor 1 also obtains an incorrect answer, but notices quite quickly that the units didn't match and that the obtained answer required conversion.

The two tutors have significantly different models for Task Group 2 questions. Tutor 2 asks the students for their difficulties; Tutor 1 never does. Again, Tutor 2 asks many more questions than Tutor 1. Tutor 1 teaches a solution quite differently from Tutor 2, but switches during an interaction to Tutor 2's procedure based upon cues acquired during the interaction. Here again is an example of a verbalized decision rule (Rule 4, Table 4.9).

Tutor's Conception of Tutoring

Most of the data used as evidence to support a diagnostic conception of tutoring have been derived from actual interactions between tutors and students. To gain additional evidence from a different perspective, each tutor was asked rather directly about their preparations for tutoring and their perceptions of their task.

The purpose of the preparation question was to determine whether the tutors used any areas of unclear or incomplete instruction, and their own discovered misconceptions, as sources of possible diagnostic hypotheses. Both tutors reported that their preparations generally included a review of all instructional materials and questions, but to different degrees. One tutor did suggest that several series of study and exam questions were worked through to determine if there were ones that were confusing and to try to remember where the students' problems would be.

When asked how they viewed their task, each tutor interpreted the question in a somewhat different way. Tutor 2 communicated that her task was a function of the student and that there were three types of students: (1) those who want a one-shot interaction (to learn why a specific answer is incorrect or how to solve a specific problem); (2) those who want to interact for longer periods of time, discuss several problems, and learn patterns of solutions (the Task Group 3 interactions for this tutor are examples of this type); and (3) those who don't want to come for help at all.

Tutor 1 had a different conception. Ideally, the tutor expected the student to initially explain their present view of the solution, and the tutor's task was to determine which parameter was ignored or remained unclear. He felt that most of the solution should have been understood by the student before the interaction, and the tutor should only have to explain one little "hang up" concerning a chemistry concept. Thus the tutor would interact with many students, each for a relatively short period of time.
For real interactions, he reported that the students never attempt the problem, so their initial presentation is by-passed. The tutor has to lecture, reexplaining the material presented instructionally, and deal with mathematical rules and procedures in addition to chemistry. Both tutors reported that the amount of time per interaction decreases as they approach the end of an exam period (about two weeks long).

Summary

Three types of evidence which support a diagnostic model of tutoring were reported and discussed. First, tutors reported using decision rules which help them diagnose and remediate student difficulties. Confirming evidence of the use of many of these rules was obtained from observing the interactions. Models of tutor interactive behavior and thoughts were presented and compared. These models depict how the tutors integrated the diagnostic and remediation processes for different task groups. Finally, the tutors' own narrative conceptions of their tutorial preparation and interactive tutoring provided an additional perspective from which some confirming evidence was discovered.

Tutor Effectiveness

All of the findings presented above relate to the process of tutoring. The purpose of this section is to relate this process data to outcome measures of tutor effectiveness. Essentially, an attempt was made to determine how what a tutor does or thinks during an interaction influences the students' chemistry competence. These measures of tutorial outcomes might also confirm that these tutors, initially identified as effective instructors, were indeed so.

Effectiveness Criteria

The data from which effectiveness criteria were developed were collected in part from the tutorial interactions. Most evidence, however, resulted from post-interaction student interviews. Three significant types of criteria were used to assess tutor effectiveness. These included (1) the diagnostic abilities of the tutor and the student, (2) the students' solutions of practice problems, and (3) the students' opinions of the tutors.

Diagnostic abilities. The match between the researcher's diagnosis of a student's pre-tutorial deficiency and the tutor's diagnosis, the student's self-diagnosis, and the student's practice problem errors were the three measures used to operationalize this criterion. For each interaction, the researcher's diagnosis was carefully documented and based only upon the interaction protocol and the student's report of their initial attempt at solving the study or exam problem prior to the interaction.

The first match, between the researcher's diagnosis and the tutor's diagnosis, was used as a measure of the tutor's ability to correctly diagnose student deficiencies. Three different types of tutor hypotheses were each compared to the researcher's standard; any or all of these might be used to assess the tutor's diagnostic ability. In some interactions, it was apparent that the tutor believed that a particular hypothesis was "the diagnosis" of the student's deficiency; this was the first type of hypothesis compared. Some interactions contained one or more diagnostic hypotheses, but it was not apparent which of these was "the diagnosis." These hypotheses served as a second basis of comparison. Other interactions contained only remediation type hypotheses.
Since some of these remediation hypotheses (concerning what the student doesn't know or what the student may or may not know) may have been misclassified (and in reality represent diagnostic hypotheses), these were also compared to the researcher's diagnosis. For any particular interaction, a match was either present (1) or not (0) between the researcher's standard and the best estimate of the tutor's diagnosis (a diagnosis was used if present; if this was unavailable, diagnostic hypotheses were used if present; if these were unavailable, remediation hypotheses were used). Each diagnosis consists of those specific operations or propositions referenced to a knowledge base. Three types of criteria determined that a match had occurred: (1) the tutor's and researcher's diagnoses matched perfectly, (2) the tutor's diagnosis contained at least two-thirds of the operations or propositions of the researcher's diagnosis, or (3) the tutor's diagnosis contained no more than two operations or propositions that were not included in the researcher's diagnosis. These same criteria were used with the other comparisons discussed below.

A second comparison was made between the student's diagnosis of his/her own pre-tutorial deficiencies (as reported after the interaction) and the researcher's diagnosis. This measure was believed to possibly indicate the student's understanding of his/her past deficiencies and what he/she had learned during the interaction.

A final comparison was made between the errors the student made when solving the practice problems and the researcher's diagnosis of his/her original deficiencies. This was used to indicate whether or not the deficiencies were remediated (e.g., was the student experiencing the same difficulty both before and after the interaction).

Practice problem solution. Three measures of effectiveness are also reported here. An imprecise measure used was whether the student obtained the correct answer to the practice problem. There were several problems with this measure. In several cases, students had obtained the correct answer but made errors in their thinking about chemistry which suggested that deficiencies were still present (e.g., the student guessed correctly). Occasionally, students failed to obtain the correct solution because of an error made in a part of the practice problem that was not part of the original study or exam problem (e.g., the student cannot convert specific units in the practice problem, but no unit conversion was necessary in the original problem).

Another measure was developed to rectify these problems of the correct-incorrect measure. It relies on the number and type of errors made during the solution of the practice problem. The measure was: either the student made no errors at all or made no errors that were preventable by the tutor during the interaction. There were three types of errors that a student could make: (1) the student makes no errors in the solution of the problem, (2) the student makes errors that could not have been remediated by the tutor (e.g., the student adds incorrectly), and (3) the student makes errors that should have been remediated by the tutor (e.g., the student didn't know the relationship between volume and pressure and this relationship was required for solution of the original study or exam problem). It was thought that error types 1 and 3 were the best measures of tutor effectiveness.
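The diagnostic-match scoring described above can be restated compactly. The following is only a minimal sketch, not the scoring instrument actually used in this study; the function names and operation labels are invented, and a diagnosis is modeled simply as a set of operation or proposition identifiers referenced to a knowledge base representation.

    def best_estimate(sole_dx, diagnostic_hyps, remediation_hyps):
        # Choose the tutor diagnosis to compare, in the order of preference
        # described in the text: a stated diagnosis, then diagnostic
        # hypotheses, then remediation hypotheses.
        for candidate in (sole_dx, diagnostic_hyps, remediation_hyps):
            if candidate:
                return set(candidate)
        return set()

    def diagnostic_match(tutor_dx, researcher_dx):
        # Return 1 (match) or 0 (no match) by applying, disjunctively,
        # the three criteria stated in the text.
        if not tutor_dx:
            return 0
        perfect = tutor_dx == researcher_dx
        two_thirds = len(tutor_dx & researcher_dx) >= (2 / 3) * len(researcher_dx)
        few_extras = len(tutor_dx - researcher_dx) <= 2
        return 1 if (perfect or two_thirds or few_extras) else 0

    # Hypothetical illustration: the researcher's diagnosis references five
    # operations; the tutor's diagnostic hypotheses cover four of them.
    researcher = {"A1", "A2", "A3", "A4", "A5"}
    tutor = best_estimate(None, ["A1", "A2", "A3", "A4"], None)
    print(diagnostic_match(tutor, researcher))   # prints 1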
The final measure of this group is more an indicator of congruity between the tutor's and student's approach to the problem than of tutor effectiveness. For each task group and tutor, one interaction was randomly chosen and the tutor's taught solution was compared to the student's solution of the practice problem.

Student opinion data. Students were asked for their thoughts or feelings about the tutor and the interaction. The first indicator of effectiveness was whether or not the student made only favorable comments. A second indicator was whether or not the student preferred any changes in the interaction; this may indicate areas of tutor ineffectiveness. Another indicator was whether or not everything was clear to the student (the actual wording of the question asked [was there anything unclear] may have biased the student's reaction). A fourth indicator used was whether or not the student would return to the tutor for additional help.

Effectiveness Results

A summary of the effectiveness findings for each tutor within each task group is presented in Table 4.12. To simplify the task of comprehending this numerical mass, only the significant features within each major criterial grouping are discussed below. Except for student practice error, all specific criteria were worded in such a way that, generally, higher cell numbers represent greater degrees of effectiveness.

[Table 4.12, Comparison of Tutor Diagnostic and Remediation Effectiveness by Task Group, and its footnotes are not legible in this copy; the same criteria, totaled across all task groups, appear in Table 4.13 below.]

Diagnostic match results. Since there were very few diagnostic hypotheses available, any of the hypotheses representing a tutor diagnosis was used to judge diagnostic ability (the specific criterion is Total Tutor Dx). Tutor 1 fails to diagnose correctly two out of three interactions no matter what task group is considered. However, Tutor 2 appears to diagnose student difficulties quite well except for the problems from Task Group 4. It should be noted that the solution to Task Group 3 problems requires the use of the knowledge base for Task Group 1.

The relationship between the tutor's diagnostic ability and the student's practice error is important to note since it provides some hints as to why the student was or was not helped. For all task groups except number 3, the one interaction that Tutor 1 did diagnose correctly was also the same interaction in which the student's deficiency failed to appear during the solution of the practice problem. For Task Group 3, Tutor 1, and for Task Group 2, Tutor 2, correctly identifying the student's deficiency didn't stop the student from maintaining the deficiency during the practice problem solution. This seems to suggest that the remediation aspect of tutoring may not have been adequate for those students. In Task Groups 1 and 3, where Tutor 2 is strong in diagnostic ability, the students do not repeat the same error that they had prior to the interaction.

Except for the last task group, the students interacting with either tutor tend to be able to label their own pre-tutorial deficiencies after the interaction. Apparently, for Tutor 1, Task Group 3, and in a few other isolated cases, being able to label their own deficiency did not prevent the students from maintaining the same deficiency.

Practice problem solution results. For Tutor 1, the majority of students for all task groups made errors in solving the practice problem that might have been corrected during the interaction. The students who interacted with Tutor 2 and who focused on Task Groups 1 and 3 made few errors, or if they made any, they were of a nature that the tutor could not have prevented (e.g., a mathematical error or using an incorrect molecular weight for a gas not dealt with during the interaction). For the other two task groups, the students of Tutor 2 did much worse. In several cases (Task Group 3, Tutor 1, and Task Group 4, Tutors 1 and 2), the students were obtaining the correct answer but still maintained some deficiency in their chemistry knowledge. The final criterion of this group shows that in every random comparison made, the student's solution operations (and their order) match those that the tutor presented during the interaction.

Student opinion data results.
These data show clearly that almost all students who were involved in taped interactions had only favorable comments about their respective tutor and would return to these same tutors for additional help. Several favorable comments were made about the algorithmic nature of the tutoring (e.g., "step by step . . . breaking each example down"), the tutor staying conceptually close to the task at hand (e.g., "doesn't run off on tangents"), the tutor's pleasing affect (e.g., "patient," "friendly"), and the tutor's ability to be understood (e.g., "speaks clearly," "explained it well"). Both tutors appeared to have some explanation problems with Task Group 2; most students thought at least one item was unclear.

For Task Groups 1 and 3, students who interacted with Tutor 1 believed his explanations to be clear yet maintained the same deficiency they had prior to the interaction (see student practice error). This suggests either that students can't tell when something is unclear to them and/or that students found everything clear but couldn't remember the explanation. For these same task groups, Tutor 2's students found everything clear and their deficiencies were remediated. However, the same deficiency that the students of Tutor 2 exhibited when solving a Task Group 2 practice problem was described by them as being unclearly explained during the interaction.

Effectiveness across all task groups. Table 4.13 summarizes the data for each tutor across all task groups. Tutor 2, in comparison to Tutor 1, seems better at diagnosing the student's deficiency correctly and at effecting student learning. In terms of student opinions, both tutors seem to do quite well.

Table 4.13. Comparison of Tutor Diagnostic and Remediation Effectiveness Across All Task Groups.

  Effectiveness Criteria                                    Tutor 1   Tutor 2
  Diagnostic match to      Tutor's sole Dx                   1/12a     2/11
  researcher's Dx b        Any tutor HD                      1/12      1/11
                           Any tutor HR- or HR?              2/12      5/11
                           Total tutor Dx                    4/12      8/11
                           Student Dx (after interaction)    7/12      7/11
                           Student practice error            9/12      3/11
  Practice problem         Either no error made or no
  solution                   preventable error made c        3/12      7/11
                           Correct answer                    5/12      5/11
                           Student-tutor solution
                             strategy match d                 4/4       4/4
  Student opinion data     Only favorable comments           9/11     10/11
                           Preferred no changes               6/9      7/11
                           Everything was clear             12/12     11/11
                           Would return                     12/12     11/11

  a This is the number of interactions for which data were available.
  b See Table 4.5, footnote b, for a key to these symbols.
  c See Table 4.5, footnote c, for an explanation of this criterion.
  d See Table 4.5, footnote d, for an explanation of this criterion.

Validity of Knowledge Base Representations

It has been suggested that the tutors maintain models of the knowledge the course was designed to develop and which underlies each task the student is asked to perform. It was further suggested that the generated hypotheses are derived from these models. Procedural knowledge representations (for the three mathematical tasks) and a propositional knowledge representation (for the verbal task) of these tutor models were developed from the instructional tapes and textbooks (these representations can be found in Appendix B, and Chapter III contains the details of their development). This section concerns the degree of identity between those developed representations and the tutor's own structuring of that knowledge for tutoring (the tutor's models).
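Before turning to those comparisons, it may help to picture what such a knowledge base representation could look like in concrete terms. The sketch below is purely illustrative and is not taken from Appendix B; the route labels, operation labels, and step wording are invented, even where they resemble labels mentioned later in the text.

    # Hypothetical encoding of a procedural knowledge base representation for
    # an ideal gas task: ordered operations grouped into alternative routes.
    knowledge_base = {
        "route_A": [
            ("A1",  "Write the ideal gas equation PV = nRT"),
            ("A1b", "Write down the given values and their units"),
            ("A2",  "Convert the given units to those required by R"),
            ("A3",  "Substitute the values and cancel units against R"),
            ("A4",  "Solve the equation for the unknown variable"),
        ],
        "route_B": [
            ("B1", "Rearrange PV = nRT for the unknown before substituting"),
            ("B2", "Convert units after rearranging"),
            ("B3", "Substitute, cancel units, and compute the answer"),
        ],
    }

    # A tutor hypothesis about a student's knowledge state can then be
    # referenced to specific operations, e.g. "the student cannot perform A2".
    hypothesis = {"route": "route_A", "deficient_operations": ["A2"]}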
There were three criteria used to examine this validity issue: (1) comparison of the operations, (2) comparison of the flow or order of executing the operations, and (3) comparison of the general approach to solving the study or exam question.

Comparison of Operations

Comparisons were made between those operations that were incorporated in the knowledge base representations and those operations that the tutors used during the interaction. There are three basic findings. Several alternative routes that a tutor might have used were incorporated into the procedural knowledge representations. Four of these were never used by either tutor (Knowledge Base I, Routes H and E; Knowledge Base II, Route B; Knowledge Base III, Route B). The existence of almost every hypothesized knowledge base representation step of those alternative routes that tutors did use was validated by interactive evidence. Two of the procedural representations and the propositional representation had to have three operations (Knowledge Base I, operation A1b; Knowledge Base II, operations A1b and O1b) and twelve propositional rules (Knowledge Base IV, Group F) added to them, respectively, in order to match more closely that which was dealt with during the tutorials. Most of the added rules could simply be derived from those rules already proposed.

Comparison of Flow

The orders of the operations that were originally proposed were based upon two criteria: (1) that they be logical and (2) that they be easy to diagram. It was realized early on that there were several flows that might logically fit a particular knowledge base. Thus, it was expected that the flow for a specific tutor might vary from that proposed. The order of the operations for the procedural representations was followed quite closely by Tutor 2. However, Tutor 1 reordered all three representations in rather insignificant ways (most of the essential flow of the procedures remained valid). Usually, the changes involved switching the order of one or two operations or condensing several linked operations into one. For example, on two occasions, Tutor 1 converted the units of the temperature value before, rather than after, setting up the equation to be solved (as proposed).

Heuristic vs. Algorithmic Problem Solving

The proposed procedural knowledge representations are actually algorithms, and this implies that the tutor teaches in a like manner. The approach the tutor uses could be algorithmic, heuristic, or both. An algorithmic approach is characterized by (1) a procedure that, when executed, obtains the correct answer every time; (2) a set of small steps linked together to form larger pathways; and (3) a set of rules that are accurate and reliable in all circumstances that a particular knowledge representation was designed to deal with. Conversely, a heuristic approach is characterized by (1) a procedure that is less structured, involves haphazard use of operations, is more discovery-oriented, and does not always yield a correct answer each time; (2) steps that tend to be larger units; and, most importantly, (3) a set of rules that are not always reliable and accurate in all circumstances that the procedure could be used for.

Evidence exists which suggests that both approaches are used by tutors. As mentioned above, for the most part, both tutors did follow the hypothesized knowledge representations during their tutorials.
Also, five of the students who interacted with Tutor 2 and one of the students who interacted with Tutor 1 voluntarily reported that the "systematic," "breaks everything down," "step by step" approach of the tutors helped them to learn.

Some evidence also indicated that both tutors teach heuristic rules in one of three ways: (1) implicitly, by modeling; (2) explicitly, by modeling and verbalizing actions; and (3) explicitly and rationally, by modeling, verbalizing actions, and stating the rationale for using the rule. Examples of these three modes of tutoring are provided in Table 4.14. Two heuristic rules (Write Givens and Use Units), defined in Table 4.15, were taught during some of the interactions. Table 4.15 compares the teaching of these rules across tutors and the mathematical task groups for which these rules are potentially useful. Each tutor tends to teach these heuristics in selective ways. Tutor 1 never teaches the units heuristic. Neither tutor teaches the units heuristic for Task Group 1 problems. This is not surprising since the units are not a crucial component of that procedural knowledge base. Tutor 2 teaches the givens heuristic for Task Group 2 problems but Tutor 1 does not. In this case, because of the nature of the solution, the rule might be quite useful, although the number of units makes it difficult to apply. Most of the heuristic teaching tended to occur at the implicit level.

Table 4.14. Prevalent Ways Tutors Teach Heuristics for Problem Solving.

  Implicit -- Definition: The tutor only models the use of the heuristic.
    Example indicator: The tutor reads the study problem and draws a diagram
    with most of the given information included. (2-12a/1: 38-45)a

  Explicit -- Definition: The tutor models the heuristic while verbalizing
    what is being done.
    Example indicator: I: "The first thing I always suggest would be to write
    down everything that you know." The tutor verbalizes and writes down the
    given information. (2-20/1: 40/2: 16-17, 20)

  Rationally Explicit -- Definition: The tutor models the heuristic,
    verbalizes what is being done, and states a rationale for its use.
    Example indicator: The tutor writes down the units and verbalizes each as
    she does. (2-16/8: 8, 33/9: 5) I: "The reason I always do it, always with
    units, I can say . . . [the tutor mathematically cancels out like pairs of
    units]. We are left with grams per liter, so I can be pretty sure my
    answer is right because I got the right kind of units." (2-16/10: 26-35)

  aSee Table 4.1, footnote c, for the key to these coded references. I refers
   to the tutor-instructor during the interaction.

Table 4.15. Comparison of the Ways Tutors Teach Two Problem-Solving Heuristics.

  The table tallies, for each tutor and for Task Groups 1-3, how often the
  Write Givens heuristica and the Use Units heuristicb were taught at the
  implicit, explicit, and rationally explicit levels.c The alignment of the
  individual cell counts is not recoverable in this copy.

  aThe complete heuristic is: Write all the given information down and use it
   as an aid in searching memory for the appropriate procedure and formulae
   to use.
  bThe complete heuristic is: Translate all variables in a formula to their
   appropriate values and units and see if the units can be mathematically
   cancelled to produce the correct units. This is a means of heuristically
   checking whether the solution is correct.
  cThe cell numbers refer to the number of times the tutor used or attempted
   to teach the respective heuristic.
  dThere were seven total Task Group 3, Tutor 2 interactions.
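As a worked illustration of the Use Units heuristic defined in footnote b of Table 4.15 (the numbers below are invented and are not drawn from the transcripts), consider checking a gas density obtained from the ideal gas law; the pressure, temperature, and molar-mass units should cancel to leave grams per liter, the "right kind of units" the tutor refers to in Table 4.14:

    d = \frac{PM}{RT}
      = \frac{(1\ \mathrm{atm})(44\ \mathrm{g\,mol^{-1}})}
             {(0.0821\ \mathrm{L\,atm\,mol^{-1}\,K^{-1}})(273\ \mathrm{K})}
      \approx 1.96\ \mathrm{g\,L^{-1}}

Here atm cancels against atm, K against K, and mol^{-1} against mol^{-1}, so only grams per liter remain; obtaining any other combination of units would signal an error in the setup.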
Tutor 1 teaches the givens heuristic at the rationally explicit level, probably because of the reported importance he attached to drawing a diagram which includes writing the givens.

In summary, some proposed alternative routes were never used by the tutors, yet most of the operations of the other routes were. A small number of operations and rules were necessary additions to the representations. The proposed order of executing the operations was followed by both tutors with a few minor exceptions. Finally, it was discovered that both tutors teach both heuristically and algorithmically.

Method Validity

The stimulated recall (SR) technique suffers from a particular weakness: the questionable accuracy with which the participants report their prior thoughts. Although not systematically collected, some data do exist which relate to this issue. Three measures of method validity are discussed: (1) tutor overt event recall, (2) tutor ability to distinguish interactive and post-interactive thoughts, and (3) ambiguities in the data.

Occasionally, during stimulated recall, the tutor would voluntarily recall an interactive event not yet heard on the interaction tape. The accuracy of these responses is compared for each tutor in Table 4.16. Although both tutors exhibit some recall inaccuracy, the frequency of accurate recall is higher.

Table 4.16. Accuracy of Tutor Recall of Overt Interaction Events.

                              Tutor 1   Tutor 2
  Accurate Event Recall         9a        6
  Inaccurate Event Recall       4         2

  aThe cell numbers refer to the number of times the tutor voluntarily
   predicted an event not yet heard on the interaction tape during
   stimulated recall.

The second indicator dealt with the tutor's ability to distinguish recalled interactive thoughts (then) and post-interactive explanations (now). For Tutor 1, the frequency of success in differentiating between recalled "then" thoughts and generated "now" thoughts was greater than that of failing to distinguish. Tutor 2 doesn't appear to be able to differentiate as well (see Table 4.17).

Data ambiguity is the final measure of method validity. About a half-dozen instances of ambiguities were evident, most of which involved Tutor 2. The ambiguities concerned tutor expressions of uncertainty about prior thoughts, expressions of uncertainty about when a particular thought occurred, and expressions of ambiguous meaning.

Two additional findings have resulted from anecdotal evidence. Generally, Tutor 1 exhibits more confidence in his statements and less post-hoc analysis of the interaction during stimulated recall (SR) sessions than Tutor 2. An unplanned benefit derived from the use of the SR technique was also observed: on several occasions, both tutors communicated an increased awareness of their own tutorial behaviors and a recognition of areas for improvement. An example of this can be found in Appendix F, interaction 1-15, inference number 13.

Table 4.17. Tutor Ability to Distinguish Between Prior Interactive Thoughts and Thinking During Stimulated Recall.

                           Tutor 1   Tutor 2   Example Indicator
  Does Distinguish           7a        3       R (SR)b: "Were there any particular
                                               thoughts or reactions to the
                                               student's response at that time?"
                                               I (SR): "Now, yes, then no."
                                               (1-20/3: 22-25)c
  Fails to Distinguish       1         6       I (SR): "See, it's hard to tell
                                               whether it's something I'm looking
                                               at now or then." (2-12b/21: 32-33)

  aThe cell numbers refer to the number of times the tutors voluntarily either
   distinguished or failed to distinguish between what they were thinking in
   the prior interaction and what they were thinking during the stimulated
   recall session.
  bR (SR) refers to the researcher during stimulated recall. I (SR) refers to
   the tutor-instructor during stimulated recall.
  cSee Table 4.1, footnote c, for the key to these coded references.

Chapter Summary

The results of this chapter were presented in six segments. First, a summary flow chart was presented and discussed as an example of a basic result of protocol analysis for each interaction.

The second section dealt with the constructs of the diagnostic model of tutoring. Eight primary and secondary generic SR statements serving to indicate the generation of hypotheses were presented. Four types of statements served to operationalize hypothesis evaluation. Evidence was cited supporting the classification of hypotheses into diagnostic, remediation, and aptitude types. Student verbal and nonverbal communication, tutor memory, and the target question and answer obtained during the tutorial were identified as sources for cues to be interpreted as either supporting or refuting a generated hypothesis. Cue elicitation was found to be the rationale most often cited by tutors for asking interactive questions. Five possible relationships were shown to exist between cue acquisition and cue interpretation. This section ended with the presentation and explanation of an example cue x hypothesis matrix. One matrix per interaction was an additional basic result of the protocol analysis.

It was found that, on the average, these tutors generated five total hypotheses, of which one was diagnostic, for interactions of about four to five minutes in length. An aptitude hypothesis tended to be present for Tutor 1 but not for Tutor 2. The majority of the hypotheses were specific remediation types. One-cue, multi-cue, and unevaluated hypothesis types occurred in the ratio of 1:2:2. Six total cues were acquired per interaction; about half of these were elicited by Tutor 2 while less than a third were elicited by Tutor 1.

In the next section, three types of evidence were presented which support a diagnostic conception of tutoring. Tutors tended to use diagnostic and treatment decision rules which served to guide their interactive behaviors. Two (of a possible eight) general tutorial models depicting tutor behavior and diagnostic mental processing were displayed and explained. The overlap between these models and the verbalized decision rules was discussed. Summaries of narrative accounts of how the tutors perceive their tutorial task were discussed. These perspectives provided some evidence that the tutors maintain some implicit notions about tutorial diagnosis.

The fourth section of this chapter dealt with three criteria of tutor effectiveness: the diagnostic ability of the tutor, the student's ability to solve a post-interactive practice problem, and the student's opinion of the tutor. The diagnostic ability of the tutors varied across task groups. Tutor 2 seems to lack diagnostic acumen with problems from Task Group 4 only. Tutor 1 maintained few correct diagnoses for any of the task groups. Only students who interacted with Tutor 2 over problems from Task Groups 1 and 3 did well on the practice problems. Both tutors, across all task groups, were rated favorably by almost every student.
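The cue x hypothesis matrix summarized above can be pictured concretely. The sketch below shows one hypothetical form such a matrix might take for a single interaction; the cue and hypothesis labels and the entries are invented and do not correspond to any interaction in the study.

    # Hypothetical cue x hypothesis matrix for one interaction. Rows follow
    # the acquired cues, columns follow the generated hypotheses; +1 means the
    # interpreted cue supports the hypothesis, -1 that it refutes it, and 0
    # that the cue was not related to it. A hypothesis whose column is all
    # zeros would remain unevaluated.
    hypotheses = [
        "H1 (diagnostic): cannot convert pressure units",
        "H2 (remediation): knows the form of PV = nRT",
        "H3 (aptitude): generally weak in algebra",
    ]
    cues = [
        "C1 (volunteered): written attempt leaves torr unconverted",
        "C2 (elicited): correctly states the units required by R",
        "C3 (elicited): silence when asked to isolate V",
    ]
    matrix = [
        [+1,  0,  0],   # C1 supports H1
        [-1, +1,  0],   # C2 refutes H1 and supports H2
        [ 0,  0, +1],   # C3 supports H3
    ]

    for cue, row in zip(cues, matrix):
        for hypothesis, entry in zip(hypotheses, row):
            if entry:
                relation = "supports" if entry > 0 else "refutes"
                print(cue, relation, hypothesis)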
The relatively high congruence between the proposed content and order of the operations which formed the knowledge base representations and those that were actually used by the tutors was summarized in section five. It was also found that the tutors' general approach to problem solving could be characterized as partly heuristic and partly algorithmic.

In the final section, relatively soft data were presented which suggest that the tutors were not always able to accurately recall interactive events, nor to distinguish recalled interactive thoughts from post-interactive explanations. Also, some instances of data ambiguity were discussed and some anecdotal evidence presented.

CHAPTER V

CONCLUSIONS, IMPLICATIONS, AND RECOMMENDATIONS

Overview

This chapter begins with a brief overview of the study. The conclusions are then presented in the form of answers to the previously posited research questions, and accompanying issues are raised and discussed. Finally, the chapter ends with a listing of the implications and recommendations from a practical and theoretical stance.

Due to the rapid adoption of systematized and individualized instruction among college science instructors, tutoring is becoming a significant focal point of instructional interaction. Most of the research on tutoring within this context is designed experimentally and has identified effective tutor behaviors and tutorial variables. Some progress has been made in training tutors to use these skilled behaviors. In addition to interacting and behaving, the tutor must make decisions and solve problems involving the tutorial. However, there appears to be a lack of training programs designed to help tutors develop and improve these important mental skills. Unfortunately, until current models of tutoring are modified on the basis of empirical evidence, the development of these training programs can only be grounded in inadequate theory and can only proceed via trial and error.

To better understand competent tutorial performance, this study sought to assess the applicability of a model of medical inquiry in explaining and analyzing tutoring. A basic assumption underlying the adaptation of this model was that thoughts lead to action. Therefore, understanding competent tutoring required an analysis of tutor mental processing and behavior. This research was guided by a conception of the tutor as a clinical information-processor. The tutor is a clinician in the sense that student behavior is informally observed, expert judgments of what the student knows and doesn't know are rendered, and remediation is provided on the basis of these judgments. The tutor, as an information-processor, attends to, perceives, modifies, translates, stores, recalls, acts on, and in general, mentally processes tutorial information.

Procedures and methods, appropriate to a clinical information-processing perspective and useful for analyzing tutor thoughts and actions, were developed. Within a self-paced, tape-tutorial freshman chemistry course, two experienced tutors were tape recorded and observed while interacting individually with twenty-four students. The interactions focused upon four types of practice or exam, ideal gas problems (tasks) for which the students sought help. The knowledge required to successfully solve these problems and that the course was designed to develop was represented either as procedural flow charts or as lists of propositional rules.
These knowledge represen- tations were used to characterize: (l) the tutor's hypotheses of what the student knew or didn't know during an interaction, and (2) the 183 chemistry knowledge the tutor or student deals with explicitly. During a post-tutorial stimulated recall session which was tape recorded, the original interaction was replayed in order to stimu- late the tutors' memory and help them recall previous interactive thoughts and feelings. The interaction and stimulated recall tapes were transcribed and inferences, based upon the medical inquiry model, were made concerning the tutor's mental processing. Specific instances of the constructs of the medical model and indicators of method validity were identified and counted in a second pass through the transcripts. Each interaction and the appropriate set of infer- ences were summarized in the form of a referenced flow chart and a cue x hypothesis matrix. Finally, these intermediate summaries were combined for each type of mask and for each tutor into eight models of tutor behavior and diagnostic mental processing. Conclusions The conclusions of this study are presented in relation to the four posited research questions (see pages l2-l3). Accompanying the set of conclusions for each question, the related theoretical and procedural issues are elaborated and suggestions for their resolu- tions are discussed. Question l This question is concerned with the intellectual strategy that the tutor uses to assist students who seek help because of a learning deficiency. 184 Conclusions. l. The information-processing, diagnostic model of tutoring (presented in Chapter II) does have some power in describing how the two tutors of the present study functioned. It has also been useful in suggesting a set of constructs and relationships that have been shown to be a fruitful basis for generating important research ques- tions and areas of future inquiries. a. The two tutors studied did attempt to determine what the student did and didn't know. They did this by hypothesizing and diagnosing student knowledge states. The occurrence of the generation of hypotheses is supported by instances of eight types of generic statements tutors tend to make during stimulated recall. b. During this diagnostic process, the tutors generated aptitude, diagnostic, and remediation hypotheses. Diagnostic hypotheses are estimations of the student's pre-tutorial defi- ciency and can be categorized into major and minor difficulties. Remediation hypotheses concerned the tutor's conception of what the student knows and doesn't know as the interaction evolves and were reported with varying degrees of explicitness. Estimations of students' general intellect were called aptitude hypotheses. The tutors conceptualized the student's knowledge state in terms of what the student knew, understood, remembered, and rotely learned. 0n the basis of this evidence, the diagnostic tutorial model requires modification to include the different types of hypotheses possible and the different ways of stating these hypotheses with respect to a given item of knowledge (e.g., the 185 tutor believes that a student can't remember some items of knowledge vs. a student didn't understand an item of knowledge). c. During a tutorial, these hypotheses are evaluated by determining their logical fit to a set of interpreted cues. The occurrence of rejection or acceptance of hypotheses was supported by instances of four different types of generic statements. d. 
In terms of the number per interaction of generated hypotheses and acquired cues, greater variations were found across task groups for each tutor than between the tutors for any one task group. Thus, the components of diagnostic reasoning tended to be task specific. e. The tutors of the present study were found to generate about five hypotheses during a typical five-minute interaction involving the selected ideal gas units of the freshman chemistry course. Typically, one of these hypotheses was a diagnostic type; most were remediation hypotheses. Occasionally an aptitude hypothesis was generated; the frequency of its occurrence varied as a function of the tutor. 2. One valid way to represent these hypotheses is to develop a rather formal representation of the knowledge the course was designed to develop. Then, the particular items of knowledge that the tutor hypothesizes about can be referenced to these formal models. That is, these hypotheses can be defined in relation to the intended knowledge base to be learned. a. The knowledge to be learned can be characterized as lists of propositional rules or as steps in an algorithmic 186 procedure. In this way, a hypothesis of the student's state of knowledge can involve an entire list (or procedure) or simply one rule or step. b. The particular knowledge base representation under- lying an instructional task determines to a large extent the nature of the hypotheses generated. The tutors' familiarity and experience with the knowledge base determine their diagnostic ability. Thus, diagnostic ability is a function of the particu- lar knowledge base being dealt with in a tutorial and it is diffi- cult to judge the tutors' ability to acquire and interpret cues and generate and evaluate hypotheses without reference to a spe- cific knowledge domain. The differentiation between hypothesis types was made on the basis of information volunteered by the tutors. Now that some evi- dence has been shown to support these types, future studies of the diagnostic aspect of tutoring might probe the tutor's meanings more carefully during stimulated recall to determine more precisely (l) which type of hypothesis was being generated during the interac- tion and (2) what were the content boundaries of the hypothesis (e.g., when the tutor suggests that the student didn't grasp hybridization, one might ask if the tutor believed that was the student's "problem" and what was meant by hybridization or what about hybridization didn't the student understand). Originally, hypotheses were believed to be tutor assumptions of what students do and don't know. However, the tutors used terms like understanding, rote learning, and remembering to describe their 187 hypotheses about the students' state of knowledge. Therefore, the tutors seem to characterize the state of a given item of knowledge in a qualitatively more complex manner than was originally conceptualized. This finding adds greater detail to the diagnostic model and can be adequately explained using a theory of information processing. It seems more appropriate to view an item of knowledge in one of four states with respect to the student. These states depend upon whether the students meaningfully associated this item with other items and whether they store and can retrieve it from long-term memory (LTM). The four states for an item of knowledge are: l. It is understood and remembered (the item is meaningfully associated and stored in LTM), 2. 
It is neither understood nor remembered (the item is not meaningfully associated nor stored in LTM), 3. It is understood but not remembered (the item is meaning- fully associated at some initial time but not stored or available in LTM), and 4. It is remembered but not understood (the item is rotely learned by being stored in LTM as an isolated fact). Therefore, when tutors suggest during stimulated recall, what they thought a student knows or understands, additional probing may help determine which, if any, of these four states they are referring to and what their basis was (what cues, if any, were used?) for arriving at this judgment. The notion of a hypothesis must be tied to concrete referents, if it is to have any validity as a concept. In translating the medi- cal inquiry model to a tutorial situation, two early questions arose: 188 In the medical profession, diagnostic hypotheses represent specific describable disease entities of bodily malfunctions, but what con- stitutes diagnoses for students learning freshman chemistry and how can these diagnostic hypotheses be described? These were answered by developing accurate representations of the knowledge the students are asked to learn (the "healthy" knowledge state) and defining hypotheses in terms of the number of components from these representations that a student has or needs to learn. Two critical related issues were raised: What are the logical boundaries between items of knowledge and what size chunk ought each item be? For the present research, three heuristics (see page 79) were used to deal with these issues. These heuristics concerned the degree of significance of the operations or rules for chemistry students. The heuristics were found to be sufficient for developing items of knowledge (in the form of operations and rules) which, for the most part, adequately described the tutors' hypotheses. These issues are currently being debated among cognitive psychologists. These knowledge representations can stimulate additional research areas. Once plausible diagnoses are specifically defined in terms of some developed knowledge representation, then such questions as: which diagnostic hypotheses are generated most, what deficien- cies are most prevalent among students, and what generated hypothe- ses and student deficiencies are common across content units become questions for which answers can be sought. Several methodological issues were uncovered during the data analysis and should be mentioned here. Developing operational 189 definitions of hypothesis generation and evaluation required certain inferences and assumptions to be made which were not empirically grounded. Therefore, there may be errors of commission and omission in observing these processes. Developing additional indicators of these processes, asking more clarifying questions during stimulated recall, and documenting instances for which the assumptions are invalid may help to increase the accuracy of the number and nature of the hypotheses generated and the evaluations of those hypotheses. The precise locations of hypothesis generation and evaluation were difficult to determine. For example, hypothesis generation was assumed to have occurred at the first instance in which it was men- tioned. However, it sometimes may have occurred before that point, and it was not evident previously because the tutor may have forgotten that it had occurred or may have not believed it was important. 
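The four states listed above amount to the cross of two binary attributes and can be restated compactly; the sketch below only re-expresses that classification, and the function and variable names are invented.

    # The four states of an item of knowledge, as the cross of two attributes:
    # whether the item is meaningfully associated with other items
    # (understood) and whether it is stored in and retrievable from long-term
    # memory (remembered).
    def knowledge_state(understood: bool, remembered: bool) -> str:
        if understood and remembered:
            return "understood and remembered"
        if understood:
            return "understood but not remembered"
        if remembered:
            return "remembered but not understood (rotely learned)"
        return "neither understood nor remembered"

    for understood in (True, False):
        for remembered in (True, False):
            print(understood, remembered, "->", knowledge_state(understood, remembered))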
Having the tutor introspect during tutoring, asking more "when" types of questions during stimulated recall, and developing units of analysis which express sensible chunks of thoughts or actions may help determine when, during the interaction, these processes occurred.

About a fifth of the hypotheses were represented as having been generated and evaluated at the same time, usually after a single cue was interpreted (one-cue hypotheses). This may have been an artifact of the methodology, since the same item of evidence was used to indicate both the generation and evaluation of hypotheses. Determining the precise location for these processes may establish the existence of a one-cue hypothesis.

Another issue concerns how unevaluated hypotheses are dealt with. Usually two of a total of five hypotheses generated during an interaction were not shown to be evaluated. This raises the question of what happens to them. There can be several explanations. The methodology may not have been sensitive nor accurate enough to capture when these hypotheses may have, in fact, been evaluated. It is also plausible that these hypotheses are simply lost from short-term memory during the interaction and are never evaluated. Another alternative explanation is that these nonevaluated hypotheses remain in short-term memory until the interaction is completed. This would overwhelm the 7 ± 2 chunk information capacity of short-term memory unless one assumes that each generated hypothesis and its evaluative status and associated set of cues is one chunk. This then would allow tutors to carry about seven hypotheses in their head. Elstein et al. (1976) have determined that physicians hold about five hypotheses in their mind at the same time when reasoning diagnostically.

Question 2

The second question deals with how the tutor gathers and deals with environmental information during the tutorial interaction.

Conclusions. 1. Some information (cues) is attended to and interpreted by the tutors. These interpretations involve a judgment of whether a cue supports or refutes a particular hypothesis. Hypothesis evaluation is an overall judgment of the logical fit between a set of cues and a given generated hypothesis.
Nonverbal behavior as a source of cues is interesting for two reasons. First, very few visual cues were reported to be used by the tutors. This may be due to poor recall since there was no visual stimulus component for the tapes. However, Clark and Peterson (l976) 192 reported that it was rare for teachers to mention this cue type. Second, the lack of behavior (silence) is also a cue for the tutors, especially if the student has already established a pattern of verbal behavior. Again, more directed probes during stimulated recall may help in understanding the concept of a cue. For example, when tutors report the generation or evaluation of a hypothesis, they could be asked if there was anything in particular that was occurring either in the environment or in their mind that made them think that, then. Simulated case and policy-capturing studies may also help to clarify the nature of a cue. The process of a cue interpretation requires much additional analysis. Nothing is known about how tutors weigh each cue's impor- tance in rejecting or accepting hypotheses. Also, the judgment rules for combining all the cues in evaluating a hypothesis (e.g., are they simply added together) have yet to be determined. Question 3 This question involved discovery of some of the kinds of rules tutors use during a tutorial. Originally, the types of rules presented with this question served as examples of what might be found. Only some of those kinds of rules were discovered. Conclusions. 1. Two types of rules were used by the tutors during their tutorials. Some rules helped them to diagnose student deficiencies and to determine the knowledge state of the student. Other rules 193 helped the tutors make treatment decisions like when to provide prac- tice or which of several possible solutions to teach. 2. Some of these rules are explicitly verbalized by the tutor and others are only implicitly demonstrated in their behaviors. There are task-specific rules and rules that are more generally applicable. The tutorial interaction models developed here are somewhat limited. First, each can potentially account for only those particu- lar interactions involving a specific tutor and a specific task group. Therefore, the scope of applicability of these models is narrow. They are for the most part based upon only three Specific interactions, and it remains uncertain whether these models can accurately predict actual tutorial performance and meet even the weak form of Turing's test (do the operations chosen by the model match those chosen by the tutor; see Newell & Simon, l959). However, these models do account for the interactions they are derived from with a reasonable degree of economy. Also, these models are not totally reflective of the amount and detail of the diagnostic reasoning that occurred because of the development process used. Rather than eliminating unlike diagnostic operations derived from two interactions, the inclusion of both unlike processes would have enhanced the richness of the models. 194 Question 4 This final question refers to the accuracy of the tutor's interpretations of the student's state of knowledge and to the effec- tiveness of the tutor in producing learning. Conclusions. l. The tutor's ability to diagnose student deficiencies tends to be task specific. Yet even if a tutor does diagnose correctly, the student may still fail to learn. Thus, in some cases, the deficien- cies the student maintains after the interaction can be traced to tutor errors in diagnosis or treatment or both. 2. 
Judgments of tutor effectiveness will be in error if these judgments are based solely upon whether a student obtains the correct answer to a practice problem. Sometimes, students who obtain the cor- rect response demonstrate other deficiencies which suggest the tutor was ineffective. Other times, the student might be wrong, but the deficiencies may not have been preventable by any tutor. 3. These tutors tended to be well liked and appreciated by the students although the ability of some of these students to solve practice problems was still impaired. Determining whether these tutors were effective is not an easy task since it depends upon how one measures effectiveness. Clearly, in an objective and absolute sense, these tutors were not as effec- tive as might have been expected from anecdotal evidence. Relative to their peers, these tutors may be the best. Unfortunately, the data for deciding this issue are unavailable. 195 This notion of effectiveness raises the issue of the tutor's responsibility. What level of learning ought to be expected from a student leaving a tutorial? Ought the student to have understood the solution? Ought the student to have remembered the solution? Ought the student to be satisfied with the tutor and is this sufficient? No measure was taken of how the student felt about chemistry after the tutorial experience. Is this a measure of effectiveness? Not all tasks could be used to accurately assess tutor diag- nostic skills and effectiveness. The verbal task group is somewhat anomalous because the course knowledge itself was insufficient to allow the solution of every parallel test item. The tutors unknowingly com- pensated for this by providing rules of thumb that were not always reliable. Validity of Methodology Although not a fundamental question for this research, some data were collected which relate to the question of method validity. The tentative conclusions below are based on these findings. Conclusions. 1. Valid representations of a tutor's model of the chemistry knowledge that the course was designed to devel0p can be developed by analyzing the instructional tasks and materials. These representa- tions can be formated as procedural flow charts and lists of proposi- tional rules. a. The algorithmic structure of these representations also seems to capture the way the two tutors remediate deficiencies. 196 These tutors also teach (mostly by modeling) heuristic rules for problem-solving. b. Students tend to follow the steps to solution pre- sented during a tutorial when solving other similar problems, which supports the algorithmic nature of the procedural repre- sentations. 2. Tutor stimulated recall reports of what they were think- ing during a prior interaction are probably some combination of accu- rate reporting and post—hoc reconstructive explanations. Thus, the data are descriptive to some extent of practice and of the tutor's theory of practice. Generally, it is unclear which stimulated recall evidence represents interactive occurrences or tutor theory. The accuracy of the proposed knowledge representations prob- ably stems from the fact that these representations and those held by the tutors are derived from the same instructional sources. The data from which conclusions of the verity of stimulated recall reports were made are relatively "soft" for two reasons. First, the frequency counts of indicators of accurate recall are low. Also, these counts reflect volunteered information and as such, are subject to selective reporting. 
A more rigorous sampling of elicited tutor information would help eliminate this possible bias. Summary_of Significant Conclusions 1. The diagnostic model of tutoring based upon a model of medical inquiry does have validity in describing how the two tutors functioned. 197 2. The verbal reports made by tutors during stimulated recall were probably some combination of accurate recall and post- hoc reconstructive explanations. 3. Hypotheses of student knowledge states were defined in terms of the specific knowledge to be learned. 4. Some components of diagnostic reasoning and skill tended to be task specific. 5. Valid representation of the course knowledge can be developed as procedures or propositions. There appear to be three possible interpretations of the results which demonstrate the applicability of the medical inquiry model. First, these findings may actually represent the tutor's interactive diagnostic processing. Some weak data were collected which suggested that the tutor's stimulated recall reports of previous interactive thoughts were sometimes inaccurate. In that case, the reports could represent the tutor's post-tutorial reconstructive explanations. Clark and Yinger (l978) argue that these implicit theories are important in their own right because they relate to mental processes like judgment and decision making. They also sug- gested that teachers' reported theories do not always reflect their employed practices. This presents a third interpretation. It implies that reported and applied implicit theories may be separate concep- tions. However, the consistency of the introspective reports with the tutors' observed behaviors and the conservative criteria used in infer- ring the occurrences of the constituent processes support the validity 198 of the interpretation that the tutor's diagnostic thinking is reflected in the model. Evidence was found in the collected protocols demonstrating the occurrence of the constituent processes identified in the model (hypothesis generation and evaluation; cue acquisition and interpre- tation). The tutors did hypothesize student knowledge states. The particular knowledge base representation underlying an instructional problem determined to a large extent the nature of the hypotheses generated. That is, the hypotheses were found to be con- tent or knowledge specific. Furthermore, since variations in numbers of hypotheses and cues were found to be greater between problem types than between tutors, certain components of the hypothesis-testing strategy are also problem or content specific. This seems to agree with similar findings for physicians (Elstein et 61., 1975)- In order to define the possible student knowledge states that a tutor might hypothesize, an explicit representation of the knowledge states the course was designed to develop is required. This knowledge state was represented as procedural flow charts and lists of proposi- tional rules. A high degree of congruence was found between these representations and that knowledge the tutors applied during an inter- action. Implications and Recommendations The results of this study have certain implications for research and practice. 199 Implications for Research l. The conception of clinical information processing was helpful in determining the mental operations of "expert" tutors. The research presented here demonstrates the applicability of this con- ception when studying tutors. 
It may also be applicable to other case studies of expert thinking and behavior in which judgments have to be made. 2. A diagnostic model of tutoring incorporating human behav- ior and cognition, was put forth to help define what tutoring is and how it works. It demonstrates the necessity of using both behavioral and mental constructs when defining competent performance. 3. Those researchers interested in developing explanatory models of teachers might want to consider focusing upon mental and behavioral operations using the analytic-synthetic approach explored here. 4. One significant implication of the results reported here is the need for research on teaching to consider all of Schwab's (l973) four commonplaces. In particular, the importance of the formal analysis and description of subject matter is suggested by the differ- ential effect of subject matter knowledge on the diagnostic ability of the tutor. Also, the nature of the hypotheses themselves is subject matter specific. The discrete specification of the content dealt with also per- mits a more accurate estimation of generalizability and may also be helpful in determining the relationship between different tasks focused upon during tutoring. 200 Implications for Practice l. The diagnostic model developed here may be one useful way of thinking about the practice of tutoring. The model could pro- vide a handle on where to look for areas of tutor improvement (e.g., in generating hypotheses, in acquiring cues). Also, it may increase tutors' awareness of the tutorial process and they might begin to observe, analyze, and interpret what occurs (using the diagnostic constructs) in order to improve the level of student competency. The diagnostic model implies that tutors need to be thoroughly familiar with (l) the knowledge the students are to learn and (2) possible and probable student deficiencies in order to be effective. Finally, a tutor who adopts this conception may wish to consider the process and content dimensions of diagnosis and treatment. 2. The explicitness of the tutorial decision rules and diag- nostic processes is a formal way to overtly display and communicate what was a largely intuitive application of tutorial strategies. As these strategies are defined in greater detail, objectives and proce- dures (e.g., using the stimulated recall as a training technique) for training can be develOped and tested. 3. Actual computer interactive tutoring already has been demonstrated. As analytical and introspective methods become more refined and more information is collected about tutor functioning, more serious consideration needs to be given to the value of imple- menting computerized tutoring (in what circumstances is it to be used, and is it cost effective, how to articulate it with other instructional methods, how does it affect student actions and attitudes). 201 4. Developing representations of knowledge has some practi- cal significance. These representations might depict what the student is to know as a result of instruction (e.g., that they serve as cog- nitive objectives). Also, these models may suggest areas where the instructional materials need to present more content or require modi- fications in their organization to better integrate the content. Very general knowledge representations might be used as handout summaries or advanced organizers. 5. 
5. The measures of effectiveness presented in this research suggest that a right-wrong global score for any particular test item provides very little information about the precise nature of a student's deficiency and knowledge state. The implication here is that if the student is to be maximally helped, the notion of testing needs to be more diagnostic, making the student errors more evident.

Recommendations for Developing Skilled Diagnostic Tutoring

This study provided evidence of the applicability of a model of medical inquiry in understanding diagnostic tutoring. Although it remains to be seen how extensive this kind of tutoring is, it seems worthwhile to put forth a set of tentative recommendations for tutor training that may guide the development of training programs and, in turn, be tested and modified in the context of practice. It is also important to note that these heuristics apply to the development of diagnostic skill, which is a necessary but not sufficient condition for effective tutoring. Tutors must also be competent in treating the discovered deficiencies.

To help tutors competently apply diagnostic reasoning:
1. Have tutors learn the concepts of the diagnostic model by
   a. identifying concept examples in previously analyzed natural or simulated tutorial protocols and
   b. suggesting appropriate diagnostic reasoning and behavior, and their rationale, to other tutors in a small group upon listening to segments of natural or simulated taped tutorials.
2. Have tutors review their own taped interactions with competent diagnosticians who provide feedback on the tutors' diagnostic abilities (e.g., thoroughness of cue acquisition, accuracy of cue interpretation, adequacy of hypothesis generation and evaluation).
3. Provide a checklist of questions tutors might ask themselves during an interaction to encourage overt diagnostic reasoning (e.g., What do I hypothesize the student's state of knowledge to be? How might I test that hypothesis? How do I know the student meant what I interpreted him/her to mean? How might I check for further evidence that my interpretation was correct? What other evidence do I have that suggests my hypothesis is correct or that suggests other plausible hypotheses?).

To help tutors generate and evaluate hypotheses:
1. Provide or have tutors develop (this requires task analysis skills) an organized knowledge base for each instructional task.
2. Evaluate the tutors' mastery of those knowledge bases.
3. Provide or have tutors generate from personal or tutorial experience a list of typical student deficiencies listed in order of frequency of occurrence.

To help tutors acquire cues:
1. Provide tutors with an initial generalizable cue elicitation strategy (ask the student: Did you attempt the problem? If yes, how? What was the result? Show me what you did. What or where do you think your difficulty is? What information did you use and why?).
2. Suggest appropriate sources to search for cues (previously written notes, the problem itself, etc.).

Recommended Future Studies

1. As a further validation of the model, one might assess the degree of agreement between the researcher's analysis of an interaction and the tutors' self-analysis. The tutor could run through a stimulated recall session after a prior interaction; this would be analyzed using the diagnostic model. The same tutors could then be asked directly if the analysis captures their thinking. This could help decide the validity of the model as a description of practice.
2. More controlled experiments could be conducted in which such independent variables as student deficiency, student errors, and the number and types of cues are manipulated to test their effect on the tutor's diagnostic skill. These experiments might vary in their fidelity to reality. Use of student actors in a simulated tutorial, video or audio segments of programmed tutorial sessions, and short narratives or programmed tutorial transcripts are all possible ways of gaining additional control. The tutors, at convenient points during a simulation, might be asked to report their estimates of the student's knowledge state and what information they used to generate these estimates. These experiments might seek to determine how each cue is weighed and what judgment rules are applied to sets of cues in order to evaluate these estimates. Also, relationships between the diagnostic process of evaluation and effectiveness criteria could be explored.

3. To begin the development of a prescriptive diagnostic model, formalized tutorial rules, derived from models of tutorial interactions, might be used to begin the development of a computer program for interactive tutoring. This program would serve as an explicit model and make it easier to identify its inadequacies (Can the program run? Is it effective?). Those inadequacies would suggest what information to search for in complementary descriptive studies of tutorial interactions. This is an extension of the analytic-synthetic approach to model building.

4. The cue x hypothesis matrix has some potential as a research tool. Process tracing studies of students performing instructional tasks would help in the development of standard matrices which could be compared to those developed from actual tutorials as a means of assessing diagnostic acumen. Also, experts might be asked to develop these standard matrices. The existence of hypotheses which are generated and evaluated on the basis of a single cue could be established using this tool, and, if such hypotheses exist, it would be of interest to determine whether they cause any diagnostic errors. Additionally, these matrices could be used to identify any relationships between cue elicitation and the generation and evaluation of specific types of hypotheses.

5. Using simulated case studies, the nature of a cue could be explored. Measuring the frequencies with which cues are acquired and correctly interpreted might help us to understand which characteristics of cues are related to the ease with which they are acquired and interpreted.

6. A possible expansion of this work could include determining the effect of tutoring two or more students simultaneously. How the tutor deals with this situation and what its effects are on diagnostic reasoning are key questions to be answered.

7. The notion of process diagnosis remains a fertile area for research. Very little is known of the diagnostic categories and their associated cues, although researchers were aware of its importance in helping students to learn how to learn (Glaser, 1976; Robin, 1977).

8. Research efforts might also begin to focus on the student's knowledge state. Once developed, procedural and propositional knowledge representations could be programmed on a computer and deliberate deficiencies created to determine the types of errors made during task performance. These errors could be compared to actual student errors to validate the models. These programs could also be used to train tutors to diagnose student deficiencies. (A minimal sketch of such a simulation appears after this list.)
9. The research into the student's perspective of tutoring is another area that needs to be explored. Students could also undergo post-interactive stimulated recall sessions to determine their perceptions of the tutorial. The effects on the tutorial of students trained in understanding the diagnostic processes could be an important intervention to observe.

10. There are several methodological issues that require additional study. Does the use of video aid accurate recall, and are more nonverbal cues reported? Does recall accuracy increase as stimulated recall follows the interaction more immediately? What is the nature and amount of subject training required for accurate recall?
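As one way of picturing recommendation 8, the sketch below, which is mine rather than anything implemented in the study, shows how a procedural knowledge representation for an initial-final gas problem might be programmed with a deliberate deficiency (here, omitting the Celsius-to-Kelvin conversion) so that the error it produces can be compared with actual student errors; the function name and the sample numbers are assumptions chosen only for illustration.

    # Hypothetical sketch of a "deliberate deficiency" simulation: the correct
    # procedure computes V2 = V1 * (P1/P2) * (T2/T1) with absolute temperatures;
    # the buggy variant omits the Kelvin conversion and predicts a characteristic error.

    def final_volume(v1, p1, t1_c, p2, t2_c, convert_to_kelvin=True):
        if convert_to_kelvin:                      # correct procedural step
            t1, t2 = t1_c + 273.15, t2_c + 273.15
        else:                                      # deliberate deficiency: step omitted
            t1, t2 = t1_c, t2_c
        return v1 * (p1 / p2) * (t2 / t1)

    # 2 liters of gas at 100 °C and 2 atm, taken to 6.75 °C and 1 atm:
    correct = final_volume(2.0, 2.0, 100.0, 1.0, 6.75)             # about 3.0 liters
    predicted = final_volume(2.0, 2.0, 100.0, 1.0, 6.75,
                             convert_to_kelvin=False)              # about 0.27 liters

A student answer near 0.27 liters on such a problem would then be evidence for the hypothesized deficiency, in the spirit of the procedural bugs described by Brown and Burton (1978).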
APPENDICES

APPENDIX A

TASK CLASSIFICATION

[Tables A1 and A2, the task classification tables, do not reproduce legibly in this copy.]
APPENDIX B

KNOWLEDGE BASE REPRESENTATIONS

Notation System

For easy referencing during the protocol analysis, each knowledge base was given a Roman numeral corresponding to the task group it represented. Additional Roman numerals were used to identify those subroutines that were part of two or more task groups. Furthermore, each alternative pathway in a flow diagram was alphabetically labeled and each step within a pathway numerically labeled. Likewise, each principle and organizing concept for the verbal tasks was numerically and alphabetically labeled, respectively. The smaller rectangle boxes that only have arrows coming out of them represent the chemistry knowledge required to perform the indicated steps. These rectangles are referenced by citing the operation number which they feed into but adding the label INFO to distinguish it from the operation.

[Figures B1, B2, and B3 (knowledge base representations I, II, and III) are procedural flow diagrams that do not reproduce legibly in this copy.]

Propositional Knowledge Base IV

Higher Order Rules (HOR)

1. Physical and chemical properties of elements change in a gradual manner from top to bottom of a group or across a row.
2. Variations in properties across a period are greater than variations up or down a group.
3. Elements in the 2nd period are usually much different from heavier elements below them in the same group.
4. In any one group, variations in properties are usually small from period three on down.

Organized Rules

A. Attractive Molecular Forces
1. Ideality is inversely related to Attractive Molecular Force
2. Attractive Force = f(Molecular Polarity and Van der Waals Force)
3. Molecular Polarity is dominant over Van der Waals Forces (in its effect on attractive molecular forces except when molecular polarities are approximately equal; i.e., between two nonpolar molecules, between series of H bonded molecules within one group from period 3 on down; HOR 2, 3)

B. Molecular Polarity
1. Attractive Force is directly related to Molecular Polarity (except as noted above)
2. Molecular Polarity is directly related to Net Molecular Dipole Moment
3. Net Molecular Dipole Moment = Vector Sum of All Individual Bond Dipole Moments

C. Electronegativity
1. Individual Bond Dipole Moment is directly related to Size of Partial Charges at the Two Bonded Atoms
2. Size of the Partial Charges is directly related to Electronegativity Difference Between the Bonded Atoms
3. Electronegativity Difference = Electronegativity of One Atom Minus the Electronegativity of the Other Atom
4. Electronegativity Trend is directly related to Group Number (HOR 1, 2, 3, 4)
5. Electronegativity Trend is directly related to Period (HOR 1, 2, 3, 4)
6. Electronegativity is directly related to the Partial Charge of the Atom

D. Van der Waals Forces
1. Attractive Force is directly related to Van der Waals Force (for molecules that have approximately equal polarities)
2. Van der Waals Force is directly related to Polarizability of Electron Cloud
3. Polarizability is directly related to Number of Molecular Electrons
4. Polarizability is directly related to Electron Distance from the Nuclei
5. Electron Distance is directly related to Molecular Size
6. Molecular Size is directly related to Molecular Weight (given similar shape)
7. Number of Molecular Electrons is directly related to Molecular Weight
8. Attractive Force of Molecules is greater than Attractive Force of Monoatomic Gases (when molecular weights are approximately equal)

E. Hydrogen Bonding
1. Attractive Force is directly related to Hydrogen Bonding (HOR 3)

Other Rules Derived From Interactions
1. Ionic Character is directly related to Molecular Polarity
2. Elastic Collision is directly related to Ideality
3. Elastic Collision is inversely related to Attractive Force (interactions between molecules)
4. Ideality is inversely related to Molecular Polarity
5. Molecular Volume is directly related to Ideality*
6. Ideality is inversely related to Hydrogen Bonding
7. Noble Gases have Complete Outer Shells
8. Critical Temperature is inversely related to Ideality
9. Critical Temperature is directly related to the Temperature Span for Liquid State
10. Ideality is directly related to the Temperature Span for Gas State
11. Molecular Volume is directly related to the Number of Atoms Per Molecule*
12. Molecular Dipole is inversely related to Ideality

*This rule is not completely correct.

Observation-Measurement Procedures (OBS-M)
1. Determine Lewis Dot Structure
2. Determine Molecular Geometry
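Because most of these propositions state that one quantity is directly or inversely related to another, they lend themselves to a simple signed-relation encoding. The sketch below is an illustration of mine, not part of the dissertation's materials; it encodes a few of the rules above and chains their signs along a path of quantities.

    # Hypothetical encoding of a few rules from Propositional Knowledge Base IV:
    # +1 means "directly related," -1 means "inversely related."
    RULES = {
        ("Ideality", "Attractive Molecular Force"): -1,             # rule A1
        ("Attractive Molecular Force", "Molecular Polarity"): +1,   # rule B1
        ("Molecular Polarity", "Net Molecular Dipole Moment"): +1,  # rule B2
        ("Attractive Molecular Force", "Hydrogen Bonding"): +1,     # rule E1
    }

    def chained_sign(path):
        """Multiply the signs along a chain of related quantities."""
        sign = 1
        for a, b in zip(path, path[1:]):
            sign *= RULES[(a, b)]
        return sign

    # Chaining rules A1 and B1 recovers Other Rule 4: ideality is inversely
    # related to molecular polarity.
    assert chained_sign(["Ideality", "Attractive Molecular Force", "Molecular Polarity"]) == -1

An encoding of this kind makes the internal consistency of the rule list easy to check, which is one reason explicit representations are useful for characterizing the knowledge a tutor and student deal with.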
[Figure B4 (knowledge base representations V, VI, and VII: the temperature conversion, unit conversion, and algebra subroutines) is a flow diagram that does not reproduce legibly in this copy.]

APPENDIX C

TUTORIAL INTERACTION PROCEDURES

Instructor Handout for Taping Procedures

1. Student walks up and asks for help about a specific problem.
2. Check problem with list. If problem checks with list, continue.
3. Check to see if Phil is available; if yes, continue.
4. Determine if student has been previously taped. If no, continue.
5. If yes, ask student to participate again. If no, ascertain reason, smooth over any ill feelings. If yes, go to 7.
6. Explain the nature and purpose of the research. Make the following points:
   a. We're trying to understand the strategies that instructors use as they teach.
   b. To learn more about the process, we would like the student's permission to tape the interaction and a few-minute session with Phil Heller after the interaction.
   c. The student does not have to participate in the study in order to obtain your help.
   d. The tapes will only be accessible to the principal investigator and will be treated with the strictest confidence. The student will remain anonymous.
7. Give the student the consent form. If he (she) signs, continue.
8. Flip transmitter toggle switch up (on position).
9. Read the following introductory information into the tape:
   a. Date
   b. Time
   c. Check list for next available interaction number. Recite your code number followed by the next available interaction number. Cross out interaction number.
   d. Problem code number (Study Guide, page number, question or block number; exam number, problem number, term, year)
10. Ask student to repeat his (her) initial comment when he (she) first walked up to you.
11. Interact with student.
12. Interaction ended. Read time into tape.
13. Flip transmitter toggle switch down (off position).

APPENDIX D

TUTOR-STUDENT INTERVIEWS

Pretutorial Preparation Interview Schedule

What is your student level at M.S.U.?
What is your specific field in chemistry?
How many and what type of chemistry courses have you had?
What are your vocational aspirations?
Have you had any formal teaching experiences in addition to tutoring in the chemistry course? Describe them.
How many terms have you tutored in the chemistry course?
How do you view your task in the help room?
Did you or do you prepare for tutoring?
What was the nature of your preparation and when did it or does it occur?
Is your preparation different for different units?
Do you have any particular strategy in mind either before or during your preparation?
Did you prepare for the target research questions?
What was the nature of your preparation?
Did you have any particular strategy in mind either before or during your preparation?

Student Interview Schedule

Introduction
- Experimenter's name and affiliation
- Nature and purpose of research project
- Agenda: Take a few minutes to ask some questions about the previous interaction and the student's particular point of difficulty with chemistry. Will also provide an opportunity to practice what was just learned. Would appreciate the student being as sincere and candid as possible.
- Review: Student anonymity, limited access to tape, confidentiality, nonevaluative use of tape
- Any questions?

Part I
1. Do you recall anything that the instructor said or did during the interaction which helped you to learn chemistry?
   a. Unsure: Provide examples: Gave a good example, reviewed past material
   b. Yes, Affective Statement: Do you recall any specific chemistry content or teaching procedure that was helpful to you?
   c. Yes, Cognitive Statement: Were there any other significant points during the interaction?
   d. No: Go to 2
   What do you prefer the tutor do differently next time?
2. Do you recall any point during the interaction that was unclear or that you had difficulty with? Please describe it.
   a. Unsure: Provide examples, use of undefined terminology
   b. Yes: Were there any other unclear or difficult points?
   c. No: Go to 3
3. Do you have any other thoughts or feelings about the interaction?
   a. Yes, Nonspecific: Can you be more specific? Why was the interaction ______?
   b. Yes, Specific: Go to Part II
   c. No: Go to Part II

Part II

1. Would you come back to see this tutor? Why? Why not?
2. Refocus: Now to deal with problems student had trouble with. Which question was it?
3. What was it about the problem that you didn't understand before you received assistance?
   a. Very Specific Response: Go to 5
   b. Unsure or General Response: Go to 4
4. Did you attempt the problem at all before seeing the tutor?
   a. Yes: How did you attempt it before you saw the tutor? Question the student to determine student's original difficulty.
   b. No: Go to 5
5. Do you think you understand it now?
   a. No: Determine student's difficulty, then terminate.
   b. Yes or Unsure: Go to 6
6. Would you mind trying a practice problem?
   a. Student Refuses Practice Problem: Terminate
   b. Student Attempts Problem: Have student verbalize all thoughts as he (she) solves problem. Have student write down anything he (she) wishes on interview notes. Ask about any unclear reasoning. When finished, provide feedback.
      1. Student Answers Correctly: Go to 7
      2. Student Answers Incorrectly: Determine points of difficulty
7. Thank student for his (her) participation.

List of Student Practice Problems

Task Group 1: Initial-Final Questions

1. A sample of oxygen gas has a volume of 2 ℓ at 100°C and 2 atm of pressure. What will the volume be at 6.75°C and 1 atm pressure?
2. A sample of oxygen gas has a volume of 2 ℓ at 100°C and 2 atm of pressure. What will the temperature in °C be at 1 atm pressure and 3 ℓ volume?
3. A sample of oxygen gas has a volume of 2 ℓ at 100°C and 2 atm of pressure. What will the pressure be at 6.75°C and 3 ℓ volume?
4. A 32 gram sample of oxygen gas has a volume of 2 liters at 27°C and 12.3 atm. What is the volume of 8 grams of oxygen at 127°C and 16.4 atm?
5. At constant volume, a sample of gas at 100°C and 300 torr is cooled to -86.5°C. What will be the resulting pressure?
6. At constant temperature, a sample of gas at 300 torr and 500 ml is placed in a 1000 ml container. What will be the resulting pressure?
7. At constant pressure, a sample of gas at 100°C and 500 ml is cooled to -86.5°C. What will the resulting volume be?
8. At constant temperature, a sample of gas at 300 torr and 500 ml is depressurized to 150 torr. What will be the resulting volume?
9. At constant volume, a sample of gas at 100°C and 300 torr is depressurized to 150 torr. What will be the resulting temperature in °C?
10. At constant pressure, a sample of gas at 100°C and 500 ml is put into a 250 ml container. What will the resulting temperature be in °C?
11. A 100 ml sample of neon gas at 100°C and 300 torr is depressurized at constant volume to 150 torr. What will be the resulting temperature in °C?
12. A 500 ml sample of neon gas at 300 torr and 27°C is depressurized to 150 torr at constant temperature. What will the resulting volume be?
13. A 500 ml sample of neon gas at 100°C and 380 torr is cooled to -86.5°C at constant pressure. What will the resulting volume be?

Task Group 2: Density Questions

1. Calculate the density in g/l of Cl2 at 400 torr and 30°C.
2. Calculate the density in g/ml of 100 ml of Cl2 at 400 torr and 30°C.

Task Group 3: Partial Pressure Questions

1. Calculate the total pressure of the mixture when 5 liters of CO2 gas at 420 torr pressure and 27°C and 2 liters of H2 gas at 140 torr pressure and 27°C are introduced into a 7 liter container at 27°C. (See the worked sketch following this list.)
2. Calculate the total pressure of the mixture when 5 liters of CO2 gas at 420 torr pressure and 2 liters of H2 gas at 140 torr pressure are introduced at constant temperature into a 7 liter container.

Task Group 4: Ideal Concept Questions

1. Which of the following gases is most nearly ideal?
   a. NH3  b. N2  c. NO  d. CO  e. CO2  f. CH4
2. Which of the following gases is most nearly ideal?
   a. NH3  b. N2  c. NO  d. CO  e. CO2  f. CH4
3. Which of the following gases deviates most from ideal behavior?
   a. N2  b. H2  c. HF  d. CH4  e. He
4. Which is more ideal at room temperature, F2 or HF, and why?
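For reference, here is a worked sketch of Task Group 3, question 1. The code and the correction-factor framing follow the pressure-volume rationale described elsewhere in the study, but the solution itself is mine rather than the study guide's.

    # At constant temperature, each gas's pressure scales by V_old / V_new when
    # it expands into the 7 liter container, and the partial pressures add
    # (Dalton's law).

    def new_pressure(p_old_torr, v_old_liters, v_new_liters):
        return p_old_torr * (v_old_liters / v_new_liters)   # correction factor = V_old / V_new

    p_co2 = new_pressure(420, 5, 7)   # 300 torr
    p_h2 = new_pressure(140, 2, 7)    #  40 torr
    p_total = p_co2 + p_h2            # 340 torr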
APPENDIX E

STIMULATED RECALL MATERIALS

Instructions to the Tutor

As you know, I have a tape recording of your teaching in the chemistry help room. We are now going to review that recording. As you were teaching, many thoughts probably passed through your mind. Some of these you may have written down on the board or said out loud. Many, however, were not reflected in any overt communication. As we review the tape, I would like you to try to recall any thoughts or feelings that occurred to you at that particular point during the interaction. For example, as you heard a particular student's comment, you may have formed an impression of that student's understanding of chemistry. Or, you may have imagined another student saying the same thing in a previous interaction. Or, you may have wondered if your previous communication to the student was clearly heard and correctly understood. As we listen to the tape, I want you to stop it whenever you recall anything that you thought or felt at that point in your teaching. I want to get inside of what was happening as you were teaching.

Before you is a covered copy of your written communication as it occurred during the interaction we are about to listen to. The communications are arranged down a vertical time axis on the page. As each written communication occurs on the tape as we listen to it, the communication will be uncovered by the experimenter. Do you have any questions about this entire procedure?

Stimulated Recall Probes

After the Initial Student Comment or Question
- Were you thinking or feeling anything particular then, before you started the interaction? Anything about the student? Anything about yourself? Anything about me?

After Tutor Asks a Question
- Were there any thoughts or feelings going through your mind then?
- Were you thinking or feeling anything in particular then?
- Did you have any particular strategy in mind there?

After a Student Response or Comment
- Were there any thoughts or feelings going through your mind then?
- Were you thinking or feeling anything in particular there?
- Did you have any particular thoughts or feelings then as the student made that comment?
- Did you have any particular thoughts or reactions to the student's response at that time?

After a Tutor's Negative Response to Any of the Above Probes
Continue tape, and after the next instructor comment:
- Did that remind you of anything that you were thinking at that time?

During a Tutor's Post-hoc Analysis or Lecture
Cut off after a few sentences:
- Were you thinking that then? Why don't we save that until the tape is finished. For now, I would like you to concentrate on what you remember of your reactions, thoughts, and feelings as they occurred during the interactions.

APPENDIX F

PROTOCOL ANALYSIS FOR INTERACTION 1-15

Protocol Analysis Symbol Definitions

ACT: Interaction tape position number.
Apt: Aptitude type of hypothesis.
Cue Acquisition: Taking in information relevant to evaluating a hypothesis.
Cue Interpretation: Judging whether a cue supports or refutes a given hypothesis.
DEAL: That part of the knowledge base representations which were dealt with during interaction.
Dx: Diagnostic type of hypothesis.
EVAL: The process of evaluating a particular hypothesis.
GEN: The generation of a particular hypothesis.
H: Tutor hypothesis of student knowledge state.
I: Instructor or Tutor.
I': A direct quote of what the tutor was saying that was transcribed from the tape.
Line: Line number.
OBS: Reference to written tutorial observation notes.
P: Page number.
PR#: Protocol number.
R: Researcher.
REM: Remediation type of hypothesis.
S: Student.
/S: Denotes who stopped interaction tape for the purposes of stimulated recall.
SR#: Stimulated recall tape position number.
Denotes an unimportant part of transcription was excluded.
Denotes a period of silence on tape.
Denotes some communication that could not be accurately transcribed.
+IIIA1b: Denotes that step 1b of route A of knowledge base III is known by student.
-IB4INFO: Denotes that the rule required to process step 4 of route B of knowledge base I is unknown by student.

[Figure F1 reproduces the observation notes form for interaction 1-15 (3/6/78, Task Group 3, Sample Exam 6, Problem No. 8); the form does not reproduce legibly here. The target question read: "A gas container is divided by a removable partition into two separate 2.00 liter compartments. One compartment is filled with helium gas at 760 torr and the other is filled with methane gas at 760 torr. When the partition is removed, what is the partial pressure of the methane in the container? A. 760 torr; B. 1140 torr; C. 1520 torr; D. 330 torr; E. 380 torr."]

Figure F1. Observation notes for interaction 1-15.

[Figure F2 reproduces the student interview notes for interaction 1-15 (practice problem: Task Group 3, No. 2); it does not reproduce legibly here.]

Figure F2. Student interview notes for interaction 1-15.

[Figure F3 is a nine-page facsimile of the protocol analysis sheets for interaction 1-15, with columns for the transcription, SR #/S, ACT, inference, PR #, page, and line; the sheets do not reproduce legibly in this copy.]

Figure F3. Protocol analysis for interaction 1-15.
“VEPVinRT MV cowcLuoe ’ ZERO °K ‘ANS ZERO V S WES NAT" EXPLAIN no ICLP ’ II PROVIDE V-IDEAL RULE Go To MAW STICK NOL exeum POLARITY, m- TIAL cmnees,mo mw LEWIS DOT 1" mun SANSCORRECT S LACKS INDIE mow? YES CUE INTERP CRANE SEPARATION EV‘L I "a YES EXPLAINWTRY W IOL VES S LABELS “TRY ? °::.:°.°, T _J_“ RENEDIATE CONF USION ALL CIDICES DEALT WITH ? OOONTO NEXT woe “ CHOICE Go To SZ LAEL ”L KNETRY l CUE INTERP “N: "R EVAL1 I'bI‘h 4. m STICK IOL WA NT FACTOR ”R I SS S ASIO FOR ANOTHER EXAMPLE M INTERP 5’ SEN: Ha , N‘ EVAL: an ,HD mmouce CRITICAL '° reap nan “NZ Ho J- .. “AL WITH KW TASK c an I 78 S SAYS UNDERSTOOD CUE INTERP SEN: Ha EXPLAIN HOW TO USE PERIODIC TASLE 271 IKNT IF V N“ POLAR ONCES SWSEST SNALLEST ' SS SUSSEST SIALLES‘T IS NRRECT ANS s ASKS “our Locmou" or Atom neumve TO r . exeum w, EXCEPTION Figure G3. 70 S2 POINT WT CORRECT ANS ewe noomovm. wow-9‘ POLAR wot EXAIPLES TO neeuce w. I SS EXPLAIN NEW “I. ARE NONPOLAR EXPLAIN TO ELIIINAT o USE # AND SIZE W ATOIS SUNNARIZE RATIONKE. IDEAL SAS FEATURES swam-o nwo-“ canon. CUE INTE" .,l EXPLAIN eLecTnowee“ e PERIODIC TASL£,USE EXARPLE Tutorial performance model IV-l. S ASKSTOWMKPROST S WRITES SIVENS S USES RATIO FORMAT 272 (: SHIRT CUE INTERP SEN I H. S ASK eon S DIFFICULTY IS SUNNARIZE ROUTE C EXPLAIN 2 POSSIBLE . RGJTES mom PROS QES SS ASK FOR RAT IOHALE .___.L__ s succesrs nmo " RAT IONA LE PROVIDE ROUTE 3 '° RAT IONALE C m) Pnevms 1'“ amnoeo To INTERACT no 3 wnmne? wmc s 2 no S a convent: unm S cue INTERP cen e EVAL: no IO , wane cwens II M "3 ouune wrrn "° unn' convent "° noure c 9 none 1’ we: IS IT an 3 TO DETERNINE ya; a ‘6’? P neunonsme Wu“? ’ no ” s ewes wnone RATIONALE cue INTERP EVAL : n. WRITE PV 3 MIT WISH TO YES PM SETTER MRST? NO ASK eon vanueLe " RELATDNSHIP S ANS CORRECT S SAYS 4 S EXPLAINS MDI'FWT ROUTE S VS C IS CONVERT I‘CESSARV UNITS SAT P INCREASES "3 connect rncron ? no RENEDIATE CONFUSI PNVIDE MCT FACT"! ‘ wane srecmc ea 9' xznxp.‘ 273 26 SOLVE E0 32 N0 PROVIDE ADVANTAGES ANOTHER FACTOR? ll OF ROUTE a ‘ 33 S PROVIDES CORRECTION FACTOR 1 END 34 CHECK ANSWER T I 35 ‘ ’ as EXPLAINS OWN DIFFICULTY,L ,L 1 S [ROUTE 3 vs c I SUGGEST USE ROUTE C Figure G4. Tutorial performance model I-2. CUE INTERP SEN : I'I‘ C ASK FOR DIFFICILTY m HOW TO “SIN CUE Am NAJOR OR TRIVIAL DX women no nunon ? S DEF INES DIFFICILTY ? cue InTERP ' SEN: Ho Eual s PROVIDES mow STEP (V7 wewew ASK now To FIND PT ' cue Ace: cen Ano TEST: n, LIST GIVENS L S SUCCESTS TREAT CASES INKPENDENTLY 41 EXPLAIN OAS DIFFUSION“ To DETERNINE V7 PREVIOUS ASK FOR IEXT KIDWST TASK III PR“? suCCEST SINILARITY '7 To PREVIOUS TASK m ASK FOR VT on now chAnces OUEAOOTESTanOR een: "o IIIAIS NR "° S Ans CORRECT? YES ‘ PROVIW Km RELATED TO V CHANCE S IS INCORRECT CUE INTERP EVAL: Ho ASK FOR V1 PROS RETOR ICAL QUESTION? YES YES OR fin D ‘ PROVIDE VT VALue END S sees NISTAkE . II SUGGEST: IEXT STEP cue INTERP sen Ano EVAL: no l IS PROVIDE RATIONALE Asn FOR P1» L .. S ANS CORRECT 275 8 AI. Inca-IECT 33 Ann P} RAIATO P, OAS I SSI PROVIDE «new. :0 “M” m": ° "9' x . x . 1 cue InTERP I E y EVAL: HR ‘3 m c TO ATTEND " VIRITEPI WUEMSASA‘ TOVM,PEFFECT, Ano V-P RATn cue InTERP E‘L? No szITu EO " Asnnowmoomn " . cm Ace x. ' x; '3' TEST: n. 9' 4: man NO no AT'SIARTOPEID vannce,V-P . exPLAIn V‘ cnmee TASK m P ern PAC- YES Y. m 3 To V-P Ruu: 6° 7° cue InTERP F an: R. use exAIIPLE To ” ASK POR CORRELATIon “ exPLAIn V-P Rue FACTOR ST cue INTERP " no OER: "R I s ANS CORRECT? 
Figure G5. Tutorial performance model III-2.

Figure G6. Tutorial performance model IV-2.
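Model III-2 (Figure G5) apparently concerns a two-gas task in which the tutor prompts the student to write the total pressure as the sum of the partial pressures of gases A and B, using the subscripts defined above. A worked instance of that relation is sketched below; the particular quantities are assumed for illustration only.

```latex
% Illustrative use of the partial-pressure relation the task apparently draws
% on (quantities assumed): gases A and B share a container, with
% P_A = 0.60 atm and P_B = 0.25 atm.
\[
  P_T \;=\; P_A + P_B \;=\; 0.60\,\text{atm} + 0.25\,\text{atm} \;=\; 0.85\,\text{atm}.
\]
```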