LIBRARY
Michigan State University

This is to certify that the dissertation entitled

Expert And Novice Instructional Developers: A Study In How Organization Of Knowledge/Experience Is Displayed In Problem Solving Performance

presented by

Paulette Dieken Lovell

has been accepted towards fulfillment of the requirements for the Ph.D.
degree in Educational Systems Development

Major professor

Date: November 12, 1987

MSU is an Affirmative Action/Equal Opportunity Institution

EXPERT AND NOVICE INSTRUCTIONAL DEVELOPERS: A STUDY IN HOW ORGANIZATION OF KNOWLEDGE/EXPERIENCE IS DISPLAYED IN PROBLEM-SOLVING PERFORMANCE

By

Paulette Dieken Lovell

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Educational Systems Development

1987

Copyright by
PAULETTE DIEKEN LOVELL
1987

ABSTRACT

EXPERT AND NOVICE INSTRUCTIONAL DEVELOPERS: A STUDY IN HOW ORGANIZATION OF KNOWLEDGE/EXPERIENCE IS DISPLAYED IN PROBLEM SOLVING PERFORMANCE

By Paulette Dieken Lovell

This study was a qualitative inquiry into how instructional developers organize their knowledge/experience while solving instructional development problems. The subjects were two expert and two novice instructional developers who verbally responded to a total of five problem solving tasks of two different types. Three of the tasks were complex instructional development problems and two of the tasks required card sorting. Five questions related to organization during problem solving guided analysis of the data. These questions were related to sequence of problem solving, time spent solving each of the problems, extent of detail produced, categories produced, and consistency of problem solving strategies across the five tasks. During data analysis, two strategies were used extensively to verify findings and thus strengthen their validity. First, findings were subjected to a search not only for evidence but for counterevidence as well.
Second, triangulation was used through data analysis consisting of multiple examples of more than one problem type and a variety of approaches to analysis. The approach to data analysis is described in detail throughout this study.

In this study, expert instructional developers systematically worked through complex problems, while novices used a more random approach. Both groups took more time before responding to the complex problems than they did before responding to the card sorting tasks. Differences between the expert and novice groups regarding extent of detail included, for the experts, greater precision and sophistication, generation of multiple alternative solutions, more persistent information gathering and a greater fund of knowledge. Expert instructional developers also displayed an understanding of the relationships among component parts of the instructional development process, and did so at a more abstract level. Consistently, experts were more systematic, produced both more data and more detail at a sophisticated level and noted awareness of relationships among process components. Novices used a random-like approach to problem solving, produced less data and less detail on a more concrete level and noted few relationships among components of the instructional development process.

I dedicate this work to my parents.

ACKNOWLEDGMENTS

My family, friends and colleagues all own a part of this dissertation. They were patient and unwavering in their support and for that, I thank them. My graduate committee members were Cass Gentry (chairman), Bruce Miles, Peggy Riethmiller and Steve Yelon. Each is a uniquely gifted professional and in concert they are extraordinary. For their insistence on excellence, invaluable feedback and personal approach, I thank them. Any new graduate, including myself, could do no better than to emulate these four eminently qualified and caring professionals.
vi

TABLE OF CONTENTS

                                                               Page
LIST OF TABLES ............................................... x

CHAPTER 1 .................................................... 1
  Introduction to the Study ................................. 1
    Introduction ............................................ 1
    Statement of the Problem ................................ 1
    The Purpose ............................................. 2
    Rationale ............................................... 3
    The Research Questions .................................. 7
    Scope and Limitations ................................... 9
    Definition of Terms .................................... 10
    Plan of the Dissertation ............................... 12
    Chapter Summary ........................................ 12

CHAPTER 2 ................................................... 14
  Review of the Literature ................................. 14
    Introduction ........................................... 14
    Information Processing ................................. 15
    Information Processing Theory .......................... 15
    Sensory Stores ......................................... 16
    Short Term Memory ...................................... 16
    Working Memory ......................................... 17
    Long Term Memory ....................................... 18
    Structure and Function of Memory ....................... 18
    Topics Related to Information Processing Theory ....... 21
    Problem Formulation .................................... 22
    Chunking and Schema Theories ........................... 24
    Cognitive Strategies ................................... 27
    Summary ................................................ 30
    The Expert/Novice Literature ........................... 30
    Studies ................................................ 31
    Other Expert/Novice Literature ......................... 45
    Expertise in Instructional Development ................. 50
    Chapter Summary ........................................ 53

CHAPTER 3 ................................................... 54
  Design of the Study ...................................... 54
    Introduction ........................................... 54
    Research Questions ..................................... 54
    Qualitative Methodology ................................ 55
    Use of the Verbal Protocol ............................. 60
    Instruments ............................................ 60
    Instructional Development Problems ..................... 61
    Card Sorting Tasks ..................................... 65
    Card Sorting Task--Techniques .......................... 66
    Card Sorting Task--Models .............................. 68
    Subjects ............................................... 70
    Procedure .............................................. 73
    Analysis Procedures .................................... 74
    Chapter Summary ........................................ 80

CHAPTER 4 ................................................... 82
  Data Analysis ............................................ 82
    Introduction ........................................... 82
    Three Instructional Development Problems ............... 84
    Individual Findings--Three Instructional
      Development Problems ................................. 85
    Expert 1 ............................................... 85
    Expert 2 ............................................... 87
    Novice 1 ............................................... 88
    Novice 2 ............................................... 90
    Analysis of the Data--IBSTPI Standards ................. 91
    Expert and Novice Performance--IBSTPI .................. 92
    Summary of Findings--Three Instructional
      Development Problems ................................. 97
    Techniques Card Sorting Task .......................... 103
    Findings .............................................. 105
    Individual Findings--Techniques Card Sort ............. 105
    Expert 1 .............................................. 105
    Expert 2 .............................................. 106
    Novice 1 .............................................. 107
    Novice 2 .............................................. 108
    Summary of Findings--Techniques Card Sorting ......... 109
    Models--Card Sorting .................................. 113
    Individual Findings--Models Card Sort ................. 115
    Expert 1 .............................................. 115
    Expert 2 .............................................. 116
    Novice 1 .............................................. 116
    Novice 2 .............................................. 117
    Expert and Novice Findings--Models Card Sorting ...... 117
    Chapter Summary ....................................... 121

CHAPTER 5 .................................................. 127
  Summary, Conclusions and Recommendations ................ 127
    Introduction .......................................... 127
    Summary of the Study .................................. 127
    Findings, Hypotheses and Recommended Questions ........ 128
    Question 1 ............................................ 129
      Finding 1 ........................................... 129
      Hypotheses Generated ................................ 130
      Recommended Questions ............................... 131
      Finding 2 ........................................... 131
      Hypothesis Generated ................................ 131
      Recommended Questions ............................... 132
      Finding 3 ........................................... 132
      Hypothesis Generated ................................ 132
      Recommended Questions ............................... 133
      Finding 4 ........................................... 133
      Hypotheses Generated ................................ 133
      Recommended Questions ............................... 134
    Question 2 ............................................ 135
      Finding 1 ........................................... 135
      Hypotheses Generated ................................ 135
      Recommended Questions ............................... 136
      Finding 2 ........................................... 136
      Hypothesis Generated ................................ 136
      Recommended Questions ............................... 137
    Question 3 ............................................ 137
      Finding .............................................. 137
      Hypotheses Generated ................................ 138
      Recommended Questions ............................... 138
    Question 4 ............................................ 139
      Findings ............................................ 139
      Hypothesis Generated ................................ 140
      Recommended Questions ............................... 140
    Question 5 ............................................ 141
      Findings ............................................ 141
      Hypothesis Generated ................................ 141
      Recommended Questions ............................... 141
    General Conclusions ................................... 142
    Chapter Summary ....................................... 144

APPENDICES
  APPENDIX A - Problems Presented to Subjects ............. 148
  APPENDIX B - Bennett's List of Techniques ............... 164
  APPENDIX C - Models ..................................... 167
  APPENDIX D - JID Competencies ........................... 179

REFERENCES ................................................. 181

ix

LIST OF TABLES

Table                                                        Page
 1  Data Elements for the Three Instructional
      Development Problems ................................. 83
 2  Data Elements for the Card Sorting Tasks ............... 84
 3  Total Time Taken for the Three Problems
      in Minutes .......................................... 101
 4  Time in Minutes Taken Before Verbal Response
      to the Three Problems ............................... 101
 5  Lines Produced in Responding to the Three
      Problems ............................................ 101
 6  Instructional Development Vocabulary Used ............. 102
 7  Sequence of Problem Solving ........................... 123
 8  Total Time for Problem Solving ........................ 123
 9  Extent of Detail Produced ............................. 124
10  Categories Produced ................................... 125
11  Consistency Across Tasks .............................. 126
CHAPTER 1

Introduction to the Study

Introduction

This chapter discusses the problem and the rationale for the study. It introduces the research question, presents the scope and limitations of the study, offers definitions of the terms utilized and summarizes the contents of the remaining chapters. The purpose of this study is to investigate how expert and novice instructional developers organize their knowledge/experience when solving instructional development problems.

Statement of the Problem

In recent years, instructional developers have been working to describe the state of their changing profession and its theoretical underpinnings and to identify certain of its aspects which are in need of research. Lists of core competencies for the instructional/training development professional have been developed (Task Force, 1981; The International Board of Standards for Training, Performance, and Instruction, 1986). Systems approach models have been described and compared (Trimby & Gentry, 1984). Practitioners have been surveyed to determine whether they actually use the orderly, systematic models approach advocated in training and development textbooks, and they have indicated that the extent to which they do seems to depend on the practitioners, the industry and the size of the organization (Zemke, 1985). Bass and Dills state that ". . . better efforts must be made to chronicle what instructional developers do and why" (1984, p. 594). Silber (1981) advocates analysis of the characteristics of good instructional developers to determine their skills and competencies. Yet, there is very little research to show how instructional developers actually go about solving the kinds of problems they encounter in the practice of their profession.
A better understanding of what instructional developers do during problem solving--how they organize their knowledge/experience during this process--might provide the kind of evidence that could contribute to the efficiency and effectiveness of programs of study in instructional development. The student of instructional development (novice) learns about models and techniques, but these alone are not the essence of successful practice (Silber, 1981). We need to know more about how these models and techniques are used by the experienced instructional developer (expert) and to what extent other skills are important to the work of the skilled practitioner.

The Purpose

The purpose of this study was to investigate how expert and novice instructional developers organize their knowledge/experience when solving instructional development problems. By looking carefully at the problem solving strategies used by instructional developers, we might specify more precisely the competencies which have been listed as skills essential to the practitioner, gain a better understanding of the application of the theoretical models and techniques taught to our graduates and begin to see more clearly how knowledge/experience affects the practice of the instructional developer.

Rationale

There are a number of reasons why it may be worthwhile to investigate the problem solving strategies of expert and novice instructional developers. One of the early lists of core competencies was developed by a special Task Force of experienced instructional developers. The third draft of that list was evaluated by over 200 Association for Educational Communications and Technology (AECT) members who responded to a questionnaire (Task Force, 1981). More recently, the International Board of Standards for Training, Performance and Instruction has published a comprehensive listing of competencies and their component behaviors.
While these lists are important attempts at defining what instructional developers need to know how to do, they do not explain how the practitioner uses these competencies. The training of instructional developers includes study and practice in the use of various techniques as well as use of models ranging in complexity from models of the entire development process to models specific to tasks within the development process. The assumption is that instructional developers either refine these models, synthesize a number of them or create new models when solving instructional development problems (Trimby & Gentry, 1984; Andrews & Goodson, 1980). Yet when surveyed, the practitioners taking part in the Zemke study reported that they do not use a rigorous, full-scale systems approach in the way recommended by textbooks (Zemke, p. 107). Thiagarajan (1976) found that three approaches to three different instructional development problems produced desirable student performance. Components of instructional systems development approaches such as analysis of the instructional task, design of material, and evaluation and revision based on feedback were used, though the order in which they were used had been altered. Thiagarajan suggested that certain modifications of the systems approach (the analysis, define, and evaluate sequence) might be important to the success of a project. In a given situation, one sequence of model components might be more useful than another sequence. In a 1981 Symposium on Training Instructional Developers, Bratton, Markle, Wallington and Silber all discussed indications that the work of the instructional developer is far more than the application of a set of rules. They considered how underlying skills such as those required to work in an unfamiliar content area (Bratton), those needed to apply creative solutions to problems (Markle), "generic skills" (Wallington) and cognitive strategies (Silber) are needed in order to be a competent instructional developer.
During the Symposium, Silber suggested that research should be directed to validating these underlying skills, exploring relationships between the theoretical models describing them and describing the components of underlying skills. To summarize, it is generally agreed that instructional developers need to have certain competencies if they are to be effective practitioners. These competencies may include the use of models, techniques and underlying cognitive strategy skills. One way to gain some information about these various competencies of instructional developers would be to study their verbalized reports of their thinking while solving instructional development problems. By studying the differences between the problem solving strategies of expert and novice instructional developers, one might 1) identify gaps between the performances of expert and novice developers which could be used to guide decisions about what novice developers need to learn, 2) begin to see whether instructional developers use models and techniques and, if so, how they use them and 3) gain some insight as to how the organization of knowledge/experience is displayed during the problem solving process. While there appears to be very little pertinent research in the area of problem solving by the instructional developer, problem solving studies have been conducted in teacher planning (Yinger, 1977), expertise in chess decision-making (Chase & Simon, 1973) and numerous studies in expert-novice problem solving in physics (Chi, Glaser, & Rees, 1982). Because knowledge/experience is important in the problem solving of instructional developers, the Chi et al. studies are particularly instructive in that a variety of approaches was used to find out how an organized knowledge base contributes to verbal reports of thinking during problem solving.
When Chi and his colleagues analyzed the verbal protocols (reports of thinking) of experts and novices, they found few quantitative differences. Novices required less time than did experts to solve a series of problems, but this was thought to be related to their higher error rate. The experimenters hypothesized that if the task had emphasized speed, the experts could have solved the problems more quickly than the novices. By focusing on inferences made during observed thinking, however, they found differences in quality of thinking in terms of depth and breadth of responses. They further speculated that subjects mentally reconstructed a problem in the context of knowledge available for that problem and that solvers think about problems by using categories which direct their problem solving. They found, too, that novices categorized physics problems by surface structures (e.g., "rotational things") and experts categorized by physics' fundamental laws. Additional findings included observed differences in the ways that knowledge was hierarchically organized. Experts produced initial categories at a higher level of abstraction, while novices' initial categories were compatible with those found at lower levels of the experts' hierarchy. Experts produced more complete information and demonstrated better understanding of procedural knowledge (Chi et al., 1982). Wallington (1981) stated that instructional development skills can be inferred by observing a number of performances under similar conditions, using different performers. A study of the verbal protocols of expert and novice instructional developers might, therefore, show that like the physicists, they categorize and relate information differently and, in general, solve problems in a qualitatively different way. Through analysis of differences in the organization of expert and novice knowledge/experience during problem solving, several contributions could be made to the educational technology field.
It could give better definition to the knowledge/experience instructional developers need in order to be effective practitioners and thus give some direction to those who must make decisions regarding what and how to teach novice instructional developers. Numerous theoretical bases form the foundation for this study, including information processing, schema theory, and problem solving, as well as some literature related to the use of models and other skills in the practice of instructional development. This study could contribute to these related bases. Finally, such a study would likely generate questions which could focus future research studies.

The Research Questions

The central research question to be explored is: How is the organization of knowledge/experience displayed in the problem solving performance observed in selected expert and novice instructional developers? In order to answer this question, the following more specific questions were proposed:

1. How do expert and novice instructional developers differ in the sequence they use to work through selected problems?

2. How do expert and novice instructional developers differ in the time it takes to work through each of the selected problems?

3. How do expert and novice instructional developers differ in the extent of detail they generate when working through selected problems?

4. How do expert and novice instructional developers differ in the way they categorize selected problems into units?

5. How do expert and novice instructional developers differ in consistency in sequence of problem solving, time spent working on the selected problems, extent of detail generated, and categories imposed across selected problems?

This inquiry into the organization of knowledge/experience by expert and novice instructional developers during problem solving was approached by asking subjects (two expert and two novice instructional developers) to respond verbally to the same problems.
Three of these problems simulated situations an instructional developer might encounter in practice. The other two problems were card sorting tasks. Subject responses were audiotaped and the resulting typed transcripts constituted the research data. Analysis consisted of coding the data, generating and asking questions that might lead to answers to the research questions and then searching the data for pertinent clues. This process is described in detail in Chapters 3 and 4.

Scope and Limitations

In order to limit the scope of this study, the focus was on the organization of knowledge/experience as it contributed to the problem solving behavior of expert and novice instructional developers. This study of the problem solving strategies used by four individuals did not focus on use of statistical methods in order to obtain generalizability to the entire population of instructional developers. Instead, from a methodological perspective, the focus was on in-depth qualitative analysis. What is learned about instructional developers in this study must, therefore, be viewed from the stance that the subjects may or may not be representative of the population of instructional developers because they were not selected by traditional sampling techniques. On the other hand, a large data sample from each subject (in the form of transcribed verbal responses) was analyzed from multiple perspectives and this formed the basis for observation of recurring patterns of behavior. In this way, qualitative information was gained which, when compared with similar research studies in other fields, can be used to begin formulating hypotheses about the organization of knowledge/experience as it contributes to the problem solving behavior of expert and novice instructional developers. The study investigated strategies over three selected instructional development problems and two card sorting tasks so that conclusions drawn were not limited by the structure of a single task.
While every effort was made to be systematic and to faithfully represent the data, the reader will find the research to be descriptive rather than experimental in nature. Because simulated problems were used, responses may have differed from those a subject would make under real circumstances. In a real situation, the instructional developer would respond partly on the basis of information gained through communication with the client. For this study, some control existed in that no additional information was given to the subjects during problem solving. This avoided the problem of unintentional organization being provided by the researcher. Other limitations, more specific to the data, are described in Chapter 5.

Definition of Terms

The following terms were used fairly extensively throughout the report of this study. Other pertinent terminology is defined, as appropriate, within the text.

Instructional development: ". . . a systematic approach to the design, production, evaluation, and utilization of complete systems of instruction, including all appropriate components and a management pattern for using them" (AECT, 1979).

Novice: An instructional developer who has less than a master's degree in the field of educational systems development, has taken at least one course in an instructional development program, is considering becoming an instructional developer and has no full-time work experience in the field of instructional development. For this study, the researcher's committee used these criteria for the selection of subjects.
Expert: An instructional developer who holds a doctorate in educational systems development or educational technology, who teaches or has taught instructional development, who has at least five years experience in the practice of instructional development in business and/or industry and/or higher education, has published in the instructional development literature, and whose work is highly respected by her/his peers in terms of the instructional development processes used and the products developed. For this study, the researcher's committee used these criteria for the selection of subjects.

Verbal protocol: A transcribed record of audiotaped data elicited from subjects who have been instructed to "think aloud".

Cognitive strategy: ". . . the skills by means of which learners regulate their own internal processes of attending, learning, remembering, and thinking. . . cognitive strategies are largely independent of content and generally apply to all kinds" (Gagne, 1985, pp. 55-56).

Plan of the Dissertation

Chapter 1 explains the problem, the purpose and the rationale for the study. Research questions are outlined and the approach to the study is briefly explained. Anticipated limitations are presented, as are definitions of commonly used terms. Chapter 2 focuses on a review of the literature. The major topics in this review are information processing, expert/novice studies and instructional development literature related to the development of competency in the field. In Chapter 3, the methodology for this study is explained. Reasons for selecting a qualitative approach are discussed as well as the process of using the approach. The instructional development problems and card sorting tasks are explained in detail. Chapter 4 is a report of the results of data analysis. Performance is described for each expert, each novice, and the expert and novice groups for each research question and each problem posed.
In Chapter 5, concluding hypotheses are formed in the con- text of limiting factors and recommendations are stated. Wm Documents describing competencies essential to the practice of this profession have been produced in an attempt to define what is important for the instructional developer to know how to do and efforts have been undertaken to help understand the application of models and techniques in the practice of instructional develop- ment. It is important to understand these aspects of the 13 profession in order to appropriately improve programs of study in instructional development and to continue to monitor and upgrade its practice. Because we do not know exactly how instructional developers solve the problems they encounter in the practice of their profession, this study proposes to investigate how expert and novice instructional developers organize their knowledge/ experience when solving selected instructional development prob- lems. CHAPTER 2 Review of the Literature ntr In Chapter 1, this study was described as being an inquiry into the organization of knowledge/experience of expert and novice instructional developers during problem solving. The methods of data gathering included problem solving and card sorting tasks to which each subject would respond verbally. Data would then con- sist of transcripts called verbal protocols and these would in turn, be analyzed from a variety of approaches. By investigating indicators of organization of knowledge/experience of expert and novice instructional developers, we might learn more about how instructional development practitioners go about solving the types of problems they are likely to encounter. A search of the literature produced no studies of differ- ences between expert and novice instructional developers. Three areas, however, appeared to offer promising perspectives toward investigation of the research question. 
These included the information processing literature and related studies from cognitive psychology, expert/novice studies from a variety of disciplines, and the instructional development literature related to the development of competency in the field. This chapter surveys the literature which appears most pertinent to a study of the organization of knowledge/experience of expert and novice instructional developers during problem solving.

Information Processing

While the generally agreed upon components of information processing theory are presented here as distinct entities, it should be noted that this distinction serves the purpose of explicating the theory in general. It will be shown that, in reality, the structure and function of these entities are somewhat less than certain. The purpose of this section of the review is to provide a basis for understanding the literature related to expert/novice studies.

Information Processing Theory

Information processing theory is based on the hypothesis that human cognition is information processing. Cognitive process is a sequence of internal states which are transformed by a series of information processes. Information is considered to be stored in memories which have differing capacities and functions. These memories include the sensory stores, short term memory, and long term memory (Ericsson & Simon, 1980, p. 223). Some would include working memory in this list (Shavelson, 1974; Anderson, 1982; Gagne & Glaser, 1987).

Sensory Stores

The sensory stores are where it all begins. An environmental stimulus activates receptors which transform the stimulus to neural information. This information enters and briefly remains in the sensory register. Selectively perceived information then enters short term memory (Ericsson & Simon, 1980; Gagne, 1985, pp. 72-73).

Short Term Memory

It is generally agreed that short term memory can store limited amounts of information for a limited time period.
Miller (1956) concludes that short term memory can hold seven plus or minus two units of information. Current theories hold that by chunking units of information, for example, by grouping 21 numbers into seven groups of three numbers, the capacity of short term memory can be increased. Theories related to chunking will be discussed in greater detail later. According to Gagne and Glaser (1987), practice can increase the amount of information held in short term memory. When a problem exceeds the storage capability of short term memory, rehearsal and chunking are required. However, such actions make demands on attention, which may have the effect of replacing one storage constraint with another.

Ericsson and Simon (1984) propose additional assumptions about short term memory: 1) information enters short term memory in a sequential manner; 2) when verbalizing, the information vocalized is a verbal encoding of the contents of short term memory; and 3) complex thoughts are not kept as whole entities in short term memory, but rather are rapidly accessed from a long term memory network of information related to the complex thought.

Working Memory

Shavelson (1974) states there is no physiological basis for working memory, but he describes the entity as capable of storing information for hours or perhaps days. Its structure corresponds to the sequence of the task at hand. Gagne and Glaser (1987) describe working memory as a subset of short term memory. In general, these authors appear to agree that input into working memory is related to information stored in long term memory. Gagne and Glaser indicate that the functions of working memory include retrieval and rehearsal. For retrieval, the contents of short term memory are compared with content in long term memory, matched, recognized, and integrated with retrieved long term memory contents. Rehearsal is also conducted in working memory.
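The chunking idea cited earlier in this section, recoding 21 digits into seven three-digit groups, can be sketched in a few lines of code. This is an illustration added here, not part of any cited author's apparatus; the digit string and the helper function are invented for the example.

```python
# Illustrative sketch of Miller's (1956) chunking example: recoding
# 21 single digits into seven three-digit groups reduces the number
# of units that short term memory must hold from 21 to 7.

def chunk(items, size):
    """Group a flat sequence into consecutive fixed-size chunks."""
    return [items[i:i + size] for i in range(0, len(items), size)]

digits = list("149217761066200119458")  # 21 single digits
chunks = ["".join(group) for group in chunk(digits, 3)]

print(len(digits))  # 21 separate units before chunking
print(len(chunks))  # 7 units after chunking
```

The capacity gain is only apparent: each three-digit group must itself be a familiar unit, which is why Miller notes that "a great deal of learning has gone into the formation of these familiar units."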
Anderson (1982) indicates that when more items are held active in working memory, less information will be activated about each, and recognition time will be slower. Initially, according to Anderson, novice subjects report feeling overwhelmed by complex tasks and consumed with simply keeping up with task demands because they have no sense of the task's overall organization. With practice, they start to see the structure of the task, so working memory improves with practice. Practice until a skill becomes automatic reduces the load on working memory, since the components of the skill no longer need to be held there.

Long Term Memory

Shavelson (1974) describes long term memory as a permanent entity with unlimited capacity. On that, there appears to be consensus. It is, further, a complex network with a relational structure. This structure, as well as the functions of long term memory (and of short term and working memories), are still topics of considerable debate. It is to these discussions that we now turn our attention.

Structure and Function of Memory

From the viewpoint of Friendly (1974), the organization of memory is both structural and functional. That is, there exists a structure of components related to the format, arrangement, and interrelationships among items in memory, and functional components determine how the structure is used to retrieve information. He, Gagne, and Glaser (1987) all support Tulving's hypothesis (1972) that it may, in fact, be useful to consider two separate types of memory, episodic and semantic. Episodic memory consists conceptually of personally experienced events occurring in particular places and at particular times. This would include memory of space, time, and events. Semantic memory would include propositions, meanings, rules, procedures, and organized domain specific knowledge.
These two memories could be thought of as reflecting the operations of different encoding, storage, and retrieval processes operating on a common structural data base.

MacKay (1982) describes memory as a hierarchically organized network of nodes, including conceptual systems, phonological systems, and muscle movement systems. Undefined connectors link these nodes into the network. According to MacKay's theory, when a particular node is activated, it primes all nodes connected directly to it. The strength of these connections varies, but practice (repeated activation) increases the strength of linkages. Syntax nodes ensure that nodes are activated in the correct sequence, and timing nodes determine the temporal organization of output. The ability to generalize during problem solving depends on the existence of shared nodes connected by multiple linkages.

Likewise, Shavelson (1974) describes memory as a hierarchical structure consisting of nodes (concepts) and links (relationships among concepts). In his view, concepts appear to be organized by a structure that allows inferences in storing and retrieving information. The goal of problem solving is to retrieve adequate and pertinent information from long term memory. The sequence of problem solving, according to Shavelson, consists of retrieval from long term memory and then deciding whether the information produced is what was called for. The context established by instructions is an important factor in the problem solving process. Shavelson further states that clusters of concepts revealed in data may indicate an underlying organization of clusters of concepts in long term memory.

Still another approach to understanding the structure and function of long term memory is offered by Gagne and Glaser (1987). They view long term memory as consisting of networks of propositions. That is, content is organized in semantic propositions like subjects and predicates.
New concepts activate an interconnected network of potentially related concepts in long term memory. When more concepts need to be activated through searching the linking network, more time is required to retrieve needed information.

Ericsson (1985) adds to the discussion on the functions of long term memory. He contends that information stored in long term memory is not always retrievable. For meaningful information, storage in long term memory can be rapid. There is some evidence that practice leads to faster storage in long term memory and that memory skill can be improved by improving the relevancy of tasks so that they can be encoded in a meaningful way by learners. Some of these ideas will be explored in greater detail throughout the review.

To summarize thus far, there appears to be general agreement among various information processing theorists that 1) the components of long term memory are interrelated in some fashion, 2) long term memory is arranged in some sort of structure, be it hierarchical, semantic, episodic, and/or otherwise, and 3) the organization of long term memory is related to the ability to retrieve information.

From this portion of the literature we might begin to ask questions about possible differences in the organization of memory of expert and novice instructional developers. It is likely that the memories of both experts and novices consist of interrelated components which are arranged in some sort of structure, and that this arrangement is related to differences in their abilities to retrieve information.

Topics Related to Information Processing Theory

Various topics, while not always necessarily tied directly to information processing theory, seem both complementary to the theory and pertinent to this study. Those topics include problem representation, chunking and schema theories, and control strategies. Because this study is focused on problem solving, it might be useful to begin this section with an outline of a frequently cited classical problem solving model.
Rossman (1931) delineates the following as steps of the problem solving process:

1. A need or difficulty is observed.
2. The problem is formulated.
3. Available information is surveyed.
4. Solutions are formulated.
5. Solutions are critically examined.
6. New solutions are formulated.
7. New solutions are tested and accepted.

According to information processing theory as outlined above, the first step in Rossman's model would correspond to activity in the sensory store. Items 2 through 7 would correspond to activity among short term, working, and long term memories. Let us begin by assuming that the need or difficulty has been observed, thus focusing first on the formulation of the problem.

Problem Representation

Consistent with information processing theory is Neijer and Riemersma's assertion (1986) that the problem space is constructed first through encoding of the stimulus. An internal representation of the problem space is constructed and knowledge relevant to the task is retrieved. Knowledge is then transformed and identified as to its extent of familiarity and usefulness. Finally, heuristics or other learned solution methods are applied.

Anderson (1982) describes the process this way: general problem solving methods are applied with knowledge to generate task appropriate behaviors. These methods emerge in response to the instructions in the problem statement and are used to break the problem into subproblems. Then subgoals identify what to search for in a match for solution from long term memory. The task is one of discrimination and depends heavily upon what has already been learned and how it is organized. In both articles, readers are cautioned that internal representation can influence the problem solving process which follows.

This caution is reiterated by Glaser (1984) and Gagne and Glaser (1987). They believe that the relation between the structure of the knowledge base and the problem solving process is mediated by the quality of the problem representation.
The quality, completeness, and coherence of problem representation are determined by the extent of knowledge available to the problem solver, as well as by the way that knowledge is organized. In other words, the way a problem is interpreted and solved is affected by one's understanding of the problem.

According to Gagne and Glaser (1987), individuals develop these representations while performing the given task. In part, the representations deal with the components of the tasks, or how, for example, various involved concepts interact. Multiple schemata (perhaps networks) can be accessed in order to accomplish this. Domain knowledge affects the types of representations constructed, and when more knowledge is retrievable, more possible inferences can be generated.

Furthermore, the concepts retrieved may be constrained by problem instructions (Shavelson, 1974; Ericsson & Simon, 1984). That is, even minor alterations in instructions to the performer of the task can result in a change in perception of the problem and, thus, a change in problem solving performance.

Problem formulation could be an important factor to consider in this study. It may be that the expert and novice groups will spend differing amounts of time thinking about the problems before beginning to verbalize their solutions. Instructions will be consistent across subjects and across problems, since there is evidence that alterations in instructions can result in a change in the perception of the problem. It may also be important to think about how differing amounts of knowledge and its accessibility might affect differences in the performance of the expert and novice groups.

Chunking and Schema Theories

During the earlier discussion of short term memory, it was noted that by chunking units of information, the capacity of short term memory may be increased. Chunks have been described as subpatterns of information (Reitman, 1976) and as groups of pieces (Egan & Schwartz, 1979).
Miller (1956) described the chunk this way:

The contrast of the terms bit and chunk also serves to highlight the fact that we are not very definite about what constitutes a chunk of information. . . . We are dealing here with a process of organizing or grouping the input into familiar units or chunks, and a great deal of learning has gone into the formation of these familiar units (p. 93).

Nonetheless, there is some debate about the chunking hypothesis because one of its troublesome aspects is the difficulty involved in determining chunk boundaries. Chase and Simon (1973) used long pauses between the verbal statements of chess players to segment verbal protocols for data analysis in a study of perception in chess. They hypothesized that long pauses would correspond to boundaries between successive chunks. This hypothesis has since been further tested by Reitman (1976) and by Egan and Schwartz (1979), who both conclude that memory chunks probably overlap, and that if interresponse times are meant to represent true chunk boundaries, the matter may not be so simple. The organizational units of subjects are thought to overlap, so without knowing their organization prior to data analysis, it is unlikely that the true boundaries of their chunks can be determined.

The difficulty of precisely describing the boundaries of chunks aside, the theory of chunking has potential to aid in the understanding of information processing. Ericsson and Simon (1984) contend that since short term memory is limited, the amount of information subjects can hold there is related to how much information they can encode into a chunk (p. 185). Chase and Simon (1973) state that the function of short term memory is to hold a chunk label that allows chunk content in long term memory to be located and evaluated. The label, then, allows recovery of the elements of the chunk from long term memory.
They hypothesize that chess skill may be reflected in the speed with which chunks are perceived, as well as in the size of the chunks in the memory task. From their data, they conclude that perhaps subjects recall their larger memory chunks earlier in a problem solving situation, or it could be that recall interferes with short term memory, causing large chunks to be broken into smaller chunks. There is evidence (Simon, 1974) that total learning time is proportional to the number of chunks which need to be assembled.

Dreyfus and Dreyfus (1986) dispute the entire notion of chunking. They believe it is more plausible that expert chess players recognize and respond to whole positions:

. . . for Simon chunks such as a standard castled king's formation are defined independently of the rest of the position. A configuration that didn't quite fit the description of a chunk, but in a real chess position played the same role as the chunk, would not count as such. But chess players can recognize the functional equivalence of configurations that don't fall under a single definition. For example, in some cases a configuration would count as a standard castled king's formation even if one pawn were advanced, but in other cases it would not (p. 34).

This argument brings us back to the issue of the definition of chunk boundaries.

Another way of looking at the organization of an individual's knowledge is through schema theory. Schema theory, though not so concerned with the amount of information that can be held in short term memory, does offer an explanation of how that information may be held in and accessed from long term memory. Glaser (1984) describes a schema as a modifiable information structure. It represents generic concepts which are stored in memory. As incoming information is perceived and encoded, schemata from long term memory are retrieved and compared with information held in short term memory. The schema is then accepted, rejected, modified, or replaced.
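Glaser's retrieve-compare-revise cycle, as summarized above, can be sketched as a toy procedure. This is a hypothetical illustration, not a model proposed by any of the cited authors: the representation of schemata as feature sets and the overlap thresholds are arbitrary assumptions made for the example.

```python
# Hypothetical sketch of the cycle Glaser (1984) describes: retrieve
# candidate schemata, compare each with incoming information, then
# accept, modify, or reject. Schemata are crudely modeled as sets of
# features; the accept/modify thresholds are arbitrary.

def match_schema(incoming, schemata, accept=0.75, modify=0.4):
    """Return (schema, action) for the best-overlapping schema."""
    def overlap(schema):
        return len(incoming & schema) / len(incoming | schema)
    best = max(schemata, key=overlap)
    score = overlap(best)
    if score >= accept:
        return best, "accepted"
    if score >= modify:
        # Modify: fold the new features into the retrieved schema.
        return best | incoming, "modified"
    return None, "rejected"  # no adequate schema was found

known = [{"learners", "objectives", "media"}, {"budget", "schedule"}]
schema, action = match_schema({"learners", "objectives", "evaluation"}, known)
print(action)  # the closest schema is revised rather than accepted as-is
```

The point of the sketch is only the control flow: an incoming situation is interpreted through whatever stored structure overlaps it best, and a poor match forces either revision of the schema or a search for a more general one.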
According to Gagne and Glaser (1987), schemata consist of knowledge which is organized so that concepts and propositions are retrieved in terms of a more inclusive concept. Problem solving is based on knowledge, and people understand and think about new problems in terms of what they know about similar past problems. New information is more easily retained if possible organizational structures are deliberately linked to previously learned information structures (Glaser, 1984).

Gagne and Glaser (1987) explain how the process works. When a new problem is presented, information regarding previously experienced similar situations is used to interpret the problem. If information essential to interpretation is not available, another, more general schema will be accessed. This may cause solution finding to be both more difficult and more time consuming. It is the organization and structure of schemata that allow knowledge to be found in memory. Inability to find an appropriate solution to a problem may, therefore, be due either to lack of knowledge or to lack of access to knowledge.

The literature from chunking and schema theories forms the basis for the question in this study about how experts and novices differ in the way they categorize information during problem solving.

Cognitive Strategies

Numerous authors have explored yet another dimension of memory, called cognitive strategies. These are defined by Gagne and Glaser as processes of control that manage the processes of attending, pattern recognition, learning, remembering, and thinking, and they are thought to contribute to performance quality (1987, p. 66).

Silber (1984, p. 520) suggests that this internal process is used by instructional developers in the following ways:

1. Gather, analyze, and synthesize information from the problem into a coherent statement and evaluate it in terms of reality.

2. Develop a cognitive structure for the content related to the problem.

3.
Analyze and restructure that content so that it retains both the integrity of the content and the learning and instructional principles to be applied to it.

4. Evaluate this restructuring for adequacy and accuracy, and restructure again if necessary.

5. Translate the content into other forms of communication.

Harmon and King (1979) offer a more extensive listing of cognitive strategy skills: use of models; use of relationships between two or more variables; generation of hypotheses; thinking in abstract terms; awareness of one's own reasoning; verifying the validity of conclusions; abstracting a pattern typified by a problem and using knowledge about the pattern to aid in solution; predicting; considering what exists as well as latent possibilities; and coordinating two or more disparate referential systems.

Artificial intelligence researchers have also explored this issue of cognitive strategy. The Hayes-Roth and Hayes-Roth model (1979) theorizes that the data structure includes:

1. Executive processes that determine the features of planning. They allocate cognitive resources, prioritize, focus, and control cognition.

2. Meta-plans that help decide how to approach the problem, including definition, use of models, and the evaluation of each.

3. Plan abstractions that characterize the attributes of plan decisions. These attributes include intentions, schemes, tactics, and strategies.

4. A knowledge base that includes information that might affect the planning process.

They contend that this planning is opportunistic and multidirectional, and that it may alternate between low levels of thinking and abstract levels as the results of planned actions are extrapolated and updated.

The Anderson model (1982) labels cognitive strategies (as defined here) control productions. A control production specifies when a cognitive act should take place as well as what action should be taken. The production must match the information which is active in working memory.
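Anderson's control productions, as characterized above, resemble condition-action rules that fire only when their condition matches the current contents of working memory. A minimal sketch follows; the specific rules and working-memory items are invented for illustration and are not drawn from Anderson's work.

```python
# Minimal sketch of a condition-action production system: a production
# fires only when every element of its condition is active in working
# memory. The rule contents below are invented for illustration.

productions = [
    # (condition: items that must be active, action to take)
    ({"problem stated", "goal unclear"}, "formulate subgoals"),
    ({"problem stated", "goal clear"}, "search long term memory"),
]

def step(working_memory):
    """Fire the first production whose condition matches; else None."""
    for condition, action in productions:
        if condition <= working_memory:  # condition is a subset of WM
            return action
    return None

print(step({"problem stated", "goal unclear"}))  # a production fires
print(step({"goal unclear"}))                    # no production matches
```

The match requirement is the essential feature: because the condition must be satisfied by what is currently active in working memory, the limited capacity of working memory directly constrains which cognitive acts can be selected at any moment.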
Pirolli and Anderson (1985) state that people rely on examples to aid in the solution of new and difficult problems. Analogizing new solutions from examples produces new productions that generalize to other problems.

This study of expert and novice instructional developers is, in part, a study of patterns of behavior. The design, which is explained in Chapter 3, is qualitative and allows the researcher to begin to investigate the problem in a manner which goes beyond the counting of instances. In this study we should report what is seen and note whether the evidence supports the notion that instructional development competence involves more than the application of models and techniques.

Summary

In summary, there is general agreement that problem representation is a critical first step in the problem solving process. It is affected by the extent of the knowledge base and how that knowledge base is organized for retrieval. The process of working with problem representation may involve comparison between the problem space and long term memory contents, breaking the problem into subproblems, establishing and prioritizing problem solving goals, and special mechanisms for accessing the long term memory network.

It is also possible that information is chunked during problem solving in such a way that short term memory is not overloaded. It is difficult, however, to reliably establish the boundaries of chunks because they seem to overlap in memory. It could be that labels are used to match information from the problem to related old information in long term memory.

It has been proposed that certain control features of memory establish how a problem will be solved, including higher level thinking skills. These control features are called cognitive strategies.

Expert/Novice Literature

To better understand how instructional developers organize their thinking during problem solving, we now look to literature
related to expert and novice performance. While the studies reported here cover a range of problem topics, from skill in chess to teaching, numerous similarities in performance are noted. This review begins with a chronological look at research studies and ends with an accounting of some of the related theoretical literature.

Studies

Chase and Simon (1973) report studies of the perceptual structures that chess players perceive. Three players, ranging from master to novice levels, were asked to reproduce a chess position in plain view and to reproduce a chess position after viewing it for five seconds. Pauses during glances at the stimulus position were used to segment the verbal protocol, and the size and nature of the segments were then analyzed.

Expert performance was judged to be superior to novice performance for meaningful but not random patterns. This ruled out the possibility that expert performance could be attributed to superior memory capacity. The authors conclude that expert performance suggested a greater ability to perceive patterns in the structure of the various positions and, thus, a greater ability to encode the problem by chunking. Chase and Simon (1973) indicate that one key to understanding expertise is its more immediate perceptual processing during the initial stages of a memory task.

In their study, expert recall produced more and larger chunks, which were measured and bounded by pauses between responses. Chase and Simon hypothesize that short term memory may be more than a linear list of unrelated chunk slots. It may, in fact, be that the expert has a way of organizing that allows more chunks to be held in short term memory.

The Reitman study (1976) of skilled perception in a game called Go replicated the Chase and Simon study above and examined in greater detail the technique of partitioning recall on the basis of pauses between responses. Go is a two-player board game where the object is to gain territorial control of units on a grid.
There is a hierarchical nature to the pieces in Go board configurations (p. 339). A master Go player and a Go beginner were videotaped while performing two board reproduction tasks, one after five seconds of study time and one after an additional five seconds. This sequence was repeated until the subject reproduced the entire pattern correctly. Successive glances at the stimulus were again used to reflect true chunk boundaries.

The master Go player outperformed the beginner for meaningful but not random patterns. On further analysis, Reitman found that chunks tended to overlap. The technique of using pauses to bound the data into chunks was limited to patterns that could be partitioned into a linear set of chunks, and to situations in which retrieval and overt recall of each chunk is completed before retrieval of the next chunk.

According to Reitman, the pause-between-responses technique is based on four assumptions: 1) elements of patterns are stored in chunks, and each chunk has a label that refers uniquely to its elements; 2) if chunks are part of a hierarchy, they are nested, thus no element or subchunk is a member of more than one chunk at a higher level; 3) all elements of a chunk are recalled before recall of the elements of another chunk; and 4) pauses reflect recall of chunks at one consistent level in the nested hierarchy. Given the results of this study, Reitman suggests that before this technique can be used reliably, more research is needed to determine the possible structural organizations experts may have and how those structures are used during the problem solving task. This study of expert and novice instructional developers, by virtue of examining the categories used during problem solving, contributes additional information to Reitman's findings.

Egan and Schwartz (1979) conducted three experiments to explore memory of symbolic circuit drawings using skilled electronics technicians and novice subjects.
The first experiment involved a skilled technician who was given copies of fifty electronic circuit drawings about three months before the recall task. During the task, he was asked to indicate meaningful groups of symbols in each drawing by circling and labeling the symbols that served a common function. The subject was told that functional units were allowed to overlap or be nested. He was also encouraged to give a detailed description. The subject reportedly attempted to systematically recall organizational units. Egan and Schwartz concluded from this that the subject's recall was organized by the functional groups he had identified previously.

In the second experiment, six skilled subjects and six novice subjects were given practice trials and then asked to participate in meaningful recall, random recall, and construction tasks. Half the subjects were given five seconds of study and half were given two seconds. The skilled technicians recalled more than did the novices for the meaningful tasks. Response times within and between chunks were not apparently related to expertise, though on the construction tasks the skilled subjects related different chunks together. Skilled subjects initially produced larger chunks than did the novices. Technicians used the structure of the drawings, attempting to retrieve symbols systematically, while novices did not.

The third experiment used six electrical engineering majors and six high school students as subjects. After three trials, they were asked to complete three sets of two meaningful recall tasks and two construction tasks. Study times were increased across sets from 5 seconds to 10, and lastly to 15 seconds. The results of this experiment showed that increased study time aided recall. Small but significant increases in the size of recall chunks were noted for both groups.
Egan and Schwartz hypothesize that since experts identified overlapping chunks, they may be capable of identifying a conceptual category for the entire drawing, and they may systematically retrieve elements using a generate-and-test process. Because novices have learned fewer chunks, they perceive and recall less efficiently. The authors further speculate that chunks are conceptual rather than perceptual. If experts know the conceptual category, it would seem that they would "flesh out" details rather than remember entirely different and distinct parts.

A study of the components of skill in bridge was conducted by Charness in 1979. Twenty bridge players representing a range of skill levels were asked, in a series of tasks, to plan the play of a contract, bid rapidly, and recall briefly presented bridge hands. In the first task, subjects were read the contract and the opening lead. Following the 20th bid, they were asked to recall as many of the problem hand's cards as possible. Their responses were tape recorded. On this task, master players achieved better solutions, and they did so more rapidly than the less skilled players. Charness noted that this may be due to expert players' recognizing the appropriate strategy for a hand. Their encoding of the problem may generate a plausible line of play. It could be that novice players lack skill in encoding the problem.

The second task involved the projection of bridge hands onto a screen and required subjects to respond as quickly as possible with an accurate opening bid following the onset of a projected slide. Data from this portion of the study indicated moderate support for the idea that encoding differences characterize skill differences, since age was not a significant factor in latency to respond.

The final task involved recall of pairs of structured and randomized hands after viewing each in a set of slides for five seconds. Unlimited time was allowed for recall.
The procedure was repeated until either all cards were identified or five trials had elapsed. Performance on recall of the randomized hands was unrelated to level of skill, age, frequency of play, or number of errors on the first trial. The low and medium skill groups remembered fewer cards than did the high skill group for the structured hands. This finding is consistent with that of both the chess and Go experiments described earlier. Charness explained the results by stating that skilled players seem to have more patterns stored in long term memory. These can be activated to encode new hands. The ability to activate efficient problem representations may free processing capacity for search of long term memory.

Charness reiterates earlier hypotheses that experts do not necessarily have a larger memory span. They may, however, have more patterns in long term memory which can be activated to encode new bridge hands. Experts see the problem differently. The distinguishing difference between expert and novice groups seems to be the capability of experts to classify instances of problems into appropriate categories--to recognize patterns. Effective strategies may be associated with categories in that, if pattern recognition does not trigger a solution, it does help reduce the problem space so that it is more manageable.

In 1982, Chi, Glaser, and Rees conducted a series of studies to investigate expertise in problem solving in physics. Verbal protocols were collected. The first study involved having subjects (two experts and two freshman physics majors) solve five physics problems. Solution times, number of quantitative relations, chunks of equations, and number of diagrams generated were the measures used to gather quantitative data from the protocols produced. The novices solved the problems as fast as the experts, but produced more errors as well. There appeared to be no systematic differences in the number of quantitative equations generated by the groups.
No clear evidence was produced that experts generated chunks of equations. In fact, the novices generated a greater number of relations in close succession. While there were apparent individual differences in the number of diagrams generated, there were no significant differences for expert and novice groups. Chi et al. found the quantitative measures to be confounded with individual differences and the strategies adopted by each of the problem solvers.

Next, eight advanced Ph.D. students from physics and eight undergraduates who had completed a semester of mechanics were asked to categorize 24 physics problems on the basis of similarities in how they would solve them. Novices were found to categorize on the basis of surface structures (objects referred to in the problem, key physics terms, or physical configuration). Experts appeared to classify by physics principles or laws governing solution of the problem. This finding was further borne out in a specially designed set of twenty problems to test the hypothesis that novices are more dependent on surface structures.

The next study asked sixteen subjects to sort forty problems according to similarities in how they would solve them. Next they were asked to subdivide each group and to continue until they no longer wished to make any further combinations. The basic categories of the novices corresponded to the subordinate categories of the experts.

For the next study, four experts and four undergraduates were asked to review a chapter in a physics text for five minutes and then to summarize out loud its important concepts. The book was available during this fifteen minute task. In this task, experts, in general, made more complete statements about physical laws and appeared to have more complete information. To this point in their studies, Chi et al. observed that novices were deficient in three areas.
First, they made errors only when they generated incorrect inferences or failed to generate them at all during initial encoding. Second, their knowledge bases seem to be organized differently from those of experts. And finally, they lack a certain fundamental knowledge of physics problems.

For their next study, two experts and two novices were presented with twenty prototypical physics concepts, and were asked to tell everything they could think of about each and how a problem involving the concept might be solved. They were given three minutes. It was found that some knowledge was common to subjects of both skill groups. They both demonstrated knowledge of physical configurations and properties, but experts had additional knowledge based on physics laws. Experts' repertoires contained explicit procedures and explicit conditions for their application, as well. The contents of expert and novice schemata, according to Chi et al., are different. Experts possess additional knowledge which may activate higher level schemata. Expert schemata contain more procedural knowledge as well as knowledge about the conditions of applicability.

Next, two experts and two novices were to read a problem and think out loud about the basic approach they would take to solve the problem. They were also to state the problem features that led them to their choice. There was almost no overlap in the features mentioned by novices and experts. Novices mentioned literal objects and experts identified features at a higher level of abstraction.

Finally, six experts and six novices were asked to judge (using a 1-5 rating) the difficulty of a set of twenty problems. They were asked to circle key words that affected their judgment and to explain how the key words helped them reach their decision. In general, the experts were more accurate at judging the difficulty of the problems. Conclusions reached by Chi et al. based on this series of studies include: 1.
Experts seem to represent problems in terms of real-world mechanisms. This could be either a basis for solution generation or a means for checking for errors. It does permit inference about features and relations not explicit in the problem statement.

2. Experts use a forward working strategy which employs variables given in the problem. Novices work backwards from the unknown. It may be that experts have existing routines for certain kinds of problem solving situations. It could also be that experts store subroutines for basic problem types as well as internal representations that permit inferences which allow the problem to be simplified. Their solution process could also be schemata driven in that problem representation accesses a repertoire of solution methods. Novices, on the other hand, may be data driven, treating variables as literal symbols. Their schemata of problem types may be less complete.

3. During initial problem analysis the expert tries to understand the problem by constructing a representational network containing elements of the problem. The quality of this representation is probably determined by the extent of the knowledge base available for that kind of problem and how it is organized. It could be that novices have poorly formed, incomplete, or nonexistent problem categories for some problem types. Chi et al. support the notion that problem schemata exist and are unifying knowledge that likens disparate problems with some underlying feature.

For the next series of expert/novice studies we turn to the work of Gaea Leinhardt and Donald A. Smith. Leinhardt (1983) reports a study of routines in expert math teachers' thoughts and actions. Two expert and four student teachers were observed over a three and one-half month period. Interviews were conducted, and videotapes of classes were made. Data were transcribed, coded, and analyzed. It was found that experts used a well-specified but flexible agenda to manage their math classes.
This included sequence of activities, general goals, alternative strategies, and routines. They worked from a core of actions, a familiar grouping of behaviors, but used a large repertoire of activities. Experts' structure of activity was described as being fluid. They noted and used information about student progress with the agenda. Novices were less able to obtain and retain information related to the agenda and had difficulty maintaining its control. Novices repeatedly changed the pattern of how things got done, which produced a jerky, non-progressive structure of activity.

Also in 1983, Leinhardt reports a study of novice and expert knowledge of individual student achievement. Eleven novice teachers and eleven expert teachers were observed during reading instruction in classrooms for two years. From these observations, four experts and three novices were selected to use a standardized achievement test and report whether a child had sufficient instruction to get each item correct or not. Subjects were asked to think out loud on this task and verbal protocols were recorded. These were coded and analyzed by themes which emerged from the data.

The novices' comments were more fragmented or simply didn't deal directly with the problem. They seemed to have little grasp of why or what to do about data on how children will perform. They seemed driven by the goal to respond and thus developed no consistent plan of attack. Novices moved quickly to judgments. While they had knowledge necessary to complete the task, they appeared unaware of its relevance. Leinhardt states that the most powerful difference observed between expert and novice groups was how the task was conceived. For the experts it seemed that basic schemata may have been called up to assess the task, the curriculum, and the students.

Leinhardt and Smith (1985) examined the relationship between expert teachers' classroom behavior and their subject matter knowledge.
Four expert and four novice fourth grade mathematics instructors participated in a three-year study. In addition to interviews, observations, videotaped lessons, planning and evaluation of lessons, and fraction knowledge, subjects were also given card sort tasks on math topics. Transcriptions of the protocols were the data base. Multiple forms of analysis were utilized, ranging from determining consistent patterns to semantic net representations of the text materials. In their study, experts exhibited a more refined hierarchical structure to their knowledge. That is, their knowledge was more elaborate, with different levels of subject matter knowledge. Novices produced a more horizontal structure with shallow knowledge in more category systems. Novices exhibited a less complete knowledge base. Leinhardt and Smith listed multiple representations, understanding the function of basic principles, and multiple linkages across concepts as components of competency seen in their study.

In a 1986 report of studies conducted at the University of Arizona, Berliner discusses differences between expert and novice teachers. His article is a review of findings produced during an ongoing project which studied teacher behaviors. It is interesting to note that subjects were labeled as expert, novice, or postulant. These studies tend to confirm numerous findings from other portions of this review.

In one study, a simulation was conducted where subjects were asked to look over records of a group of students' tests, textbooks, and other student information prior to teaching the class. They were asked to report their thinking. Expert teachers seemed to know what to expect in the way of knowledge and skills of these students, and virtually ignored some parts of the information provided, stating that they intended to negotiate their own relationships with the children.
(Knowledge about students and how they can be expected to behave influences how subject matter will be presented and affects organization and management of the classroom.) Novices, on the other hand, acted like they needed to make sense out of all the information provided.

In another study, subjects were briefly shown a slide of a classroom and asked what they had seen. The expert teachers made inferences about what was going on in the classroom. It appeared that they were applying their knowledge base to make sense of the classroom. Novice and postulant groups described the slide of the classroom in a literal way, focusing on surface characteristics.

The next study involved experienced and novice teachers of the gifted. Subjects were asked to respond to realistic scenarios about gifted children. Again, the experts responded to the problem at a higher level of abstraction, while novices responded to surface characteristics.

Other consistent findings reported by Berliner include:

1. Experts appear to have quick and accurate recognition of patterns during initial stages of problem solving. This appears to act like "schema instantiation." Recognition of patterns may decrease the cognitive processing load.

2. Experts tend to take longer to examine the problem, build representations, or to think through initial strategies.

3. Experts seem more sensitive to task demands and to the social structure of the job situation.

4. Expert teachers appear to be opportunistic planners. They are quick to change strategies if indicated by the situation.

5. Experts are better at anticipating and generating contingency plans due to a better understanding of available options.

Berliner concludes by saying that possession of a large store of domain-specific knowledge is characteristic of every kind of expert. He argues, as well, that the cognitive processes required for classifying problems and generating solutions are the same for every kind of expert.
Lengthy experience, according to Berliner, is part of expertise.

Theoretical Approaches to Expertise

Numerous reviews of expert/novice studies have taken the various findings from research and begun to postulate about relationships between those findings and various theories of cognition. A summary of some of those more theoretical approaches to expertise follows.

Problem representation was frequently cited in the expert/novice research as a key to performance differences. Chase and Ericsson (1981) determined that the principles of skilled memory include encoding through meaningful associations, retrieval cues associated with memory encoding, and practice which results in speeded encoding and retrieval processes. In 1984, Ericsson and Simon add that expert recognition processes are "debugged" during extensive learning experiences. Furthermore, the expert has a systematic process available for search for solution. Glaser (1984) asserts that novices are limited by their inability to infer further knowledge from literal cues in the problem statement. For experts, these are tightly connected schemata. Novice schemata may lack knowledge of principles related to the problem at hand and their application. Gagne and Glaser (1987) focus on the ability of experts to chunk in complicated situations. Experts also start with more accurate hypotheses. Since problem representation guides retrieval of appropriate solution procedures, these abilities are of critical importance. Anderson (1982) sees the process of novices' search of problem space as one of trial and error and that of experts' as more selective. He relates this ability to what he calls productions, which are controls or specifications of what needs to take place. Dreyfus and Dreyfus (1986) characterize the process of problem representation as one of judgment. In their opinion, seldom is one complex problem exactly like another.
So, the individual contemplates the differences between situations and attempts to reduce uneasiness about those differences. S/he doesn't calculate a solution strategy by formula.

Thus, while the apparent phenomenon of problem representation is regularly cited as a probable difference in expert and novice performance, how it works remains unclear.

It may be useful to briefly investigate an assertion of Ericsson and Simon (1984) that some perceptual and cognitive processes may be thought of as automatic. They state their belief that with increase in experience with a task, the task may move from being cognitively controlled to automatic status. This is not only a factor to be considered when discussing differences between expert and novice groups and what actually takes place during problem solving, but also may affect what is verbalized during reports of thinking. What is available for verbalization to the novice may not be available to the expert.

Dreyfus and Dreyfus (1986) state that for experts, "No rules or principles are used to arrive at conclusions" (p. 40). In their view, experience aids experts in perceiving similarity in situations. They develop a goal directed procedure of decision making and use problem patterns without breaking the problem into components. They intuitively organize the task during analysis. Experts are unaware of decision making as it occurs because it is accomplished not by rules, but by experience. Experts deliberate but do not calculate. They reflect on their intuition, grouping situations and actions. This results in a fluid performance. "A portion of the mind is thus responsible for the fine tuning or disaggregation of current memories for more effective guidance of future behavior" (p. 40).

Glaser, too (1984), concedes that "effective thinking is the result of 'conditioning' knowledge--knowledge that becomes associated with the conditions and constraints of its use" (p. 99).
In essence, expert knowledge of problem solving constraints and procedural conditions contributes to the effectiveness of thinking. Glaser goes on to say, however, that expert behavior is more complex than stimulus response association. Expert knowledge is organized around principles and abstractions. These principles and abstractions are derived from knowledge of the subject matter and knowledge about the application of what is known.

Most of the literature in this portion of the study has focused on how experts organize and use their knowledge and served as a basis for this study of expert and novice instructional developers. Subjects for the studies were selected on the basis of certain criteria that would allow them to be grouped under either expert or novice categories. In reality, however, common sense tells us that an individual is neither clearly one nor the other. It is entirely possible for an individual to possess expertise in one area and not in another. An instructional developer may, for example, be an expert in conducting interpersonal communications but know comparatively little about program evaluation, both of which might be reasonably grouped under expected capabilities of expert instructional developers. The issue of consistency of components of expertise is addressed in the study reported here.

Some of the studies reviewed earlier (e.g., Ericsson, Berliner, Leinhardt) allude to varying levels of skill demonstrated by expert and novice subjects. Dreyfus and Dreyfus (1986) choose to explain the transition from novice to expert in terms of developmental stages.

Stage 1: Novice. In this early stage of skill acquisition, the individual concentrates on recognizing facts and features relevant to the skill. Rules for deciding how to act on these facts and features are also learned. These basic rules ignore context so they can be recognized and applied regardless of the situation.
So involved in following the rules, novices have no real sense of the overall task.

Stage 2: Advanced Beginner. The advanced beginner gains practical experience in concrete situations and begins to note similarities between novel situations. This enables him/her to recognize and deal with previously undefined facts and to apply more sophisticated rules to context-free and situational factors.

Stage 3: Competency. With more experience, the learner adopts a hierarchical view of decision making. The competent individual chooses a plan to organize a problem situation and then concentrates on the most popular elements. This simplifies the problem and improves performance.

Stage 4: Proficiency. The proficient performer acts rapidly and fluidly, not always from reasoning. Important task elements stand out clearly. The proficient performer has experienced similar situations and memories of them are used to form new plans. Proficient performers still think analytically, but sometimes they seem to have an intuitive grasp because of the similarities they see.

Stage 5: Expertise. Experts do what comes naturally. Their skill has become a part of them and they are unaware of it when they are in the process of problem solving.

This proposed range of expertise is explored in this study of expert and novice instructional developers.

From the expert/novice literature, one senses agreement that experts exhibit superior performance during meaningful tasks, and experts do not seem to have better memories than novices. It is also commonly thought that a major difference in the problem solving behavior of these groups is due to substantial differences in problem representation, which affects the path taken to solution. This process of problem representation may be due to some sort of pattern coding in some kind of chunking process, which may even be automatic.
Regardless of how problem representation occurs, results of studies repeatedly show that experts are better at what they do because their knowledge bases are more complete. They know when to apply specific procedures during problem solving. They have a large repertoire of options which they understand and know how to use. Experts are more systematic about their problem solving and their responses are indicative of a higher level of sophistication. For these reasons, comparisons between expert and novice problem solving performance are examined for time spent, categories stated, and detail of responses in this study of instructional developers.

Competencies of Instructional Developers

Since early in the 1950's, instructional developers have explored the notion of professional certification. The values of delineated competencies for the field have been enumerated by Bratton (1984). A competencies list can serve as a tool for the self-assessment and professional growth of experienced instructional developers. It can provide a common set of concepts and vocabulary which can be used to improve communication among the appropriate individuals and groups. A competencies list can assist in the development of preparation programs in the field, can serve as a basis for potential professional certification, can aid in the identification of qualified practitioners, and can serve as a basis for defining instructional development.

Today, a list of core competencies has been developed. These competencies are explained in greater detail in Chapter 4. Essentially they represent an attempt to describe what a group of experienced instructional developers believes that competent professionals in their field do. It includes skills ranging from the analysis of needs to the use of interpersonal communication skills.
At the 1987 National Society for Performance and Instruction Conference, it was reported that the list of core competencies is being used as a job aid, for formative and summative feedback, as a checklist, for quality control, and as a process guide (Hutchinson, Shrock, Silber, & Stevens, 1987).

In 1981, Wallington argued for consideration of what he termed generic skills. The systems and models approach, he said, emphasizes the cognitive skills of the developer. Wallington stated that competent instructional developers must, in addition to these cognitive skills, possess interpersonal communications skills, be able to work with unfamiliar content, extracting and assimilating it into a logical framework, solve problems, apply principles of the behavioral sciences, and systematically search for related information. Instructional development is an extremely complex process because the developer is dealing with content, interpersonal communications, and the instructional development process all at the same time. Wallington concludes that developers reduce variables down to a manageable number of viable plans of attack in order to handle the complexity.

Silber (1981; 1984) also sees problems with using a cognitive approach as the sole basis for training of instructional developers. This approach focuses on overt activities, ignoring underlying mental processes during performance. The instructional developer must analyze and restructure information before applying models and techniques. Higher level skills approaches assume that the instructional developer needs to use conceptualizing and creative skills as well as cognitive skills.
He explains the process this way:

One may, for example, master the rules of performing a task analysis, but if one does not possess the cognitive strategy underlying ID, one may not know when to use those rules, or one may not be able to assimilate and restructure the associated content into a usable synthesis, or one may be unable to deal with the relationships among the information presented, the task/content analysis procedures, and the relations to be expressed by the finished synthesis. The possession of the underlying cognitive strategy may be the 'individual difference' which explains varying levels of performance--and if it is, it is not a meaningless difference (1981, p. 528).

Silber reports skepticism about whether these underlying skills are teachable. If they are, he urges that they be taught. If they are not, he urges that applicants to instructional development programs be screened for aptitude in these areas. In this study, we examine the verbal protocols of expert and novice instructional developers to determine whether these underlying cognitive strategies are apparent.

Summary

This review of the literature has attempted to focus on those areas most pertinent to a study of differences between expert and novice groups' organization during problem solving. The information processing and related literature was presented to provide a basis for understanding the expert/novice studies which followed. It also provided the theoretical basis for rationale, design, and analysis in this study. The instructional development literature focused on the competencies of instructional developers and raised questions about how these competencies might be taught. The next chapter presents the methodological approach to the study of differences in organization of expert and novice instructional developers during problem solving.

CHAPTER 3

Design of the Study

Overview
The design of this study was focused on investigation of how the organization of knowledge/experience is displayed in the problem solving performance observed in selected expert and novice instructional developers.

In this chapter, the research questions are stated. Next, the qualitative methodology used in the study is explained. Data-gathering instruments, subjects, and procedures that were used during the study are described. Methods of data collection are explained later in this chapter.

Research Questions

The central question is: How is the organization of knowledge/experience displayed in the problem solving performance observed in selected expert and novice instructional developers? To answer this question, five more specific questions were proposed:

1. How do expert and novice instructional developers differ in the sequence they use to work through selected instructional development problems?

2. How do expert and novice instructional developers differ in the time it takes to work through each of the selected problems?

3. How do expert and novice instructional developers differ in the extent of detail they generate working through selected problems?

4. How do expert and novice instructional developers differ in the way they categorize selected problems into units?

5. How do expert and novice instructional developers differ in consistency in sequence of problem solving, time spent working on the selected problems, extent of detail generated, and categories imposed across selected problems?

To answer these questions, subjects were asked to verbally respond to three instructional development problems and to two tasks which required them to sort cards (on which were printed instructional development techniques or models) into categories of their own choosing.

Qualitative Methodology

The decision to use a qualitative rather than a quantitative methodology for this study was based on several factors.
Numerous studies on expert and novice performance, such as those of Chase and Simon (1973) and Chi, Glaser and Rees (1982), used other than classical or modified-classical experimental designs. They asked their subjects to think out loud while solving problems and then used a variety of approaches to analyze the data produced during these sessions. According to Shrock (1984), when the number of variables involved in studying a research question is large, the use of a classical or modified-classical experimental design poses problems related to control of variables and sampling. Furthermore, experimental design is not, by virtue of its reliance on quantifiable outcomes, suited to answering questions related to quality of performance. Shrock concludes, therefore, that classical or modified-classical experimental design is not the most appropriate choice for study of complex human phenomena as in instructional development, learning, or the roles that instructional developers play. Finally, conversations with Clark (1986) regarding the problems of gathering and analyzing useful data to answer research questions in this study of differences between quality of performance of expert and novice instructional developers led to the conclusion that a qualitative methodology would be the most suitable method for gathering information to generate hypotheses. The following reasons led to that conclusion:

1. To take a quantitative approach to the question would be to lose the qualitative nature of the data.

2. An in-depth study of a small sample would provide the kind of data necessary to answer the question.

3. The question was one which sought to generate rather than substantiate hypotheses.

As pointed out by Miles and Huberman (1984a), the range of types of qualitative methodologies is wide and most of this type of research lies somewhere between the extremes of tight, prestructured designs and loose, emergent ones.
All too frequently, methodologies are not described in reports of the research and thus it is difficult to reconstruct and corroborate findings. The following description of the researcher's stance on qualitative methodology and the choice of data analysis techniques should serve to avoid this pitfall.

LeCompte and Goetz (1982), Miles and Huberman (1984), and Glaser and Strauss (1965) agree that qualitative studies should be committed to a faithful, accurate rendition of events to the extent that another researcher, using the same data, could replicate the study to verify its conclusions. They advocate the systematic description of events and analyses, and the discovery and validation of associations among variables. This approach, they argue, increases reliability.

Clark (1986), on the other hand, states that reliability in the rationalistic sense is not the issue in a qualitative study. From the rationalistic approach, establishment of high agreement among different raters would signal an end to the analysis. Instead, the qualitative approach uses a process of "iterative sense-making" where the researcher continually seeks patterns of behavior through multiple approaches to the data in order to demonstrate validity, that what the researcher is seeing is indeed present. There is, in other words, no competing explanation for findings. The evidence is documented, and counterevidence is pursued as well as concrete support for findings. Patterns of events are of major concern.

In general, these two approaches (systematic description of events and analyses and "iterative sense-making") are probably more alike than different. It would seem possible for the researcher to be systematic about analysis while, at the same time, using an iterative approach, remaining open to additional data analysis techniques as they are indicated. This combined approach was that taken by the researcher during this study.
While analysis of the data is described in detail in Chapter 4, the following approaches were taken throughout all aspects of that analysis.

1. According to Miles and Huberman (1984), researcher bias can be controlled by remaining open to disconfirming evidence and verification of evidence. Negative cases help refine hypotheses (1984a). In this study, the questions formed the basis for analysis and findings were reported for subjects for each type of problem (instructional development and card sorting tasks) and each type of analysis conducted. For each example reported, at least one other like example existed in the complete verbal protocol. Negative examples were noted. When conclusions were formed, any example not completely conforming to the conclusion was further considered in terms of competing explanations and these, in turn, were reported. Attention to disconfirming evidence, such as differences in performance from one type of problem to another, adds assurance that artifacts are explained and not hidden in aggregated data. It is in this sense that generalizability of results is affected by methods used in a qualitative study. In contrast, the aggregated mean, the foundation of experimental design, dilutes the influence of the negative example (Shrock, 1984).

2. Triangulation is recommended by Guba (1981), Miles and Huberman (1984), and Shrock (1984). Triangulation is a confirmation of validity through the cross-checking of several sources. Different kinds of analyses are used to provide repeated verification of findings from the data, such as in this study where data are coded in two different ways, when two problem types are considered, or when individual performance is compared to group performance. The use of multiple cases (of each problem type) also provides information related to generalizability of findings in that one can be more certain of findings if they occur under differing circumstances.
In this study, five different problems (cases) were presented to the subjects. Their responses were analyzed from a variety of approaches, such as differences in vocabulary used, use of information given by the problems, and instructional development examples produced during problem solving. Furthermore, the research question related to consistency of subject performance across problems was posed to support the validity of conclusions reached. In essence, the more sources consulted, the more confidence can be placed in the explanation. Validity, to the qualitative researcher, involves checking and rechecking to be sure that what is first seen is really there.

In short, the data for this study were first reduced by coding in a way that would maintain the integrity of the data while eliminating material extraneous to the research questions. (See Chapter 4 for a more detailed explanation of this process.) Then regularities, patterns, explanations, and possible configurations were noted, as were irregularities. New strategies for analyzing the data were added as questions were posed about the emerging regularities. Findings were then verified before final conclusions were drawn.

Verbal Protocol

The verbal protocol in this study refers to a typewritten transcription of subjects' verbal responses to a set of problems. The verbal protocol is considered as evidence that certain cognitive structures and processes exist in the problem solving of subjects. When subjects verbalize during problem solving, they are performing the task while producing the verbalizations. The verbalizations trace information which was attended to by the subjects and thus, indirectly, trace part of their cognitive process (Ericsson & Simon, 1980, 1984).

Instruments

Two different kinds of instruments were used to gather data during this study: a set of three complex instructional development problems and two card sorting tasks. These instruments are discussed below.
Instructional Development Problems

Three typical instructional development problems were written with the intent of maintaining structural similarities across problems. These problems are listed below without specific details. Complete problems can be found in Appendix A.

Problem 1. Imagine that you are chairperson of a committee whose five members are from the teacher education department. You are charged with the responsibility of submitting a proposal to the Dean for the design, implementation and evaluation of a new teacher education undergraduate course in tests and measurement. The course should provide students with experience in selecting appropriate methods of testing, writing test items, and interpreting scores on both teacher-made and standardized tests.

Problem 2. Imagine that you have a position in the Management Training Unit of a statewide food chain. You and the Director of Management Training are charged with the responsibility of submitting a proposal to the executive committee for the design, implementation and evaluation of a new course for trainees and experienced department and store managers. The course is to be entitled "Principles of Supervision" and should include the nature of management, planning, organizing, controlling, performance standards, communication, motivation and improvement of manager effectiveness.

Problem 3. Imagine that you are chairperson of a six-member training committee in an optical company engaged in the production of optical precision instruments. You are charged with the responsibility of submitting a proposal to upper level management for the design, implementation and evaluation of a course for Shop Foremen in the operation of a lens grinding machine which is to be purchased. The course should include set-up, troubleshooting, simple maintenance and how to determine appropriate settings for machine operation.
Essentially, the format for presenting each instructional development problem was the same. The introductory paragraph in each problem was a statement explaining the task. There were eight information points in each problem:

1. Audience - consumers of the instruction.
2. Committee members - those with whom the instructional developer would be working to develop a proposal.
3. Duration of the proposed course - how much time would be allotted for instruction.
4. Prior educational experience of audience.
5. Sequence in a curriculum - what is taught before or after the proposed course.
6. Entering behavior of audience - their prior experience with the proposed content.
7. Time constraints - conditions under which the project will be completed.
8. Audience attitudes toward the course.

The closing paragraph was identical for each of the problems.

The attention given to maintaining the structural similarity explained above was due to research evidence (Mayer, p. 76) that ". . . subtle differences in the way a problem is presented could have vastly different effects on how a subject assimilates the problem and thus on problem-solving performance." This underscores, as well, the reason for using two problem types to gather data for this study.

The problems all focused on planning so that the subject would be faced with providing an overview of the entire instructional development process. This was necessary in order to answer the question about the sequence followed during problem solving.

The problems were realistic in that they portrayed true-to-life types of instructional development problems. The process needed to solve these problems was complex in a hierarchical sense. That is, according to Gagne and Briggs (1974), problem solving represents the most complex mental process and requires use of rules, concepts, discriminations, verbal associations or other chains, and stimulus-response connections.
The problems further allowed for analysis of sequence, time, extent of detail and category units chosen.

The setting presented for Problem 1 was in higher education, for Problem 2 in a retail establishment, and for Problem 3, in industry. These broad setting categories represent the three largest employment groups of practicing instructional developers (Hutchinson & Rankin, p. 29). Problem 3 differed from the other two in that, in addition to cognitive content and affective factors to be considered, the content to be delivered included psychomotor skills.

In this portion of the study, subjects were asked to report their thinking aloud while solving the problems. They were not asked to theorize about their behavior because, according to Ericsson and Simon (1980), verbal reports are most valid and reliable when the subject reports on the contents of short-term memory. Asking the subjects, therefore, to try to explain why they responded as they did would produce a retrospective report, perhaps even based on conjecture, and thus possibly compromise validity and, therefore, reliability as well. Related to the earlier discussion of validity, allowing for a suspected source of conjecture was ruled out in order to keep the data as pure as possible. For this section of the study it was important to gain an overview of behavior in a complex problem solving task. To interview the subjects in order to obtain the contents of short-term memory might influence the direction of the subjects' thinking.

Responses to these problems were analyzed with respect to the research questions related to 1) sequence of problem solving used by the subjects, 2) extent of detail produced in responses, 3) time taken to complete the task, 4) categories produced during problem solving and 5) consistency of performance across tasks.

Card Sorting Tasks

Two card sorting tasks were written with the intent of maintaining structural similarities across problems.
The format of the two card sorting tasks was, therefore, the same. Subjects were asked to sort cards into piles, to label their piles, to further sort and re-label piles, and finally, to elaborate on original pile labels. Subjects were instructed to verbalize their thinking while performing these card sorting tasks. These verbal responses were audiotaped.

The purpose of these tasks was to elicit further information about the sequence of problem solving used by the subjects (research question 1), the extent of detail produced in responses (research question 2), the time taken to complete the task (research question 3), categories produced during problem solving (research question 4) and consistency of performance across tasks (research question 5).

At this point, it should be reiterated that the three instructional development problems were similar in their construction, as were the two card sorting tasks, so that performance differences could not be attributed to instructions to the subjects. Numerous approaches were taken to analyze the data over one set of three complex problems and another set of two card sorting tasks. This was done to confirm validity of findings.

Techniques Sorting Task

The names of 27 instructional development techniques were typed on three-by-five index cards. Subjects were asked to sort the cards into piles according to how they would use the techniques to solve instructional development problems.

Thomas Bennett's dissertation (1983) was the source of techniques used in this sorting task. Based on data he collected from a panel of field experts, sixty techniques (of 108) derived from a list compiled by Gentry (1980-81) were identified as being most important to instructional development.
The list of sixty techniques became part of Bennett's survey instrument, intended for study of the competency level, level of use, value to instructional development, and the degree to which the techniques were being taught in Canadian instructional development and teacher education programs.

For this study, Bennett's original list of sixty (see Appendix B) was abbreviated to twenty-seven. The rationale for abbreviating the list arose during pilot testing, when it was found that expert subjects could complete sorting of thirty techniques in about an hour without becoming either confused or bored. Novices took considerably longer and reported some frustration. It also became apparent during pilot testing of complex problem-solving sessions that the techniques most frequently mentioned were those applicable within a general planning framework. Because later analysis would attempt to relate consistencies between sorting tasks and complex problem-solving tasks, the items used for sorting should be congruent in terms of their fit with the complex problems.

First, a number of items from Bennett's list were combined. Flowcharting, task analysis, task description and decision tables were labeled "flowchart and/or task analysis". Computer search was subsumed by literature search. Interviews and/or observations included interviewing users, appraisal interview, authoritative opinion, observation interview and critical incidents technique. Field test included learner verification and revision. The Likert scale was considered a subset of questionnaire.
Because they had characteristics of being techniques for instructional design at the lesson level, or because they were more useful for implementation rather than planning, the following techniques were deleted: multi-image/multi-media presentation, story boarding, management by objectives, programmed instruction, role playing, standardized tests, micro-teaching, discovery technique, simulation (gaming), computer assisted instruction, behavior modeling, contract plan, Program Planning Budget System, linear programming, in-basket technique, cognitive mapping, shaping, information mapping, Instructional Analysis Kit, and mathetics.

The final list is as follows: brainstorming, card sort, case studies, checklist, content analysis, cost-benefit analysis, criterion referenced measures, critical path method, Delphi technique, discrepancy evaluation, feedback, field test, flowchart/task analysis, formative evaluation, force field analysis, function analysis, Gantt chart, interviews/observations, literature search, long-range planning, needs assessment, nominal group process, objectives, PERT, questionnaire, summative evaluation and technical conference.

Models Sorting Task

For this task, subjects were asked to sort a set of fifteen five-by-seven-inch index cards to which copies of instructional development models had been attached.

There were two criteria for selection of models used in this study. The models selected were required to be applicable to at least a portion of each of the three complex problem-solving tasks. This requirement enabled comparisons between performance on complex problem tasks and sorting of models. The second criterion was that the models be representative of a range of types of models used by instructional developers.
The actual choice of representative models within types was not felt to be critical since, as numerous authors attest, instructional developers routinely adapt models to suit their purposes (Andrews & Goodson, 1980; Gustafson, 1981; Trimby & Gentry, 1984). To see how the models selected have been categorized by these authors and to see the range of types used for this study, refer to Appendix C.

The fifteen models selected for use in this sorting task were from: Banathy; Blondin; Briggs; Davis, Alexander and Yelon; Gagne; Gentry; Gerlach and Ely; Hamreus (2); Havelock; IDI; Lee; Lippitt and Nadler; Reigeluth and Merrill; and Romiszowski. During pilot testing it was found that sorting took longer for models than for techniques. This was expected due to the component nature of the models. Therefore, the task was shortened from twenty models to fifteen so that fatigue would be reduced while retaining a large enough number of models to allow for categorizing.

The purposes of this portion of the study were twofold:

1. To find out whether instructional developers use a common scheme to organize the models and techniques they apply in practice. For example, Trimby and Gentry (1984) have categorized models according to their usefulness in particular tasks. This relates to the research question about differences in how expert and novice instructional developers categorize problems into units.

2. To find out more about the extent of the instructional development knowledge base used by expert and novice instructional developers. This relates to the research question about differences in the extent of detail produced by expert and novice instructional developers during problem solving.

As in the instructional development problems described earlier, the format of the two card sorting tasks was identical so that comparisons could be made without contamination of the data from instructions.
Again, multiple perspectives were taken to analyze the data so that greater validity of the findings would be assured.

Subjects

A list of potential subjects was generated, including both expert and novice instructional developers. From this list, two experts and two novices were selected to take part in the study, based on predetermined criteria. Experts were to be instructional developers holding a doctorate in educational systems development or educational technology. They were required to have experience teaching instructional development, have at least five years' experience in the practice of instructional development in business and/or industry and/or higher education, have published in the instructional development literature, and be respected by colleagues in terms of instructional development processes used and products developed. Novices were to be students of instructional development holding less than a master's degree in educational systems development. They were to have taken at least one course in an instructional development program, to be considering becoming an instructional developer, and to have no full-time work experience in the field of instructional development.

Two novice instructional developers from the Michigan State University Educational Systems Development program were nominated by, and accepted as meeting the criteria for the study by, the researcher and her committee.

Novice 1 had a B.A. degree in anthropology, an M.S. degree in reading, and nine years of teaching experience. She had taken one general graduate course in instructional development and a course in information handling. She had no work experience in instructional development other than that which occurs by virtue of being a classroom teacher.

Novice 2 had a B.S. degree in general science and secondary education. For 13 years, he worked as a seasonal state employee in the parks system.
He had taken three graduate courses in the Educational Systems Development sequence, all of them computer-related courses. He had also taken a graduate class in educational psychology. Novice 2 had no work experience in instructional development.

Two expert instructional developers were also nominated and agreed upon by the committee and the researcher.

Expert 1 had a B.A. degree in English, an M.L.S. in Library Science, and a Ph.D. in Educational Systems Development. She taught library science for ten years. Expert 1 had experience teaching instructional development and as a private consultant to business and industry. She had published in the instructional development literature and was highly regarded as a professional. Expert 1 had approximately five years of experience as an instructional developer.

Expert 2 had a B.A. degree in English/Speech, followed by two years teaching at the community college level. He had an M.A. degree in Communication and a Ph.D. in Instructional Development and Technology. He taught instructional development and had 10 years of experience as an instructional developer at a major university. Expert 2 had published in the instructional development literature and was highly regarded as a professional.

Small sample size is not atypical in qualitative studies. It has been used in numerous studies where thought processes have been traced (Yinger, 1971; Chi et al., 1982). There are, however, almost no guidelines in the literature for determining sample size in studies of this nature. Yinger and Clark (1982) support the use of small samples on the basis that hundreds of observations may be found in the protocols of subjects. Miles and Huberman (1984) state that sampling is affected by research questions and practicality. They advocate sampling across people, settings, events and processes.
This approach was used in this study in that four subjects' performance was sampled for various events which took place while solving five different problems of two different types. Yinger and Clark further report (1983) that it is important to use a variety of methods in the study of problem solving behavior due, in part, to the interactive effects between experimental task and problem solver. The selected design for this study provided a very large number of observations and allowed for control of interactive effects between subject and problem, since three complex problems and two sorting tasks were presented. By posing two problem types, each with multiple problems having similar structures, it would be possible to make stronger statements about findings. We would be more confident about the effects, for example, of problem instructions on the subjects. Granted, there is a trade-off between the loss of experimental control, such as that offered by a random sampling of subjects, and the ability to faithfully represent the experimental task through use of appropriate methodology. Had a large random sample been used, resulting weaknesses of the study would include specificity to the problem solving task, for a different task may produce an entirely different type of response. In that light, study of the researcher's questions seemed most appropriately carried out with fewer subjects but with more problem solving tasks.

Procedures

From the agreed-upon list of subjects, the top four individuals were contacted and asked to participate in the study. All four agreed. Data was gathered from subjects individually, in a small conference room, in the presence of the researcher. For each of the instructional development problems, meetings were spaced at least two weeks apart. This was done in order to minimize the possibility that subjects would recall the details of similarities among the structures of the problem tasks.
The two sorting tasks were completed at the convenience of the individual subjects, but on two separate occasions in order to minimize fatigue.

For each of the five sessions, a written copy of the problem or the sorting task was provided. Each session was audiotaped. No feedback was given to subjects during the instructional development problem solving tasks because it was felt that experimenter responses could affect the direction of the subjects' problem solving. During the card sorting tasks, subjects were prompted only when they neglected to identify a card they were sorting. This was done to ensure that as much data as possible would be recorded. Prompts were limited to the phrase, "Please remember to think out loud," because probes for types of information that subjects do not possess, or probes that provide alternatives, may force subjects to produce reports that are inferential or not related to the actual thought process (Ericsson & Simon, 1980).

Data Analysis

The first step in analysis was to code the data so that it would be in a more manageable form. Each paragraph was considered to be one unit of analysis. Initial partitioning into paragraphs was completed by the transcriber. The resulting verbal protocols were then checked by the researcher by listening to the audiotapes while reading the transcribed verbal protocols. When the topic was changed by the subject, this was considered a signal for paragraph change.

The initial coding standard used was "Competencies for the Instructional/Training Development Professional," compiled by the AECT Task Force on ID Certification in 1981. There were sixteen categories listed as competencies in that article (see Appendix D). Each paragraph was first read and a judgment was made as to the general focus of its content. This served to check whether the proposed coding scheme might be workable.
Next, each paragraph was coded against the "Competencies" standard, and recoded again three weeks later to check for coding reliability. Using the system recommended by Miles and Huberman (1984) for checking reliability in coding during qualitative data analysis,

Reliability = number correct / (number correct + number incorrect)

yielded an overall reliability of 94%. Analysis then proceeded on the basis of the second coding of the data.

From this coded data, two analyses were conducted:

1) The number of lines verbalized by each subject for each code was counted. Any line of the verbal protocol data containing at least one word was counted as a line. This initial analysis of the data was conducted to show for which competency each subject produced the greatest amount of narrative. It would further show the range of categories used by subjects and whether any of the competencies were suggested only by the expert group or only by the novice group. Information from this portion of the analysis was used to help answer questions about differences in how the experts and novices categorized and about the extent of detail they produced during problem solving.

2) Coded data was used to note relationships stated by subjects between the various competencies used by instructional developers. This data yielded information about how instructional developers categorized information and the extent of detail they produced during problem solving.

Following this segment of analysis, the uncoded protocols were checked to find out what information given in the problems was used by the subjects and, if so, in what sequence that information was used. The purpose of this step in the analysis was to gain information related to the research question about sequence of problem solving.

To gather information about the research question related to differences between expert and novice groups in the time needed to solve the problems, verbal protocols were timed.
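The coding-reliability check described earlier is simply the proportion of paragraphs assigned the same code in both passes. A minimal sketch, using hypothetical paragraph codes rather than the study's actual data, might look like:

```python
# Sketch of the coding-reliability check: two passes of paragraph
# coding are compared, and reliability is the proportion of
# paragraphs coded the same way both times.

def coding_reliability(first_pass, second_pass):
    """Reliability = number correct / (number correct + number incorrect)."""
    assert len(first_pass) == len(second_pass)
    correct = sum(a == b for a, b in zip(first_pass, second_pass))
    incorrect = len(first_pass) - correct
    return correct / (correct + incorrect)

# Hypothetical example: 16 paragraphs, with one coding disagreement.
pass_1 = ["needs_assessment"] * 15 + ["objectives"]
pass_2 = ["needs_assessment"] * 16

print(round(coding_reliability(pass_1, pass_2), 4))  # 0.9375
```

With 15 of 16 paragraphs coded identically, the sketch reports a reliability of about 94%, the same order as the figure obtained in the study.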
Timing began when the tape recorder was turned on and ended when the subject said his/her final word. Timing was also conducted to see how much time elapsed between the start of the tape recorder and when the subject first began talking. Finally, the total number of lines produced by each subject for each problem was counted.

Uncoded verbal protocols were also analyzed to find out differences between expert and novice groups in the instructional development vocabulary used while solving each of the problems. The standards used for this analysis were the International Board of Standards for Training, Performance, and Instruction (IBSTPI) list of sixteen instructional design competencies (1986) and the AECT Glossary of Terms (1979). If a term appeared in one of these documents, it was counted as instructional development vocabulary. This search of the data produced information about the research question related to extent of detail used by expert and novice groups during problem solving.

In 1986, the International Board of Standards for Training, Performance, and Instruction published Instructional Design Competencies: The Standards. In this document are listed sixteen core competencies which are proposed as standards for the instructional development practitioner. Under each of the competencies is listed a set of more specific performances which are subdivided into conditions, behavior, and criteria.

Over the years, a group of professionals has worked to develop the IBSTPI list, which evolved from the AECT Task Force on ID Certification (1981). The lists, while similar, differ in several important aspects. The sequence in which competencies appear is changed. Some vocabulary, such as "Write Statements of Learner Outcomes" in the AECT document, is referred to as "Write statements of performance objectives" in the IBSTPI document.
"Sequence Learner Activities" and "Determine Instructional Resources (Media) Appropriate to Instructional Activities" appear only as core competencies in the AECT document. Likewise, "Develop the performance measurements" and "Design the instructional materials" appear only as core competencies in the IBSTPI document. The IBSTPI list of competencies is more detailed in that with each core competency is listed not only specific performances, but those performances are further subdivided into conditions, behavior and criteria.

The last analysis conducted on this portion of the data compared the uncoded verbal protocols to the IBSTPI list of instructional design competencies. Initial codings were disregarded at this point so that all of the data could be considered, especially competencies appearing in paragraphs as secondary to the focus of the paragraph as coded. The IBSTPI list was used as a checklist, an approach suggested by Dr. Kenneth Silber at the 1987 National Society for Performance and Instruction Conference. According to Silber, some business and industrial training departments use the list to aid in the evaluation of training programs. However, rather than simply checking whether or not a particular competency was indicated in the verbal protocol data, those indications of competencies were noted under the core competency categories in the document. It was felt that this would enable a better assessment of the quality of the data, since by this time it appeared that while both groups might, for example, discuss writing objectives, there was a notable difference in the sophistication with which they did so. This segment of the data was analyzed to help answer the research question about differences between expert and novice instructional developers in the extent of detail they produced during problem solving.
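The checklist use just described, filing each protocol indication under its core competency rather than marking a simple yes/no, can be sketched informally. The protocol excerpts below are hypothetical stand-ins, not the subjects' actual data:

```python
# Sketch of the checklist analysis: instead of a boolean check per
# competency, every indication found in a verbal protocol is filed
# under its core competency, preserving evidence (and counts) for a
# later judgment of sophistication. Excerpts are hypothetical.
from collections import defaultdict

# (core competency, protocol excerpt) pairs noted while reading a protocol
indications = [
    ("Write statements of performance objectives",
     "we'd draft objectives stating conditions and criteria"),
    ("Write statements of performance objectives",
     "the objectives should be measurable"),
    ("Develop the performance measurements",
     "then build a criterion-referenced test"),
]

checklist = defaultdict(list)
for competency, excerpt in indications:
    checklist[competency].append(excerpt)

# A simple yes/no check would lose both the count and the excerpts.
for competency, excerpts in checklist.items():
    print(f"{competency}: {len(excerpts)} indication(s)")
```

Keeping the excerpts grouped under each core competency is what makes it possible to compare not just whether, but how well, each group addressed a competency.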
To prepare to analyze the data produced during the techniques and models card sorting tasks, the verbal protocols were first outlined by eliminating all but those words (or numbers designating models) which appeared on cards, and those which defined or labeled techniques or model categories. Following this step, the outlines were diagrammed to illustrate the hierarchy of subjects' organization of the cards. Next, definitions of techniques and models category labels were collated from each of the verbal protocols.

Analyses consisted of six major steps:

1) The number of techniques (or models) cited by each subject as being unfamiliar was counted and then compared across the expert and novice groups in order to help answer the question about extent of detail produced by subjects. Techniques were viewed as nodes, or concepts. In the case of models, labels for groups of models were viewed as nodes, or concepts. Models were viewed as subnodes. Definitions were viewed as links which connected subcategory nodes to their category labels. The node-link approach was used to help conceptualize the data in a hierarchical fashion.

2) For techniques, the definitions given by subjects were compared with Bennett (1983) for accuracy and then compared across groups. For models, definitions were compared with a diagram of the corresponding model for accuracy. Only the information displayed in the diagram was used (as opposed to any available narrative) to make this comparison. Again, the definitions were viewed as links which connected subcategory nodes to their category labels. This analysis was conducted to gain information about how subjects categorized during the task.

3) Definitions were then compared across expert and novice groups to determine the extent of detail generated during definition of techniques and models.
These first three steps were conducted to help answer the research question related to whether expert and novice groups produce differing amounts or levels of detail during problem solving.

4) Next, the diagrams were compared with category labels to find out how cards had been categorized and whether cards had been correctly categorized under labels as they were defined by the subjects. This process was an if-then analysis: if a label node was linked to a subcategory technique (or model), then the subcategory should fit the defining attributes of the link. Contrasts between groups were noted. This analysis attempted to find out if the data contributed to an answer to the research question related to differences between expert and novice groups in the way they categorized during problem solving.

5) Original verbal protocols were examined individually and then across groups to determine broad problem solving sequences. This analysis was conducted to gain information about the research question related to differences between expert and novice groups in the sequence of the problem solving strategies they employed.

6) The audiotapes were timed. Total time spent on the task was measured from when the tape recorder was turned on until the subject said his/her last word. Time to formulate the problem was counted from when the tape recorder was turned on until the subject said his/her first word. The number of lines produced by each subject for each problem was counted as well. One line was counted for any line containing at least one word. This data analysis examined differences between expert and novice instructional developers regarding the time spent to complete the task.

Summary

Two expert and two novice instructional developers were asked to think out loud while they solved three complex instructional development problems and two card sorting tasks. Their responses were audiotaped and transcribed.
These transcriptions were the data base used for a qualitative analysis of differences in organization of expert and novice instructional developers as demonstrated while solving selected problems. Two recurrent themes were present in this analysis. The first of these was the ongoing search for disconfirming evidence. While patterns and possible explanations unfolded, negative evidence was noted. Secondly, triangulation was used as the major mechanism for data analysis in that three instructional development problems and two card sorting tasks were presented to subjects and the data from these were analyzed from multiple perspectives. These themes are the two major ways used to strengthen the validity of findings in this study. Results of this analysis are found in Chapter 4.

CHAPTER 4

Data Analysis

Overview

In this chapter, results of data analysis are discussed. The central question to be answered is: How is the organization of knowledge/experience displayed in the problem solving performance observed in selected expert and novice instructional developers? In order to answer this question, the following more specific research questions were proposed:

1. How do expert and novice instructional developers differ in the sequence they used to work through selected problems?

2. How do expert and novice instructional developers differ in the time it takes to work through each of the selected problems?

3. How do expert and novice instructional developers differ in the extent of detail they generate when working through selected problems?

4. How do expert and novice instructional developers differ in the way they categorize problems into units?

5. How do expert and novice instructional developers differ in consistency in sequence of problem solving, time spent working on the selected problems, extent of detail generated, and categories imposed across selected problems?
Tables 1 and 2 provide an overview of the various elements contributing to the data analysis. Forms of data refers to the verbal protocols as either coded or uncoded and left in their original form. The numerous approaches to data analysis were referred to as indicators. The final column, question, references the reader to the corresponding research question.

Table 1
Forms of Data and Indicators for the Three Instructional Development Problems

Form of Data    Indicator                     Question
Coded data      AECT competency categories    3. detail; 4. categories; 1. sequence
                Category relationships        4. categories; 3. detail
                Timing/number of lines        2. time
Uncoded data    Vocabulary                    3. detail
                Problem information used      1. sequence
                IBSTPI Standards*             3. detail; 4. categories; 1. sequence

*group data reported

Table 2
Forms of Data and Indicators for the Card Sorting Tasks

Form of Data    Indicator                 Question
Uncoded data    Unknowns                  3. detail
                Accuracy                  3. detail
                Labels                    4. categories
                Definitions               3. detail
                Sequence                  1. sequence
                Timing/number of lines    2. time

First, the data is analyzed in terms of individual subjects and then by expert and novice groups for performance during instructional development problem solving. The same reporting sequence is repeated for each of the two card sorting tasks. Finally, a summary of all of the data is presented and at that time, the question related to consistency across tasks is addressed.

Three Instructional Development Problems

Data was collected from subjects on three separate occasions during solution of instructional development problems. The setting for the first problem was higher education, the second was retailing and the third setting was industrial. Subjects were asked to verbally respond to the three typewritten problems. Their responses were audiotaped. After the audiotaping began, no interviewing or probing was conducted by the experimenter. The audiotapes were then transcribed and served as the verbal protocol data for this portion of the study.

The data was first coded according to "Competencies for the Instructional/Training Development Professional" (AECT, 1981).
A list of these competencies is found in Appendix D. The number of lines produced by each subject was counted to begin to determine extent of detail. Coded data was also used to note relationships stated by subjects among the various competencies used by instructional developers and the sequence in which competencies were applied. Additional analysis of the data consisted of timing the subjects to determine how long it took each of them to complete the complex problem solving tasks. Lastly, the uncoded protocols were analyzed to gather data related to problem information and instructional development vocabulary used while solving these problems. Individual findings related to these different analyses follow.

Overview--Three Instructional Development Problems

Differences were noted between expert and novice groups with regard to categorizing during problem solving, sequence of problem solving and extent of detail produced, as well as time to solve the problems.

Expert 1

For problem 1, Expert 1 produced the most detail explaining how she would plan and monitor the instructional development project. Problem 2 was focused on determining whether the project was appropriate for instructional development, and for problem 3 she produced the most lines explaining needs assessment of the problem. For problems 1 and 2, more than twice the number of lines were produced in the above mentioned AECT competency categories than in the next highest category. For problem 2, Expert 1 included more categories in her discussion than did any other subject. Expert 1 did not deal with the sequence of learner outcomes, the sequence of learner activities, or interpersonal communications (AECT competencies) directly or in a manner that caused any paragraph to be coded as such. Expert 1 made numerous references to relationships among components of the instructional development process.
For the most part these references were at a level which indicated a sophisticated understanding of the process and were often explicitly stated. For all problems, Expert 1 ranked second in both the number of different instructional development vocabulary terms used and the total number of instances of use of that vocabulary. Only two instructional development terms were used by Expert 1 in all three problems. In general, Expert 1 used problem information by relating it to some other aspect of the instructional development process. She frequently offered multiple solutions to aspects of the problem, as generated.

For problem 1, Expert 1 produced 269 lines of data in 24 minutes, 44 seconds. It was 6 minutes before she first spoke. For problem 2 she produced 176 lines of data in 21 minutes and 13 seconds. She first spoke after 5 minutes, 33 seconds. For problem 3 Expert 1 produced 203 lines of data in 18 minutes, 35 seconds. It was 1 minute, 28 seconds before she first spoke.

Expert 2

In all three problems, Expert 2 produced the most detail explaining how he would evaluate the instructional development project described in the problem. Evaluation, for all three problems, consumed more than twice the number of lines than the next highest category. For problems 1 and 3, Expert 2 included more AECT competency categories in his discussion than did any other subject. Expert 2 did not deal with sequence of learner outcomes or interpersonal communications (AECT competencies) directly or in a manner that caused any paragraph to be coded as such. Expert 2 made numerous references to relationships among components of the instructional development process. These references indicated a sophisticated level of understanding of the process and were often explicitly stated. For all problems, Expert 2 ranked highest in number of different instructional development vocabulary words used and in total number of instances of use of those words.
Eight instructional development terms were used by Expert 2 in all three problems. It was noted that these words (in the order of their first appearance)--objectives, goals, resources, instructional strategies, pilot, evaluation strategies, evaluation data, recycle--roughly conformed to a linear model of the instructional development process. In general, Expert 2 related problem information to some aspect of the instructional development process. He frequently offered multiple solutions to various aspects of the problems.

For problem 1, Expert 2 produced 236 lines of data in 33 minutes, 56 seconds. He first spoke after 4 minutes, 10 seconds. For problem 2 he produced 126 lines of data in 19 minutes, 59 seconds. It was 6 minutes, 21 seconds before he first spoke. Expert 2 produced 149 lines of data in 23 minutes, 5 seconds for problem 3. He first spoke after 6 minutes, 19 seconds.

Novice 1

For problems 1 and 3, Novice 1 produced the most detail explaining how she would plan and monitor the instructional development project. For problem 2 she focused on analyzing the characteristics of the setting described in the problem. In all three problems, Novice 1 produced more than twice as much data for these AECT competency categories than for the next highest category. For problem 2, Novice 1 included fewer categories than did any other subject in her discussion.

Novice 1 did not deal with determining the appropriateness of the projects for instructional development, needs assessment, writing statements of learner outcomes, or sequencing learner outcomes (AECT competencies) directly or in a manner that caused any paragraph to be coded accordingly.

The protocol data produced by Novice 1 showed some understanding of relationships among components of the instructional development process.
Frequently, however, the references she made to these relationships were either implicit or related strongly to management of the process or to issues of program structure. For each of the problems, Novice 1 ranked third in the number of different instructional development vocabulary words used as well as in the total instances of use of those terms. No instructional development terms appeared in all three protocols of Novice 1. Novice 1 used problem information, in the main, to work out solutions to the mechanics of course delivery. She was able to cite problems, but rarely offered a variety of possible solutions to aspects of the problem. Novice 1 tended to reach conclusions quickly without consideration of multiple variables. She frequently reiterated her solutions to selected structural aspects of the problems.

Novice 1 produced 140 lines of data in 20 minutes, 14 seconds for problem 1. She began speaking after 1 minute, 36 seconds. For problem 2, she produced 68 lines of data in 11 minutes, 3 seconds. She first spoke after 1 minute, 4 seconds. Novice 1 produced 103 lines of data in 12 minutes, 59 seconds for problem 3. She talked after 1 minute, 11 seconds.

Novice 2

For problem 1, Novice 2 produced the most detail explaining how he would assess learner/trainee characteristics. In both problems 2 and 3 he focused on analyzing the characteristics of the setting described in the problem. In problem 2, Novice 2 produced more than twice as much data about analysis of setting than for the next highest AECT competency category. It should be noted that the range of number of lines per category was lowest for Novice 2. The most lines he spoke for any category was 17 and the least was 4. For problems 1 and 3, Novice 2 included fewer categories than any other subject in his discussion.
Novice 2 did not deal with determining if the project was appropriate for instructional development, specifying instructional strategies, sequencing learner activities, or determining media (AECT competencies) directly or in a manner that caused any paragraphs to be coded as such.

Novice 2 showed some evidence of understanding relationships among instructional development components. However, his references were few, usually consisting of a short statement lacking detail. For the most part, Novice 2 concentrated on issues of program structure when stating relationships. For all problems, Novice 2 ranked lowest in both different instructional development vocabulary words used and in total number of instances of use of those words. No instructional development terms appeared in all three protocols of Novice 2.

Problem information was used by Novice 2 mainly with respect to the delivery of the proposed course. He cited problems at times, but offered few solutions. Novice 2 tended to reach conclusions quickly without consideration of multiple variables.

For problem 1, Novice 2 produced 50 lines of data in 15 minutes, 17 seconds. It was 2 minutes, 24 seconds before he first spoke. In problem 2, he produced 43 lines of data in 17 minutes, 37 seconds. It was 3 minutes, 41 seconds before he started to speak. Novice 2 produced 31 lines of data in 14 minutes, 1 second for problem 3. He first spoke after 3 minutes, 48 seconds.

Findings--IBSTPI Standards

In 1986, the International Board of Standards for Training, Performance, and Instruction (IBSTPI) published Instructional Design Competencies: The Standards. In this document sixteen core competencies are proposed as standards for the instructional development practitioner.
From a methodological standpoint, the IBSTPI document was considered to offer a unique opportunity (because of its comprehensiveness) to further investigate qualitative aspects of the problem solving of expert and novice instructional developers related to the research questions about their differences in categories used, sequence of problem solving and extent of detail produced. Because much of the analysis to this point in the study had resulted from abbreviated or coded forms of the verbal data produced by the subjects, it was decided to use the IBSTPI document to help substantiate or refute findings reported in the previous section about individual subjects' performance while solving the three instructional development problems.

For each subject and for each of the three instructional development problems, uncoded verbal protocols were compared against the IBSTPI Standards. One competency at a time, verbal protocols were compared with the performances listed under the IBSTPI competencies. All instances of competencies appearing in the verbal protocols were listed. Next, summaries were made for each subject for each core competency as well as for expert and novice groups. This summary of the data reports expert and novice group performances for each of the IBSTPI competencies.

Expert and Novice Group Performance--IBSTPI

1. "Determine projects that are appropriate for instructional design"

Experts - Only experts attended to determining whether the project was appropriate for instructional design. Both did so for all three problems.

2. "Conduct a needs assessment"

Experts - Only experts tried to determine discrepancies between what is happening and what should be happening. In their discussions about needs assessment they both included strategies for data collection, description of how decisions are made based on the data, and considered both organizational resources and constraints and requirements of information needed to diagnose the problem.
In all problems, both experts asked a variety of questions in their attempts to conduct a needs assessment. Theirs was a persistent, systematic approach.

Novices - Only in problem 3 did both novices note there may be reasons for discrepancies. Their responses were not detailed in this regard.

3. "Assess the relevant characteristics of learners/trainees"

Experts - Both experts related learner characteristics to design specifications and determined strategies for data collection. Aspects of learner characteristics were not only cited, but were detailed and sometimes related to some other part of the instructional development process. For example, complaining was related to motivation by Expert 1 and importance of learner characteristics was related to content by Expert 2.

Novices - Both novices addressed learner characteristics and focused mainly on prerequisites and experience. Neither related this aspect of instructional development to any other aspect.

4. "Analyze the characteristics of a setting"

Experts - Experts used words like "resources and constraints" to describe concerns related to analysis of the setting. Their definitions of the setting included organizational philosophy, pre-selected instructional design methods, people resources, time, money, equipment, and space and facilities. They both produced an extensive list of examples under each of these categories.

Novices - Novices both included people resources, time, and space and facilities in their analysis of the setting. Their examples listed under each of these categories were numbers of students, location of training and personnel.

5. "Perform job, task, and/or content analysis"

Experts - Both experts related job, task, and/or content analysis to needs assessment. For example, Expert 1 stated that it depends on needs assessment data, and Expert 2 stated that needs assessment data is used to delimit the content before developing cognitive, affective and psychomotor goals.
Novices - While novices did attempt to discuss job and content, they did not relate them to needs assessment data. Rather, they essentially accepted the content of courses as suggested in the problem. Their analysis of job, task and/or content focused on strategies such as surveys, interviews and brainstorming to determine content.

6. "Write statements of performance objectives"

Neither the expert nor the novice group produced common patterns in this competency.

7. "Develop performance measures"

Experts - Again, experts pointed to relationships among instructional development components. For example, Expert 1 related the development of performance measures to data gathered during needs assessment and Expert 2 related this competency to objectives, pilot testing, review and revision. Both experts also discussed the cyclic nature of evaluation at this level.

Novices - Both novices focused on strategies for developing or obtaining performance measures. Examples included surveys, pre- and post-tests, questionnaires, and standard measures. There appeared to be a basic understanding about performance evaluation, but some confusion about its difference from formative and summative evaluation as well as its role in the development process.

8. "Sequence the performance objectives"

Neither the expert nor the novice group responded to this aspect while solving the problems presented.

9. "Specify the instructional strategies"

Experts - Only the experts related the specification of instructional strategies to other aspects of the development process such as timeframe, budget, needs assessment, resources, objectives and evaluation. They also mentioned its sequence in the process as predicated upon background information or following development of goals and objectives and analysis of resources.

10. "Design the instructional materials"

Neither the expert nor the novice group responded to this aspect while solving the problems presented.

11.
"Evaluate the instruction/training"

Experts - Both experts discussed the development of formative and summative evaluation plans. These discussions were related, for example, to assessment of needs and statements of objectives. The cyclic nature of evaluation was discussed as well.

Novices - As in competency 7, novices seemed confused about the when, where and why of evaluation. The purposes of formative and summative evaluations seemed less than clear in their verbal responses to the problems, as did their distinctions between performance and program evaluation. Again, their focus was more on selection of evaluation strategies.

12. "Design the instructional management system"

Experts - The management system was frequently related to other aspects of the instructional development process, such as factoring it in with design, resources, and needs assessment, as well as its position in the sequence of the instructional development process. Pilot testing was advocated by both experts in their responses to problem 3. The specific concerns cited by experts were also given a context. For example, reasons were cited for attending to student backgrounds, location of the machine was said to affect on-the-job training possibilities, and a train-the-trainer model was noted as having some limitations.

Novices - Novices did not relate the system's design to other aspects of the instructional development process. Their concerns focused on constraints posed by the problem such as class size, meeting times, numbers of sites, numbers of students, number of sessions and geography.

13. "Plan and monitor instructional design projects"

Experts - Both experts proposed a process which included tasks, timetables and human resources.

Novices - Only Novice 1 proposed a plan for monitoring the process and it focused on agendas for numerous meetings.

The last three competencies did not appear in the protocols of either the expert or the novice group.

14.
"Communicate effectively in visual, oral and written form"

15. "Interact effectively with other people"

16. "Promote the use of instructional design"

Summary of Findings--Three Instructional Development Problems

At this point, results from coded and verbal protocol data and findings from the IBSTPI analysis were compared. Evidence of patterns which were persistent across all three problems and all forms and indicators of data were noted. Following is a summary of the grouped data for expert and novice instructional developers' responses to the three instructional development problems.

1. Experts exhibited a wider range of competencies than did novices. Only the experts spent time determining whether a problem was appropriately designated as one which could be solved by the instructional development process. They also discussed instructional strategies and related their selection to other aspects of the instructional development process, which the novices did not. These findings contributed information to help answer the questions about differences in categories used and extent of detail displayed in those categories.

2. When citing numerous relationships among components of the instructional development process, experts were explicit and produced fairly sophisticated replies. Novices, on the other hand, cited fewer relationships among instructional development components, were less explicit in their descriptions of these relationships and tended to tie a great number of them to either the management of the project or the structural constraints of the proposed program. Some examples follow:

Regarding analysis of setting:

Expert 1 - Find out if we have control over class size--time distribution affects what can be done in terms of strategies--find out if committee taught this kind of class--use them as resources.
Expert 2 - Look at resources (facilities, production equipment, personnel)--look at constraints (people currently working, where are they, do they have to drive)--revise goals and objectives accordingly.

Novice 1 - Work out training schedule--want full and adequate instruction, yet don't want off production too long--come up with how many sessions--get coverage during absence--consider training schedule--maybe keep after hours for pay.

Novice 2 - Look at how much time is involved--try to keep class sizes small.

These findings contributed additional information about differences in extent of detail displayed by expert and novice instructional developers.

3. Experts exhibited a more systematic approach to the solution of problems than did the novices. Their approach essentially included problem analysis, design of instruction and evaluation. Extent of detail produced for each of these components, however, differed between experts. Novices took a more random-appearing approach to the problem solution. These differences are related to the questions about sequence of problem solving and extent of detail produced by expert and novice instructional developers.

4. Experts offered numerous strategies as approaches to various aspects of the problems. They asked a variety of questions, generated many examples and often provided the context for their decisions. Novices, on the other hand, generated few strategies and tended to come to quick conclusions before consideration of pertinent variables. The detail they produced tended to focus on surface characteristics stated in the problem.
For example:

Expert 1 - Get answers to some background information--why managers and employees complain--want to find out if really a training problem--get documentation about why they complain and what's been done to alleviate the problem--find out what needs assessment has been done--find out where they came up with the time frame--find out why they decided this was the way to go (coded as determining if the problem was one of instructional development).

Expert 2 - Submit draft for review by others--get their feedback--review feedback--identify possible changes--incorporate improvements--implement pilot section--identify information needed--collect feedback from students--collect committee observations--get outside person to debrief students--use data to identify changes--make a collective decision--continue collecting feedback for a year-- . . . revise (coded as evaluation).

Novice 1 - Formative and summative evaluation are needed--decide whether to use a standard measure--decide whether to use a pretest and if so, whether to design one or is one available (coded as evaluation).

Novice 2 - Browse around stores to see how they're functioning--find out the location of sites, their distribution and number of stores--balance accordingly (coded as analysis of setting).

This information contributed data for the question about extent of detail displayed by the subjects.

5. Both experts used more time and produced more lines of data than did the novices. With the exception of one instance for Expert 1, experts used more time before verbally responding to the problem (see Tables 3, 3.1 and 3.2).

Table 3
Minutes Spent on Each Problem

           Problem 1    Problem 2    Problem 3
Expert 1       25           21           19
Expert 2       34           20           23
Novice 1       20           11           13
Novice 2       15           18           14

Table 3.1
Minutes Before First Verbal Response to Each Problem

           Problem 1    Problem 2    Problem 3
Expert 1      6.0          5.5          1.5
Expert 2      4.0          6.5          6.5
Novice 1      1.5          1.0          1.0
Novice 2      2.5          3.5          4.0

Table 3.2
Number of Lines Produced for Each Problem

           Problem 1    Problem 2    Problem 3
Expert 1      269          176          203
Expert 2      236          126          149
Novice 1      140           68          103
Novice 2       50           43           31

This data is related to the question about time taken to solve each of the instructional development problems.

6. Experts used more instructional development terms and used them more frequently than did novices (see Table 4).

Table 4
Instructional Development Vocabulary Used (E1, E2, N1, N2)

Objectives
Goals
Resources
Instructional Strategies
Pilot
Evaluation Strategies
Evaluation Data
Recycle/Revise
Needs Assessment
Timeline
Media
Budget
Feedback
Delimit Content
Constraints
Management Plan

E - Expert    N - Novice

The vocabulary terms used by expert and novice instructional developers provided information about extent of detail found in the verbal protocols of subjects.

7. Experts used a more persistent approach to solving the problems. Expert 1 was, in every problem, to a large extent concerned with the role of background information, and Expert 2 went into great detail explaining evaluation. Never was the problem accepted at face value by experts, as was the case for novices. These findings were related to the questions about categories used and extent of detail produced.

To conclude this segment of the analysis report, it should be emphasized that three different instructional development problems were used during data collection and that a variety of approaches was used to analyze this data. This was done first so that patterns could be observed across problems and second to find out which patterns would also persist over a variety of approaches to analysis. The seven items just listed are initial patterns which persisted across problems and various approaches to analysis.
Techniques Card Sorting Task

One of the basic reasons for conducting the techniques card sorting task (and the models card sorting task which appears later in this chapter) was to seek confirming or disconfirming evidence. In this case, it was considered important to find out whether the patterns just reported would remain consistent given a different type of problem. Hence, the two sorting tasks were used to see if behaviors would remain consistent across sorting tasks and then whether they would also remain consistent with behaviors displayed during the instructional development problem solving sessions.

To analyze the data produced during the techniques card sorting task, the verbal protocols for this task were outlined by eliminating all but those words which appeared on cards, and those which defined or labeled techniques or categories. Following this step, the outlines were diagrammed to illustrate the hierarchy of subjects' organization of the cards. Next, definitions of techniques and category labels were collated from each of the verbal protocols. Techniques cited by subjects as being unfamiliar were counted and definitions of techniques were checked for accuracy. These results were compared across expert and novice groups to determine extent of detail generated during definition of techniques. Next, the diagrams were compared with category labels to find out how cards had been categorized and whether cards had been correctly categorized under labels as they were defined by the subject. This analysis attempted to find out if the data contributed to an answer to the research question related to differences between expert and novice groups in the way they categorized during problem solving. Verbal protocols were examined individually and then across groups to determine broad problem solving sequences.
This analysis was conducted to gain information about the research question related to differences between expert and novice groups in the sequence of the problem solving strategies they employed. Finally, the audiotapes were timed and the number of lines produced by each subject for each problem was counted to assess how much data was produced by expert and novice instructional developers and to find out how long it took them to complete the techniques card sorting task.

Findings

Analyses of the data related to the techniques card sorting task indicate some differences between performance of expert and novice instructional developers on this task. Differences were noted in sequence of problem solving, extent of detail produced, and consistency between category definitions and the cards sorted beneath category labels.

Individual Findings--Techniques Card Sort

First, data will be summarized for individuals. Then group data will be reported and contrasts noted for each of the research questions.

Expert 1

Expert 1 produced accurate definitions for techniques. Most definitions were detailed. Not all techniques were defined by Expert 1, but that was not the task assigned. Many definitions contained examples of or reference to instructional development. Expert 1 designated no techniques presented as unfamiliar. She erroneously (according to her definition) sorted critical path method under the topic of research and criterion referenced measures as a partial model. She did not conform to the exact sequence of the problem as directed in the instructions. During parts 2 and 4 of the instructions for the task (Appendix A), she essentially declined to re-sort piles she had already established. Her response indicated a perception that further sorting would only yield superficial data such as techniques suitable for group versus individual tasks.
During 35 minutes, 25 seconds she produced 319 lines of data, and 45 seconds passed before she first verbally responded to the problem.

Expert 2

Expert 2 produced accurate definitions for techniques. Most definitions were detailed. All techniques were defined by Expert 2 even though that was not the task assigned. Many definitions contained examples of or reference to instructional development. Expert 2 designated two techniques presented as unfamiliar: function analysis and Gantt chart. Expert 2 did not conform to the exact sequence of the problem as directed in the instructions. Instead, he quickly assessed the task as one which could illustrate that these techniques would appropriately be used during different stages of the instructional development process. He then proceeded to define the techniques in the context of stages of the process. Essentially, Expert 2 intertwined labels with definitions of techniques, using needs assessment, formative evaluation and summative evaluation as three of his six original category headings. His elaboration during part 2 of the task consisted essentially of a short explanation of the label followed by a listing of the techniques under the label. During 39 minutes, 57 seconds, he produced 328 lines of data, and 5 seconds elapsed before he began to verbally respond to the problem. At the onset, Expert 2 stated that he would sort the cards according to instructional development stages and then proceeded to do so for the remainder of the time spent to complete the task.

Novice 1

Novice 1 produced several inaccurate definitions for techniques as well as several definitions which could be termed not specific enough to pick a technique, given its defining attributes. Overall, her definitions were haphazard in the extent of detail produced. Her references to instructional development were few and those were related to the three problem solving exercises in this study.
A number of techniques were not defined by Novice 1, though that was not the task. She labeled criterion referenced measures, discrepancy evaluation, force field analysis, function analysis and nominal group process as unfamiliar. Novice 1 appeared to experience difficulty with labels and the definitions which needed to match the attributes of the techniques sorted underneath. Objectives did not match her definition of theoretical planning, PERT did not match evaluation, and Gantt chart and critical path method did not match analytical tools as defined. She stated that she realized her labels, evaluation and application, overlapped. Novice 1 worked completely within the structure provided by the problem. The problem solving sequence she used was that proposed by the task. During 34 minutes, 19 seconds, she produced 302 lines of data and took 12 seconds before beginning to verbally respond to the problem.

Novice 2

Novice 2 produced several inaccurate definitions for techniques as well as several definitions which could be termed not specific enough to pick a technique, given its defining attributes. Overall, his definitions were haphazard in the extent of detail produced. A number of techniques were not defined by Novice 2, though that was not the task. He stated that he did not know what Gantt chart, nominal group process, PERT, function analysis, criterion referenced measures, critical path method, Delphi technique, discrepancy evaluation and force field analysis were. His references to instructional development were few and many of his examples seemed to be cited from classroom experience and one from the problems in this study. The original labels produced by Novice 2 were nonspecific--reference based work, group work, background and general. His definition of case studies did not match the attributes he listed for reference based work.
His definition of background was actually three definitions, which therefore did allow the techniques sorted underneath to match. Novice 2 gave a definition for general that closely matched a general model for instructional development even though it was not labeled as such.

Novice 2 worked completely within the structure provided by the problem. The problem solving sequence he used was that proposed by the task. During 35 minutes, 26 seconds, he produced 231 lines of data. It was 45 seconds before he began to verbally respond to the task.

Group Findings--Techniques Card Sorting Task

1. Experts produced more accurate definitions. Following is an example from the data.

Cost benefit analysis - "a generic term for such techniques . . . which assist the decision-maker in making a comparison of alternative courses of action in terms of their costs and effectiveness in attaining some specific objectives" (Bennett, 1983).

Expert 1 - analysis to decide if project is worth doing--could do as figuring out cost and decide if benefit.

Expert 2 - towards end--evaluation tool--hasn't used it himself--comparing alternative approaches--see if benefit of one exceeds benefits of the other--cost not only dollars but also effort, personnel, etc.

Novice 1 - have your protocol, a set procedure and look at data--data evaluated in terms of procedure.

Novice 2 - what you are attempting to teach someone--could have been used in grinding machine problem with brainstorming, e.g., talking to students about importance or value of instruction on machine--may be performed by group but more a summary of data--could be used in technical conference or brainstorming session to come up with final or refined ideas.

2. Experts indicated fewer unknown techniques.

3. Experts produced more detailed definitions. For example:

Interviews/observations - "interviewing users--technique to elicit information that is known only to users of a product or system in question.
Observation interview - method to define a task, analyze a job, or perform needs assessment or evaluation, whereby the investigator observes and questions an interviewee at the work site while the practitioner performs the activities under investigation" (Bennett, 1983).

Expert 1 - can be used in needs analysis--can be used to decide if someone can perform what they're supposed to according to objectives--going to individuals asking specific questions.

Expert 2 - front end--part of needs assessment--data collection strategy used to conduct data analysis--can be of potential recipients of instruction--can be of people requesting instruction--additional methods are literature search and consensus conference.

Novice 1 - gathering information.

Novice 2 - could be verbal--could be more than two people interviewed or interviewing--can receive feedback from this--use for implementation and final program evaluation.

This example shows how some definitions were very broad (Novice 1). It also shows how, while Novice 2 produced quite an extensive definition, his initial attributes were structural rather than indicative of underlying principles. It is also interesting to note that Novice 2 suggested using the interviewing technique during problems one through three for the purpose of gathering data.

Another example shows how experts described a technique unknown to both novices.

Nominal group process - "method to generate and prioritize ideas regarding problem-solving, job performance improvement, etc., whereby each member of a study group generates ideas that are listed before the group, ranked and valued (1-5), and finally prioritized" (Bennett, 1983).

Expert 1 - bring people together--depends on size of group and how much involvement you want individuals to have--forces everyone to be involved.
Expert 2 - use for divergent thinking to get ideas out and elicit lots of information--could make a case for outcome forcing convergence--its strength is use to generate a number of possible solutions--also to have group prioritize set of needs or outcomes.

These first three findings all contribute data to support conclusions related to the question about extent of detail produced by subjects.

4. Experts' correspondence between label nodes, defining attributes and subnode techniques was more accurate. Labels were more specific and mutually exclusive as defined. This finding relates to the questions about categories used and extent of detail produced.

5. Experts modified the problem solving sequence by failing to complete at least one segment of the instructions and thus completed the task in a slightly different manner from the novices, who followed instructions completely. This information is related to the question about the sequence of problem solving used by subjects.

6. Experts used more instructional development examples. Novices tended to use examples from the preceding problem solving exercises (Novice 1) and from the classroom (Novice 2). This provides more information about the extent of detail produced by subjects.

7. Experts produced more lines of data than did the novices:

Expert 1 - 319 lines
Expert 2 - 328 lines
Novice 1 - 302 lines
Novice 2 - 231 lines

8. There was no consistent difference between groups in total time to complete the task:

Expert 1 - 35 minutes
Expert 2 - 40 minutes
Novice 1 - 34 minutes
Novice 2 - 35 minutes

9. There was no consistent difference between groups in time lapse before beginning to verbally respond to the task:

Expert 1 - 45 seconds
Expert 2 - 5 seconds
Novice 1 - 12 seconds
Novice 2 - 45 seconds

Each of the last three findings contributes data related to the question about time spent to complete the task.
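As an illustrative aside (not part of the original analysis), the figures in findings 7 and 9 above can be tallied to make the group contrast concrete. The sketch below simply restates the reported counts; the dictionary structure and function name are the author's own illustration, not instruments from the study.

```python
# Illustrative sketch only: group comparison of the techniques card
# sorting figures reported in findings 7 and 9.
techniques_task = {
    "Expert 1": {"lines": 319, "latency_s": 45},
    "Expert 2": {"lines": 328, "latency_s": 5},
    "Novice 1": {"lines": 302, "latency_s": 12},
    "Novice 2": {"lines": 231, "latency_s": 45},
}

def group_mean(metric: str, group: str) -> float:
    """Average a metric over the subjects whose label starts with `group`."""
    values = [v[metric] for k, v in techniques_task.items()
              if k.startswith(group)]
    return sum(values) / len(values)

# Experts averaged more lines of data (323.5 vs. 266.5), supporting finding 7.
lines_gap = group_mean("lines", "Expert") - group_mean("lines", "Novice")

# Mean response latencies were similar (25.0 vs. 28.5 seconds), consistent
# with the "no consistent difference" reported in finding 9.
latency_gap = group_mean("latency_s", "Expert") - group_mean("latency_s", "Novice")
```

The averages show why the lines-of-data contrast held while the latency contrast did not.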
Models Card Sorting Task

This second card sorting task was intended to further confirm or disconfirm evidence of patterns noted in the techniques card sorting task.

To analyze the data produced during the models card sorting task, the verbal protocols were outlined by eliminating all but those numbers which appeared on cards and those words which defined or labeled models or categories. Next, the outlines were diagrammed to illustrate the hierarchy of subjects' organization of the cards. Finally, definitions of the models and category labels were collated from each of the verbal protocols.

To analyze the data, unfamiliar models, as labeled by subjects, were counted. Each definition given by each subject was compared with a diagram of the corresponding model for accuracy. These results were then compared across the expert and novice groups. Definitions were then compared across expert and novice groups to determine extent of detail generated during definitions of models. These steps were conducted to help answer the research question related to whether expert and novice groups produce differing amounts or levels of detail during problem solving.

Then, the diagrams were compared with category labels to find out how cards had been categorized and whether cards had been correctly categorized under labels as they were defined by the subjects. Contrasts between groups were noted. This analysis attempted to find out if the data contributed to an answer to the research question related to differences between expert and novice groups in the way they categorized during problem solving. Protocols were examined individually and then across groups to determine broad problem solving sequences. This analysis was conducted to gain information about the research question related to differences between expert and novice groups in the sequence of the problem solving strategies they employed. Finally, the audiotapes were again timed.
Total time spent on the task was measured from when the tape recorder was turned on until the subject said his/her last word. The number of lines produced by each subject was counted as well. One line was counted for any line of text containing at least one word. This analysis of the data examined differences between expert and novice instructional developers regarding the time spent to complete the task.

Findings--Models Card Sorting Task

Analyses of the data related to the models card sorting task indicate some differences between performance of expert and novice instructional developers on this task. As in the techniques card sorting task, differences were noted in sequence of problem solving, extent of detail produced and consistency between category definitions and the cards sorted beneath category labels. First, data will be summarized for individuals. Then group data will be reported and contrasts noted for each of the research questions.

Expert 1

Expert 1 produced accurate definitions for the models. Most definitions were detailed. Most models were defined by Expert 1 as well, though that was not the task assigned. Some definitions contained examples of or reference to instructional development experience, as well as information not found on the diagrams provided. Expert 1 designated no models as unfamiliar. As in the previous techniques sorting task, she did not conform to the exact sequence of the problem as directed in the instructions (see Appendix A). Instead, Expert 1 first sorted cards, defining model attributes as she put them into piles. After definition, she gave label names, declined to further sort cards, saying it would just be superficial, and then elaborated on original category labels as directed. During 23 minutes, 48 seconds, Expert 1 produced 242 lines of data. She paused for 1 minute, 1 second before beginning to verbalize her solution to the problem.

Expert 2

Expert 2 produced accurate definitions for the models. Most definitions were detailed.
A few models were defined by Expert 2 even though that was not the task assigned. Some definitions contained examples of or reference to instructional development experience, as well as information not directly available from the diagrams of the models. Expert 2 designated no models presented as unfamiliar. He deviated from the exact sequence of the problem as directed in the instructions when he declined to complete the task of elaborating on his second sort of labels. Otherwise, he followed the problem sequence. During 22 minutes, 18 seconds, Expert 2 produced 165 lines of data. He paused for 44 seconds before beginning to verbalize his solution to the problem. At this point, he stated his intent to sort the cards according to level of focus, from micro to macro, and then proceeded to do so.

Novice 1

Novice 1 produced accurate definitions for the models, though all were based on the structural characteristics of the models and contained information that was readily available by simply looking at the diagrams. She used no references to instructional development experience. Novice 1 identified eleven models as unknown, but guessed at the purpose of at least three of those eleven. Novice 1 worked within the structure provided by the problem. However, she completed only the first two tasks, after which she stated there was no point to continuing. During 8 minutes, 3 seconds, she produced 59 lines of data. She used 19.69 seconds before beginning to respond verbally to the problem.

Novice 2

Novice 2 produced several inaccurate or overlapping definitions as well as several definitions which could be termed not specific enough to pick a model, given its defining attributes. For the most part his definitions lacked detail, attending essentially to more superficial attributes. Novice 2 used no references to instructional development experience. He identified no models as unknown.
The labels produced by Novice 2 were often nonspecific--organizational training, transitional, traditional, step system, traditional feedback, educational feedback and problem solving. The Gagne model did not fit his definition of traditional, and his definitions for problem solving and educational system were almost identical. Novice 2 worked completely within the structure provided by the problem. The problem solving sequence he used was that proposed by the task. During 55 minutes, 27 seconds, he produced 158 lines of data. Novice 2 spent 21 seconds before beginning to verbalize his solution to the problem.

Group Findings--Models Card Sorting Task

1. Experts produced more accurate definitions. Following is an example from the data.

The Banathy and Davis et al. models were categorized together (among other models) by both experts and by Novice 2. Novice 1 listed these among unknown models.

Expert 1 - instructional development models--all use systematic approach to developing some specific instruction--some more detailed--once problem determined, with various people, analyze audience characteristics--depends if you try to follow a model or the typical way which is not always ideal.

Expert 2 - instructional systems models--larger, more general focus--can be used to guide development of major courses, curricula . . . get into identifying instructional events, development of materials, test-- . . . how to revise clerkship, medical rotation or entire course . . . --follow three steps: definition, setting up specifications; development, putting things together; and implementation, dissemination or evaluation, trying out and seeing what worked.

Novice 1 - traditional models--all generally have feedback at end except 5 (Banathy)--a few have it occasionally within structure--5 has throughout . . . for most part all are straight lines--start with objectives and end with evaluation--then have feedback.

2. Experts indicated no unknown models.
Though Novice 2 indicated no unknown models, some of his definitions would indicate that he was not familiar with all of the models. For example, Novice 2 did not recognize the Havelock model as being useful for implementing change, nor did he note that Gagne's model was instructional. Novice 1 listed eleven models as unknown.

3. Experts produced more detailed and specific definitions. Their definitions went from a general to a specific level and were indicative of knowledge beyond that which could be assumed from the diagrams of the models. Novice descriptions, on the other hand, focused on knowledge that could be assumed from the diagrams of the models. For example:

Expert 1 - Broad organizational development models--used for problems within organization, business, industry or higher ed--try to get real cause early, not just symptoms--then decide best route (training program or something else to solve)--if training most appropriate then get into instructional development models and get specific objectives plus whole sequence of figuring out what you want to do and try it out.

Expert 2 - Micro level models--least comprehensive--description of relationship between learning process and instructional events-- . . . focus is psychological-- . . . follows through very well known, validated approach to teaching--use when designing particular instructional sequence.

Novice 1 - Developmental support models--different components--information handling, budget resource allocation and others--dividing development functions, support functions, whereas the others were assessment, prototype evaluation . . . models--dividing it up into two chunks (Gentry model).

Novice 2 - Step system models--large steps with feedback within steps, but none designed until the end between steps--go through major steps with minor steps.
These first three findings all contribute data to support conclusions related to the question about extent of detail produced by subjects.

4. Experts' correspondence between label nodes, defining attributes and subnode techniques was more accurate. Labels were more specific and mutually exclusive as defined. This finding relates to the question about categories used and extent of detail produced.

5. Experts slightly modified the problem solving instructions. Novices did not. This information is related to the question about the sequence of problem solving used by subjects.

6. Experts used instructional development examples. Novices used none. This provides more information about extent of detail produced by subjects.

7. There were some similarities in the category distinctions made by experts. They both sorted the Gagne model into a category by itself, and both agreed its focus was instructional. They also, on the first sort, lumped the Blondin, Hamreus, Banathy, Davis, and IDI models together, and both at that point called them instructional development models. The Gentry, Lippitt and Nadler, and Havelock models were sorted into a pile called broad organizational development by Expert 1 and called macro focus by Expert 2. There were no patterns of similarity in the sorting categories produced by the novices. This finding relates to the question about categories used by subjects.

8. Experts produced more lines of data than did the novices:

Expert 1 - 242 lines
Expert 2 - 165 lines
Novice 1 - 59 lines
Novice 2 - 158 lines

9. There was no consistent difference between groups in total time to complete the task:

Expert 1 - 24 minutes
Expert 2 - 22 minutes
Novice 1 - 8 minutes
Novice 2 - 55 minutes

10.
There was a slight difference between groups in time lapse before beginning to verbally respond to the task:

Expert 1 - 1 minute
Expert 2 - 30 seconds
Novice 1 - 20 seconds
Novice 2 - 21 seconds

Each of the last three findings contributes data related to the question about time spent to complete the task.

Consistency Across Tasks

Overall, responses of individual subjects were consistent in terms of the research questions. There were, however, a few notable differences among subjects, and to these we now turn our attention.

For Expert 1, solving the instructional development problems was always punctuated with a need to base decisions on information gathered on various topics and in various ways. Expert 2 was very systematic when solving all five of the tasks. For the first three problems, his method was quite linear and consistently focused on evaluation. For the sorting tasks, he selected a scheme for organization and then proceeded accordingly. Novice 1 tended to repeat herself during the problem solving tasks. On the models sorting task, she completed only a portion of the problem before asking to stop working. Novice 2 consistently produced the least amount of data with the least amount of elaboration.

These differences aside, expert and novice groups consistently exhibited some similar behaviors across all five problems. Again, each of the findings was compared across two kinds of problems--three instructional development problems and two card sorting tasks. What is reported next is only that which proved consistent across all five problems. A summary of those behaviors appears in Tables 5-9 below.

Research Question 1. How do subjects differ in the sequence they use to work through selected problems?

Table 5
Problem Solving Sequence

               Experts                    Novices
Problems 1-3   Analyze, design,           Random-like sequence
               evaluate--
               More systematic
Sorting Tasks  Deviated from the          Followed problem
               problem directions--       directions--
               No other consistent        No other consistent
               sequence noted             sequence noted
Research Question 2. How do subjects vary in the time it takes to work through each of the selected problems?

Table 6
Time to Work Through Problems

               Experts                    Novices
Problems 1-3   Used more time--           Used less time--
               Produced more lines        Produced fewer lines
               of data                    of data
Sorting Tasks  No consistent              No consistent
               differences in total       differences in total
               time or before initial     time or before initial
               response--                 response--
               Produced more lines        Produced fewer lines
               of data                    of data

Research Question 3. How do subjects differ in the extent of detail they generate when working through selected problems?

Table 7
Extent of Detail Generated

               Experts                    Novices
Problems 1-3   Used more ID terms--       Used fewer ID terms--
               Generated multiple         Generated few strategy
               strategy alternatives--    alternatives--
               Deliberated on solution    Quickly accepted
               alternatives--             solutions--
               Persistently gathered      Relied upon problem
               information                for information
Sorting Tasks  Produced more detailed     Produced less detailed
               and accurate               and accurate
               definitions, fewer         definitions, more
               unknown techniques and     unknown techniques and
               no unknown models--        more unknown models--
               Used ID examples           Used fewer ID examples

Research Question 4. How do expert and novice instructional developers differ in the way they categorize selected problems into units?

Table 8
Categorization of Problems

               Experts                    Novices
Problems 1-3   Used more competency       Used fewer competency
               categories--               categories--
               Related problem            Used problem
               information to ID          information
               process; explained--       unquestioningly--
               Provided detailed and      Lacked detail and
               precise responses--        precision in
               Noted relationships        responses--
               among components at        Noted some
               abstract level             relationships among
                                          components at concrete
                                          level
Sorting Tasks  Provided detailed          Lacked detail and
               precision in               precision in
               responses--                responses--
               Noted relationships        Noted relationships
               among components at        among components at
               abstract level             concrete level

Research Question 5.
How do expert and novice instructional developers differ in consistency in sequence of problem solving, time spent working on the selected problems, extent of detail generated, and categories imposed across selected problems?

Table 9
Consistency Across Selected Problems

             Experts                  Novices
Sequence     Systematic               Random-like
Time         More data                Less data
Detail       More detail--            Less detail--
             Sophisticated level      Concrete, often lacking
Categories   Relationships noted      Few relationships noted

Chapter 5 will focus on describing various conclusions made based on these findings and their implications for further research.

CHAPTER 5

Summary, Conclusions and Recommendations

Introduction

In this chapter, a summary of the previous chapters is provided. Next, each research question is presented, followed by a summary of pertinent findings both from this study and from those reviewed in the literature. Hypotheses based on findings are described, as are recommended questions for further study. The chapter ends with a section devoted to general conclusions which are related to the research design used to conduct this study.

Summary of the Study

In this study, two expert and two novice instructional developers were asked to verbally respond to three instructional development problems and to two card sorting tasks. Their audiotaped responses formed the data for this inquiry about how the organization of knowledge/experience is displayed in the problem solving performance observed in selected expert and novice instructional developers. Some of the ways that this organization was thought to be manifested included: sequence of problem solving, time taken to solve each of the problems, extent of detail produced during problem solving, ways of categorizing the problems into units and consistency across all five problem solving tasks. Indicators of these behaviors were analyzed in order to answer the central research question.

The qualitative research methodology used in this study focused on two themes.
The first theme was that confirming and disconfirming evidence of how expert and novice instructional developers displayed their knowledge/experience must be sought. Emerging patterns of behavior were systematically checked by using this strategy. The second theme was that of triangulation. Three instructional development problems and two card sorting tasks were presented to subjects. These were analyzed from multiple perspectives. This cross-checking approach provided additional assurance that observed patterns of behavior were enduring and independent of problem type or narrowness of analysis.

Findings, Hypotheses and Recommended Questions

In this section, the findings, hypotheses generated from them and recommended questions for future study are presented by individual research question. Pertinent research findings from the literature are related within the following segments, as they contribute to the development of findings, hypotheses generated and recommended questions.

It should be noted that hypotheses cited are presented as possible options. This was done in order to refocus the reader to the nature of this study, which was to generate hypotheses for theory development. While every attempt was made to ensure validity of the results, the hypotheses derived from them require additional research for verification.

It should further be noted that even though each question is presented individually with pertinent findings, hypotheses and recommended questions, it has not been established that any of these is independent from any other. This study focused on locating repeated patterns, and findings were developed in an additive fashion. In that respect, the boundaries of any hypothesis given for a particular finding are such that overlapping may occur with another finding.
The central research question answered in this study is: How is the organization of knowledge/experience displayed in the problem solving performance observed in selected expert and novice instructional developers? Following are the research questions, findings, hypotheses generated and recommended questions for future study.

Question 1

How do expert and novice instructional developers differ in the sequence they use to work through selected problems?

Finding 1

From the data it appears that experts systematically worked through complex problems, covering the phases of analysis, design and evaluation. Novices used a more random approach to finding a solution to these problems.

These findings support Leinhardt's (1983) assertion that experts work from a well-specified but flexible agenda. Experts use a more focused approach to problem solving. The findings also agree with findings from the work of Chi et al. (1982) that experts know more procedures than do novices, and they also know more about conditions for application of those procedures. Therefore, it is not surprising that in this study, experts seemed to know what to do and when to do it. As in the Charness study (1979), expert instructional developers in this study seemed to be able to recognize appropriate solution strategies.

Hypothesis Generated

Differences in expert and novice instructional developers' approaches to problem solving are due to the differences in their instructional development experience.

Anderson (1982) concurs with this hypothesis. Student instructional developers learn about the process of instructional development, but may not have gained a significant amount of experience using the process, particularly in the solution of complex problems. Since experts use what they know about similar past problems to solve new problems (Glaser, 1984), it is likely that the experts in this study are not an exception.
In fact, they pointed out examples from their experience from time to time while responding to the problems posed in this study.

Recommended Questions

From this study and others, it would seem useful to further investigate the role of meaningful practice in the problem solving of instructional developers. Examples of potential research questions in this regard might be some of the following. Why is it that novices know about instructional development but do not seem to apply what they know? Can this be improved and accelerated with meaningful practice? Will, for example, practice with a variety of complex problems improve novice performance with new problems? If so, when is the appropriate time to introduce this type of practice?

Finding 2

Experts worked within a general instructional development sequence to solve the problem and provided various details as they worked through the problem.

Egan and Schwartz (1979) found similar behavior in their study of an electrical engineering expert who reportedly attempted to systematically recall organizational units rather than to haphazardly solve the problem. They hypothesized that experts identify a conceptual category for the problem posed to them and then systematically retrieve category elements.

Hypothesis Generated

Instructional developers use various cognitive strategies during problem solving.

In this case, experts used a pattern (a very general model) to aid the solution process. Harmon and King (1979) listed use of models as one example of a cognitive strategy.

Recommended Questions

Further study of the use of cognitive strategies by instructional developers doing problem solving is indicated. Does the use of a basic instructional development model serve as a sort of cognitive strategy? If so, what is the relationship between cognitive strategies and problem representation? Another question that comes to mind is whether experts perceive pattern types across instructional development problems.
These are questions that need to be answered in further research.

Finding 3

Novices followed instructions completely for the sorting task while experts did not.

Experts' responses to instructions were comprehensive and seemed almost to anticipate what would be asked on the next page of instructions. Sometimes the completeness of their responses was such that responses to the next set of instructions were reportedly redundant to the experts.

Hypothesis Generated

Problem instructions affect performance of subjects during their solution of instructional development problems.

The complex problems in this study required subjects to form their own organization to a solution plan, and the sorting tasks were more structured. One portion of the problem was to be completed by subjects before continuing to the next portion. For the complex problems, it seemed as though novices were not able to work systematically, but the instructions for the sorting tasks enabled them to do so to a greater extent. As indicated by Ericsson and Simon (1984), performance of subjects in this study was probably affected by problem instructions.

Recommended Questions

To what types of instructional development problems do both expert and novice instructional developers respond systematically? If a large sample were used to study the role of problem instructions, would expert and novice instructional developers respond differently to complex problems than to card sorting tasks?

It seems likely from this study that instructions and problem type influence the direction of problem solving. Enough evidence of this occurrence may now be available to the researcher who wishes to experimentally manipulate problem instructions and measure their effect on problem solving.

Finding 4

Even though novices appeared to know about the process of instructional development, they did not apply much of what they knew while solving the three complex problems.
Hypotheses Generated

When expert and novice instructional developers initially read an instructional development problem, they focus on different aspects of the problem.

or

When novice instructional developers initially read an instructional development problem, they focus on task demands.

or

When novice instructional developers initially read an instructional development problem, they have no overall sense of the problem at hand.

Available knowledge and the way it is organized further influence the internal representation of a problem (Gagne & Glaser, 1987). In this case, it did not entirely appear to be a lack of knowledge that was to blame for differences in performance between the two groups. Both novices demonstrated they knew something about the sequence of the instructional development process. They did not, however, apply what they knew when asked to solve complex problems. Anderson (1982) hypothesizes that novices are consumed with problem details to the extent that they do not get a good overall perspective on the problem.

Recommended Questions

At this point, it would be useful to isolate to what extent differences between expert and novice groups are due to failure to access knowledge and how this relates to problem representation. How do instructional developers conceptualize complex problems? When asked to diagram what they believe to be the essential elements of a set of problems, would the elements of those problems be different for expert and novice instructional developers? What do novices know but fail to access during problem solving?

Question 2

How do expert and novice instructional developers differ in the time it takes to work through each of the selected problems?

Finding 1

There was no real consistent pattern to subjects' behavior regarding the time spent to read the problems and "think" before responding verbally. All subjects took time to read the problems and "think" before responding verbally.
For the complex problems, the experts (with one exception) spent more time before responding. There was no pattern distinguishing the groups for the techniques sorting task, but the experts again took more time before responding to the models sorting task. All subjects took less time before responding to the sorting tasks than they did before the complex problems.

Conclusions

Time between reading an instructional development problem and verbal response to that problem is used to build representations of the problem.

or

Time between reading an instructional development problem and verbal response to that problem is used to think through possible solution strategies.

Like Berliner's (1986) subjects, the experts in this study usually took longer to examine the problem. Gagne and Glaser (1987) hypothesized that this time is used to construct representations of the problem or to think through solution strategies.

Recommended Questions

We could learn much about the process that expert instructional developers use during problem solving if we knew what they are thinking when first presented with a complex problem. This is a promising area for future research in instructional development.

Potential questions to ask related to this set of conclusions are: What do expert and novice instructional developers think about when first given a problem to solve? On what do they focus their thinking?

Finding 6

Total time spent to solve the complex problems was greater for the experts. The sorting tasks were inconclusive in this respect. Experts always produced more lines of data.

These results conflict with those of Charness (1979), whose expert subjects solved problems more rapidly, and with those of Chase and Simon (1973), who found novices to work as quickly as experts.

Conclusion

The type of instructional development problem presented affects the amount of time needed to solve the problem.
Even though problem solving in the games of bridge and chess may be viewed as complex, it could be that the visual nature of these skills is significantly different from the types of skills needed to solve complex instructional development problems.

Recommended Questions

This could be investigated by posing a large number of differing types of problems and measuring subjects' time to solution. A potential research question in this regard might be: What is the difference between expert and novice instructional developers in time to solution for different types of instructional development problems?

Question 3

How do expert and novice instructional developers differ in the extent of detail they generate when working through selected problems?

Finding 7

For the complex problems, the differences between the expert and novice groups in extent of detail included the experts' use of more instructional development vocabulary, with greater detail and precision. Experts persistently gathered information, deliberated alternatives, and considered multiple strategies to a solution, whereas novices relied on the problems for information and quickly arrived at and accepted a solution.

For the sorting tasks, experts produced detailed definitions and used numerous examples from instructional development to illustrate. They knew more techniques and models, as well as when and how to use them. Novices produced more superficial, sometimes inaccurate definitions varying in the extent of detail. They used few instructional development examples, illustrating instead from classroom experience and/or the complex problem tasks used in this study. They knew fewer techniques and models and less about when and how to use them.

These findings are supported by the Chi et al. (1982) studies, in which novices produced more errors and experts produced more complete information.
Experts have a large repertoire of knowledge/experience upon which to draw (Leinhardt, 1983); they readily access contingency plans and seem to work from a large store of knowledge, so they almost know what to expect (Berliner, 1986). Like Berliner's subjects, the novices in this study seemed to be driven by a goal of responding, making quick judgments and failing to use knowledge they often possessed.

Conclusions

Novice instructional developers lack fundamental instructional development knowledge.

or

While solving instructional development problems, novice instructional developers fail to access the instructional development knowledge they possess.

or

While solving instructional development problems, novice instructional developers access more general schemata when information is not available for whatever reason.

Chi et al. (1982) support the notion that novices lack fundamental knowledge. Gagne and Glaser (1987) state that novices may fail to access important knowledge or that they access more general schemata when information is not available.

Recommended Questions

Further study is needed to determine differences in the level of detail generated by expert and novice instructional developers during solution of instructional development problems. Examples of potential research questions are: How is the expert's store of knowledge best acquired? At what point in their training do novices begin to "put it all together"? Are there distinct stages in the movement from novice to expert instructional developer, as Dreyfus and Dreyfus (1986) would suggest? If so, can instruction be appropriately matched with those stages?

Question 4

How do expert and novice instructional developers differ in the way they categorize selected problems into units?

Finding 8

In the complex problem tasks, experts delineated and explained numerous relationships among competency components at an abstract level, whereas novices noted some relationships but at a concrete level.
Experts' explanations were detailed, explicit, and at times related to instructional development experiences. Novices' explanations lacked detail and precision; they used fewer competency categories and used information given in the problem unquestioningly.

For the sorting tasks, experts worked from general to specific labels, provided explicit and detailed definitions, and used labels that were mutually exclusive. Novices worked from both general and specific labels and produced some inaccurate and extremely broad definitions and labels; some labels were not mutually exclusive.

As in Chi et al. (1982), the expert and novice groups produced some common knowledge, novices categorized by surface structure, and the novices' basic categories were often subordinate categories used by the experts. In agreement with Leinhardt and Smith (1985) was the finding that experts display a more refined structure of knowledge and show multiple linkages among categories of information. Experts seem capable of making inferences from the data, while novices seem more bound to literal interpretations (Berliner, 1986).

Conclusion

Expert instructional developers use what they know of similar instructional development problems to solve new problems.

Gagne and Glaser's (1987) work formulates this conclusion as well. Again, Harmon and King's (1979) notion of cognitive strategies appears in the experts' wide use of relationships among components. It is as though experts perceive problem patterns (Chase & Simon, 1973) and classify problem types (Charness, 1979).

Recommended Questions

We need to begin searching for ways to find out how novices can be helped to move more efficiently from a concrete level of understanding to one based on the principles of instructional development. It would also be useful to know more about the configuration of linkages among categories of information used by expert instructional developers.
Examples of related research questions are: Over time, do novice instructional developers progress through developmental stages ranging from concrete to abstract levels of thinking? If so, can movement across these stages be advanced? How do expert instructional developers conceptually link categories within their knowledge bases?

Question 5

How do expert and novice instructional developers differ in consistency in sequence of problem solving, time spent working on the selected problems, extent of detail generated, and categories imposed across selected problems?

Finding 9

Overall, experts consistently produced a richness of detail not observed in the protocols of the novice instructional developers. They worked at a higher level of sophistication, demonstrating an understanding of principles and relationships among instructional development components. Experts made more inferences from problem statements, and they had a larger repertoire of information, which they were able to describe more accurately and succinctly than did the novices.

These findings were not surprising. In much of the related research, similar findings were observed. For example, Chi et al. (1982) noted that expert and novice responses to physics problems were qualitatively different. Leinhardt and Smith (1985) found that expert teachers exhibited a more elaborate knowledge structure than did novice teachers.

Conclusion

When multiple cases of different problem types are posed to expert and novice instructional developers, certain behaviors persist despite the differences in problem type.

Recommended Questions

This study raises more questions than it answers and now requires both replication for validation and experimentation for theory building. These consistencies provide a rich source for new questions about expertise in instructional development. Among these might be: If this study were replicated with different subjects, how would the findings differ?
Would it make a difference if the same subjects were used, but the problems were changed? Would the results remain consistent if still another problem type were introduced?

Additional Considerations

Some additional points that surfaced during the course of the study and may require further consideration in future studies of this nature are now discussed.

1. Some of the findings may be attributed to individual differences. While controls were used to examine individuals and to report in the conclusions only those findings which did not appear to reflect a personal preference, that possibility cannot be ruled out. These findings show that it is possible that each expert has an expertise within instructional development, and the novice data suggest that levels of expertise may be more pronounced in the more formative stages. It is entirely possible that one or more of the subjects has a preference, for example, for verbal versus visual communication. That, of course, could affect the results of the largely verbal task portions and of the more visual task portion (models sorting) of this study.

2. There did not appear to be any indication that males and females consistently used different approaches to problem solving. However, the sequencing evidence would not rule out that the males used a more linear, step-by-step approach.

3. There is no way of knowing how much of the data produced resulted from guessing, particularly in the complex problem solving tasks. During the card sorts, various comparisons pointed to a guessing factor, since some terms that were not identified as unknown were nonetheless defined incorrectly.

4. Protocols from Expert 1 may reflect her research in instructional development models and her recent experience working as a consultant to a company having problems similar to those described in Problem 3.
This was purely coincidental, and because it was mentioned by Expert 1, it becomes a consideration in the evaluation of data from the first three problems, particularly for the expert instructional developers.

5. Expert 1 consistently stated that the lack of response to her questions was a source of frustration. Her decisions were dependent, she stated, on what she learned while questioning clients during instructional development interactions. Certainly, this is not a factor to be taken lightly. Interaction with the subjects in this study was ruled out in order to remove the possibility of the researcher biasing the direction of responses. A study in a real setting, or one using a set of simulated responses, should be undertaken and compared with this study to find out how responses are changed by client-consultant interactions.

Chapter Summary

Expert instructional developers used a systematic and persistent approach, working within a general instructional development sequence to solve the problems posed in this study. They used more instructional development terminology, cited many instructional development examples, and produced more detail in their responses than did the novice instructional developers. The experts considered multiple solution alternatives and demonstrated a keen awareness of the interrelatedness of components of the instructional development process. They also knew more about models and techniques and how to apply them appropriately. The quantity and quality of information produced by the experts exceeded that of the novices.

Overall, in contrast to the novices, the expert instructional developers in this study used a more systematic problem solving sequence, produced more data and more detail at a higher level of abstraction, and acknowledged the relationships among components of the instructional development process.
This study attempted to begin to find out what instructional developers do and how they do it, by examining qualitative differences between experts and novices in the field. It is hoped that what was learned here about the differences in sophistication between the groups can be used to further the work of describing competencies of instructional developers for the workplace and, more important, that it can be used as a benchmark for the study of improved strategies for enabling the transition from novice to expert instructional developer.

APPENDICES

APPENDIX A

Problems Presented to Subjects

PROBLEM 1

Imagine that you are chairperson of a committee whose five members are from the teacher education department. You are charged with the responsibility of submitting a proposal to the Dean for the design, implementation and evaluation of a new teacher education undergraduate course in tests and measurement. The course should provide students with experience in selecting appropriate methods of testing, writing test items, and interpreting scores on both teacher-made and standardized tests. As you think about this proposal, you reflect on what you know about the situation:

1. About 400 students are enrolled in the teacher education program. It is likely that several class sections will be offered each term.

2. All of the committee members have experience in tests and measurement. That experience ranges from graduate course work in measurement to consulting with standardized testing firms, research in criterion-referenced testing, research in norm-referenced testing and authoring a textbook about exam-writing skills.

3. The course will be required for 2 quarter credits.

4. Students are not required to take prerequisite courses in statistics.

5. A course in tests and measurement is being taught at the graduate level, but in a different department in the college of education.

6.
Students receive an introduction to tests and measurement in their first teacher education course.

7. Committee members were appointed by the Dean, who has freed your schedules so that your committee can meet on a regular basis.

8. Students are grumbling about the pending additional course.

At this point, as committee chairperson, you want to prepare an organizer before your next meeting so that you have a clear idea of the tasks which need to be done in order to complete the proposal. Please describe what you think needs to be done, and why what you have stated is important to the development, implementation and evaluation of the course.

PROBLEM 2

Imagine that you have a position in the Management Training Unit of a statewide food chain. You and the Director of Management Training are charged with the responsibility of submitting a proposal to the executive committee for the design, implementation and evaluation of a new course for trainees and experienced department and store managers. The course is to be entitled "Principles of Supervision" and should include the nature of management, planning, organizing, controlling, performance standards, communication, motivation and improvement of manager effectiveness. As you think about this proposal, you reflect on what you know about the situation:

1. About 750 current full-time managers and manager trainees will be expected to take the course. The course will be offered several times a year at six different sites.

2. The Director of Management Training has a 4-year business degree, five years' experience with the food chain, and had reached the position of store manager. She is, therefore, familiar with store operations.

3. It is possible to arrange schedules so that the course can meet one day a week for eight weeks.

4. Due to the institution of a new policy last year, trainees hired after the policy was adopted must hold a 4-year business degree.
Therefore, you can expect that at least some trainees will have a business degree.

5. Almost all trainees and experienced managers have attended a one-day orientation for new employees.

6. Some experienced managers have been with the food chain for years and have worked their way into management positions based on job performance. These managers are not required to hold a business degree.

7. You and the Director of Management Training were hired as permanent employees because the Personnel Director was unable to devote the time necessary to undertake this task.

8. Managers have reported that employees are unreliable, and day-to-day operations do not allow adequate time for training. Employees complain that the managers don't know how to run a store.

At this point, you want to prepare an organizer before your meeting with the Director of Management Training so that you have a clear idea of the tasks which need to be done in order to complete the proposal. Please describe what you think needs to be done, and why what you have stated is important to the development, implementation and evaluation of the course.

PROBLEM 3

Imagine that you are chairperson of a six-member training committee in an optical company engaged in the production of optical precision instruments. You are charged with the responsibility of submitting a proposal to upper level management for the design, implementation and evaluation of a course for Shop Foremen in the operation of a lens grinding machine which is to be purchased. The course should include set-up, troubleshooting, simple maintenance and how to determine appropriate settings for machine operation. As you think about this proposal, you reflect on what you know about the situation:

1. Initially, there will be twelve trainees, and if the program is successful, 36 additional people will be trained during the next year and a half.

2.
The committee, in addition to yourself, includes a science graduate, the Head of Research and Development, the Head of the Personnel Department, a Shop Supervisor, and the Head of the Production Department.

3. It is expected that in order to gain optimum proficiency, the group of Shop Foremen will need to spend about six weeks in the course.

4. A few Shop Foremen hold a bachelor's degree in either arts or science.

5. All Shop Foremen have attended a two-year management training program sponsored by the company.

6. The Shop Foremen have no experience with this new machine.

7. Because the company exists primarily to manufacture and sell its products, training will not be allowed to interfere unduly with production.

8. The Shop Foremen are reluctant about this training program. In their past experience with the company, they felt the courses were too long, they had no opportunity to participate in the structuring of the courses, and they were given no significant on-the-job training.

At this point, as committee chairperson, you want to prepare an organizer before your next meeting so that you have a clear idea of the tasks which need to be done in order to complete the proposal. Please describe what you think needs to be done, and why what you have stated is important to the development, implementation and evaluation of the course.

SORTING TECHNIQUES

Here is a stack of 27 cards, arranged in alphabetical order. On each card you will find the name of a technique which is used by instructional developers.

1. Please think out loud while you sort the cards into piles according to the kinds of tasks you would need to address while solving instructional development problems. If this is the first time you have heard about a particular technique, the card(s) should be put in a separate pile. Remember, think out loud.

2. For each pile, give me the label of the pile and your reason(s) for including each card in the pile.

3.
Please further sort the cards in each pile into additional piles if you are able to do so. Again, think out loud as you sort.

4. For each new pile, give me the label of the pile and your reason(s) for including each card in the pile.

5. Let's go back to your original pile labels. For each label, I am going to ask you to tell me everything you can think of about that label and how a problem involving use of that particular concept might be solved. For example, one of your labels is ____. Tell me everything you can think of about ____ and give an example of how you might use ____ to help solve an instructional development problem.

SORTING MODELS

Here is a stack of 15 cards. On each card you will find a model which is sometimes used by instructional developers.

1. Please think out loud while you sort the cards into piles according to the kinds of tasks that you would need to address while solving an instructional development problem. If you have no idea what to do with a particular model, put the card in a separate pile. Remember, think out loud.

2. For each pile, give me the label of the pile and your reason(s) for including each card in the pile.

3. Please further sort the cards in each pile into additional piles if you are able to do so. Again, think out loud as you sort.

4. For each new pile, give me the label of the pile and your reason(s) for including each card in the pile.

5. Let's go back to your original pile labels. For each label, I am going to ask you to tell me everything you can think of about that label and how a problem involving use of that particular concept might be solved. For example, one of your labels is ____. Tell me everything you can think of about ____ and give an example of how you might use ____ to help solve an instructional development problem.

APPENDIX B

Bennett's List of Techniques

TECHNIQUES

From: Bennett, Table 4.2, p.
57

Multi-Image/Multi-Media Presentation
Feedback
Needs Assessment
Brainstorming
Story Boarding
Questionnaire
Long-Range Planning
Field Test
Flowcharting
Management by Objectives
Bloom's Taxonomy
Checklists
Literature Search
Programmed Instruction
Formative Evaluation
Role Playing
Sequencing of Objectives
Summative Evaluation
Standardized Tests
Case Studies
Computer Search
Micro Teaching
Task Analysis (Task Description)
Content Analysis
Interviewing Users
Discovery Technique
Appraisal Interview
Criterion-Referenced Measures
Simulation (Gaming)
Computer-Assisted Instruction
Cost-Benefit Analysis
Behavior Modeling
Authoritative Opinion
Program Evaluation Review Technique
Contract Plan
Gagne's Taxonomy
Program Planning Budget System
Linear Programming
Learner Verification and Revision
Likert Scale
Technical Conference
Critical Path Method
Observation
Interview
In-Basket Technique
Cognitive Mapping
Krathwohl's Taxonomy
Delphi Technique
Shaping
Card Sort
Function Analysis
Information Mapping
Discrepancy Evaluation
Instructional Analysis Kit
Decision Tables
Critical Incidents Technique
Nominal Group Process
Stake Model (Evaluation)
Force-Field Analysis
Gantt Chart
Mathetics

APPENDIX C

Classification of Models Used for Models Sorting Task

Models: Reigeluth and Merrill; Blondin; Gerlach, Ely, and Melnick; Banathy; Briggs and Wager; Havelock; IDI; Lippitt and Nadler; Gagne; Davis, Alexander and Yelon; Gentry

Classifications: Instruction, teaching, learning (Trimby & Gentry, 1984); Organizational development (Gustafson, 1981); Classroom (Gustafson, 1981); Small-scale lesson, course, module development (Andrews & Goodson, 1980); Product development (Gustafson, 1981); Teaching instructional development (Gustafson, 1981); Design (Trimby & Gentry, 1984); Adoption (Trimby & Gentry, 1984); Systems development (Gustafson, 1981); Organization (Lippitt & Nadler, 1979); Instruction, teaching, learning (Gagne); Classroom (Gustafson, 1981); Management systems framework (Gentry)

Lee: Needs assessment
(Trimby & Gentry, 1984)

Romiszowski: Course, curriculum, systems (Romiszowski, 1981)

Hamreus: Maxi-management; Mini-client communication (Gustafson, 1981)

Bibliography for Models Used

Banathy, B. H. (1968). Instructional systems. Belmont, CA: Fearon Publishers, Inc.

Blondin, J. (1977). Development leadership. In Assessment and … . Manila: Southeast Asia Instructional Development Institute.

Briggs, L. J., & Wager, W. (1979). Handbook of procedures for the design of instruction (2nd ed.). Florida State University.

Davis, R. H., Alexander, L., & Yelon, S. L. (1974). Learning system design: An approach to the improvement of instruction. New York: McGraw-Hill.

Gagne, R. M. (1985). The conditions of learning (4th ed.). New York: Holt, Rinehart and Winston.

Gentry, C. G. (1980-1981). A management framework for program development techniques. …, 1, 33-37.

Gerlach, V. S., & Ely, D. P. (1979). Teaching and media: A systematic approach (2nd ed.). Florida State University.

Hamreus, D. (1970). … . Teaching Research Publication: A division of the Oregon state system of higher education, 1, 16-18.

Havelock, R. G. (1973). The change agent's guide to innovation in education. Englewood Cliffs, NJ: Educational Technology Publications.

Instructional Development Institute. (1971). IDI Model. UCIDT.

Lee, W. S. (1973). The assessment, analysis and monitoring of educational needs. Educational Technology, 8, 28-32.

Lippitt, G. L., & Nadler, L. (1979). Emerging roles of the training director. In Bell, C. R., & Nadler, L. (Eds.), The client-consultant handbook. Houston: Gulf.

Reigeluth, C. M., & Merrill, M. D. (1979). Classes of instructional variables. Educational Technology, 3, 5-24.

Romiszowski, A. J. (1981). Designing instructional systems. New York: Nichols.
[Figure: A Management Framework — development functions and support functions (Gentry, 1980-81)]

[Figure: Consulting steps for management training plans (Lippitt et al., 1979)]

[Figure: Design, development, and dissemination cycle — "what is" versus "what should be" (Romiszowski, 1981)]

[Figure: Condition variables, method variables, and outcome variables — organizational, delivery, and management strategies with learner, institutional, and instructional outcomes (Reigeluth et al., 1979)]

[Figure: Model of the processes of learning and memory — attention, selective perception, storage, and retrieval (Gagne)]