This is to certify that the dissertation entitled

THE EFFECTS OF INQUIRY METHOD, INFORMATION SHARING AND INTRAGROUP CONFLICT ON GROUP DECISION MAKING

presented by Dennis J. Devine has been accepted towards fulfillment of the requirements for the Ph.D. degree in Psychology.

Major Professor

Date: January 10, 1997


THE EFFECTS OF INQUIRY METHOD, INFORMATION SHARING AND INTRAGROUP CONFLICT ON GROUP DECISION MAKING

By

Dennis J. Devine

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Psychology

1997


ABSTRACT

THE EFFECTS OF INQUIRY METHOD, INFORMATION SHARING AND INTRAGROUP CONFLICT ON GROUP DECISION MAKING

By Dennis J. Devine

Strategic decisions in organizations involve complex, cross-functional issues which have long-term implications for the viability of the organization, and they usually involve groups of individuals from various areas of the organization. However, groups have been shown to be susceptible to process loss, which is viewed in this study as a failure on the part of group members to share information or to adequately utilize information which is shared. Techniques designed to reduce process loss in group decision making have focused on stimulating controversy through the use of inquiry methods such as Devil's Advocacy (DA) and Dialectical Inquiry (DI).
This paper offers an integrative model of group decision making which focuses on group-level information processing, and examines the impact of several methods for structuring group discussion on group process and performance. In general, the inquiry methods used were found to have little impact on group process or performance, but there was some support for the process relationships in the model. Two group-level variables, cognitive ability and task knowledge, were the best predictors of group performance. The discussion offers a revised model and elaborates on future research needed to further specify a model of group decision making in ill-structured contexts.


ACKNOWLEDGMENTS

Over the two years spent working on this project, there were times when I felt I would never finish. I'm sure I'm not the only one. Although the despair and self-loathing that sets in as the months slip by must be common to those pursuing Ph.D.'s (oh, I told myself I wouldn't do this), the shared misery brings little comfort. Dissertations remain personal, dirty little guerrilla wars where motivation is ambushed daily and good intentions are fragged by more hedonistic desires. Still, the war is now over and, according to MSU, I won. I would like to take the time to thank a few of those who helped in the war effort, especially those that are still missing.

To begin with, I would like to thank the members of my dissertation committee -- Dan Ilgen, Neal Schmitt, Steve Kozlowski and Rick DeShon (the Joint Chiefs?). They not only provided a great deal of helpful advice and constructive criticism on this project but also, during my tour of duty at MSU, provided me with four different role-models of how to be a good researcher. I learned something important about being an I/O psychologist from each of them.

I also owe a debt of gratitude to my buddies in the I/O program -- Dave Whitney, Stan Gully, Jen Hedlund, Eleanor Smith, Matt Smith, Field Marshal Herr William von Rogers, David "The Mind" Chan, Ken Brown, Dan Weissbein, and Dave Waldschmidt. These comrades supplied generous quantities of technical advice, sympathetic speculation and emotional support during the long process -- usually, I'm sure, when they were wishing they had chosen the V8. (Note to self: In the future, beware of graduate students hanging out in computer rooms looking bewildered.) It's hard to imagine doing a dissertation -- or even going through graduate school -- without them. Thanks, guys.

Thanks also to Mom, Dad, Cathy and Jim. What good is a Ph.D. without a family to lord it over? Just kidding, of course (I think).

Finally, I would like to thank my wife for putting up with many late nights, lots of meaningless rambling, a good deal of distracted listening and a fair amount of absentmindedness over the two years it took to complete this project. I would also like to thank her for putting up with the late nights, meaningless rambling, distracted listening and chronic absentmindedness. (By the way, I don't think it's going to get any better.) No warning would have been sufficient, but I couldn't have asked for more love and support. She even got in on the end of preparing this manuscript and, in only a short time, learned how comically surreal the formatting process really is. Thank you, Julie.


TABLE OF CONTENTS
List of Tables ............................................................ vii
List of Figures ........................................................... viii
INTRODUCTION .............................................................. 1
Strategic Decision Making ................................................. 2
Strategic Decision Making in Groups ....................................... 4
Process Loss in Strategic Decision Making ................................. 5
A Conceptual Framework for Process Loss ................................... 5
Restricted Information Sharing ............................................ 8
Poor Information Integration .............................................. 16
Summary: Process Loss in Group Decision Making ........................... 23
Controversy in Strategic Decision Making .................................. 25
Controversy-Based Interventions in Individual Decision Making ............ 28
Controversy-Based Interventions in Group Decision Making ................. 43
The Forgotten Role of Synthesis ........................................... 51
Integration ............................................................... 55
A Process Model of Group Decision Making .................................. 55
A Prescriptive Model of Group Decision Making ............................. 61
Summary ................................................................... 66
METHOD .................................................................... 68
Participants .............................................................. 68
Task Overview ............................................................. 68
Research Design ........................................................... 73
Procedure ................................................................. 78
Measures .................................................................. 83
Rater Training ............................................................ 90
RESULTS ................................................................... 94
Overview of Results Section ............................................... 94
Group Attrition ........................................................... 94
Quality of Manipulations and Measures ..................................... 97
Process Hypotheses ........................................................ 109
Inquiry Method Hypotheses ................................................. 113
Summary ................................................................... 116
Exploratory Analyses ...................................................... 118
DISCUSSION ................................................................ 120
Study Contributions ....................................................... 120
Study Limitations ......................................................... 121
The Process Model ......................................................... 126
Dialectical Inquiry in Group Decision Making .............................. 131
Biased Information Sampling and Hidden Profiles ........................... 133
Future Directions ......................................................... 134
Conclusion ................................................................ 138
Appendices
APPENDIX A  SOUTHEAST AIRLINES, INC.: A BUSINESS SIMULATION IN THE AIRLINE INDUSTRY ... 140
APPENDIX B  Instructions to Groups in the Consensus-Seeking Condition ..... 172
APPENDIX C  Instructions to Groups in the Traditional Dialectical Inquiry Condition ... 173
APPENDIX D  Role Instructions Provided to the Vice-Presidents Involved in the Dialectical Inquiry Process (TDI & SDI conditions) ... 174
APPENDIX E  Instructions Provided to Groups in the Synthesis Dialectical Inquiry Condition ... 175
APPENDIX F  Role Instructions Provided to Vice-Presidents in the Synthesis Role (SDI condition) ... 176
APPENDIX G  Information Sharing Check-List Measure ........................ 177
APPENDIX H  Observer Rating Form: Intragroup Conflict ..................... 187
APPENDIX I  Questionnaire Items: Intragroup Conflict ...................... 188
APPENDIX J  Observer Rating Form: Process Facilitation .................... 189
APPENDIX K  Observer Rating Form: Controversy ............................. 190
APPENDIX L  Observer Rating Form: Controversy (Recoded) ................... 191
APPENDIX M  Task Knowledge Measures ....................................... 192
APPENDIX N  Questionnaire Items: Implementation Quality ................... 197
LIST OF REFERENCES ........................................................ 198


LIST OF TABLES

Table 1 - Empirical Studies of Controversy-Based Decision Aids ............ 42
Table 2 - Information Provided to Vice-President Positions in "SouthEast Airlines" ... 71
Table 3 - Role Assignments by Study Condition ............................. 77
Table 4 - Sequence of Events in "SouthEast Airlines" ...................... 79
Table 5 - Cell Sizes for Study Conditions and Statistical Analyses ........ 96
Table 6 - Interrater Reliability Estimates for Process Measures ........... 99
Table 7 - Multivariate Analysis of Potential Confound Variables ........... 106
Table 8 - Means, Standard Deviations, Intercorrelations and Reliabilities for Measured Variables ... 107
Table 9 - Hierarchical Moderated Regression Results for Hypothesis 1 ...... 110
Table 10 - ANOVA Summaries, Hypotheses 6-8 ................................ 115
Table 11 - Cell Means and Standard Deviations, Hypotheses 6-8 ............. 115


LIST OF FIGURES

Figure 1 - The Moderating Effect of Intragroup Conflict ................... 59
Figure 2 - A Process Model of Group Decision Making ....................... 62
Figure 3 - A Prescriptive Model of Group Decision Making .................. 65
Figure 4 - Path Estimates for the Process Model ........................... 117
Figure 5 - Information Sharing x Intragroup Conflict Interaction .......... 128
Figure 6 - A Revised Model of Group Decision Making ....................... 130


INTRODUCTION

Groups and teams are ubiquitous in society and increasingly employed by organizations in the form of production teams, quality circles, committees, advisory boards, task forces, project groups and semi-autonomous work groups. A strong and enduring interest in group and team performance has generated a voluminous literature and a number of reviews (e.g., Cartwright & Zander, 1968; Steiner, 1972; Shaw, 1981; McGrath, 1984; Levine & Moreland, 1990; Bettenhausen, 1991). As a result, we know a great deal about the factors that determine their effectiveness (Hackman, 1990). Recently, a distinction between "groups" and "teams" has been made by several reviewers (e.g., Salas, Dickinson, Converse, & Tannenbaum, 1992; Ilgen, Major, Hollenbeck & Sego, 1993; Saavedra, Earley & Van Dyne, 1993). However, the distinction has not been honored by most of the literature, which has tended to use the terms interchangeably (Ilgen et al., 1993). As a result, the terms "groups" and "teams" are used synonymously in this paper to refer to sets of (1) three or more individuals who (2) interact around a shared goal and (3) are interdependent in some fashion (Steiner, 1986).

Although groups and teams are charged with many different tasks and responsibilities within organizations, one of their most common functions is to make decisions and solve problems that arise. Typical decision situations involve the division of work assignments, the sequencing and coordination of activities, purchasing materials, and the allocation of resources. These sorts of decisions are generally encountered on a regular basis, so organizations develop rules, procedures and policies to standardize and expedite their resolution (Katz & Kahn, 1976; Galbraith, 1973; Taylor, 1992).

Strategic Decision Making

Many decision situations of substantial importance to organizations are what some have called "strategic" in nature. These decisions differ from ordinary decisions in that they often cannot be forecasted, have important consequences for the entire organization and involve uncertain relationships between important variables (Taylor, 1992). According to Taylor (1992), strategic decision making has its roots in the fields of (1) individual decision making in organizations and (2) business policy and strategic management. A cursory review of the literature reveals almost as many definitions of strategic decision making as there are textbooks and papers on the subject (Taylor, 1992).
Mintzberg, Raisinghani and Theoret (1976) examined 25 decisions in a variety of organizations and concluded that strategic decision processes are "characterized by ambiguity, novelty, complexity and open-endedness" (p. 250). Mason and Mitroff (1981) characterized strategic decisions as involving many complicated linkages between the organization and a dynamic and uncertain environment, ambiguous information, and conflicting goals among interested parties. Shirley (1982) described strategic decisions as those involving the entire organization and its environment and as ones which direct and constrain future activities important to the success of the enterprise. According to Shirley, strategic decisions involve such things as the organizational mission, goals and objectives, customer mix, product line, geographical service area, competitive advantage and activities with other organizations. Narayanan and Fahey (1982) defined strategic decisions as involving important resources or activities for which there are no precedents or predetermined responses. Thomas (1984) noted that strategic decisions have little initial structure, long time horizons, political implications, and a sensitivity to environmental dynamics, and affect multiple areas of an organization. Pearce and Robinson (1988) identified strategic decisions as those made by top management involving considerable company resources that commit a firm to a course of action with important implications for future profitability and require the coordination of many functional areas and factors external to the organization. Taylor (1992) stated that strategic decisions involve novel, ill-structured and complex sets of interdependent decision problems.

A number of themes emerge from these various definitions. A dominant theme expressed by most is the importance of strategic decisions to the future well-being of the organization. Another characteristic is their infrequent, irregular, often unpredictable occurrence. Most reviewers also noted that strategic decisions affect and pertain to multiple areas, functions or departments within an organization. Finally, a fourth critical characteristic separating strategic decisions from other important everyday decisions is their lack of structure (Ackoff, 1974; Mintzberg et al., 1976; Mason & Mitroff, 1981; Thomas, 1984; Taylor, 1992).

For the purposes of this paper, a decision situation can be said to be ill-structured when it involves ". . . decision processes that have not been encountered before in quite the same form and for which no predetermined and explicit set of ordered responses exists in the organization" (p. 246, Mintzberg et al., 1976). Moreover, ill-structured decisions involve uncertainty with regard to the relationships between variables in the organization and environment as well as interdependency between those variables (Hirokawa, 1990). As a result, the defining aspect of an ill-structured decision is the relevance of multiple viewpoints or perspectives on what needs to be accomplished and how to go about doing it. For example, before a company decides whether to come up with a new company logo, it must first decide why a logo is important to company goals, how much impact the characteristics of the logo will have on those goals, and then identify the various elements that make a logo "good."

A primary purpose of this study is to better understand the factors which operate to affect group strategic decision making performance.
As noted previously, strategic decisions affect the entire organization, have no precedent, involve multiple functions and departments, and are ill-structured. Typically, multiple perspectives can be applied in addressing a strategic decision or problem, as well as a wealth of organizational and environmental data. Because of the above characteristics, "strategic" decisions are usually made by groups of people rather than individuals (Mintzberg et al., 1976; Quinn, 1980; Fahey, 1981; Schweiger, Sandberg & Ragan, 1986; Walsh, 1986).

Strategic Decision Making in Groups

In their review of decision making in organizations, Koopman and Pool (1990) note that "complex decisions are usually not made by individuals, but - after the necessary preliminary work - by boards of directors, project teams, management teams and so on" (p. 107). They also suggested that the participation of multiple organizational members from several departments ideally yields a decision that is well-considered and well-analyzed. However, there is a great deal of anecdotal evidence which suggests groups often make decisions that are less than optimal - sometimes with disastrous consequences (Janis, 1972; Janis & Mann, 1977; Ilgen et al., 1993). A number of case studies in the literature have linked a variety of well-known "fiascoes" in world history to sub-optimal small group decision making, including poor decisions regarding the appeasement of Hitler at Munich, the escalation of U.S. involvement in Vietnam, the Bay of Pigs invasion, the break-in at Watergate, the launch of the space shuttle Challenger, the attack on the USS Stark, and the shooting down of an unarmed Iranian airliner in the Persian Gulf. These and other daily events provide a constant reminder of the need (and room) for improvement in strategic decision making by groups.

If we are to improve group decision making quality, it is necessary to understand the factors that affect group performance. The dominant theoretical approach to understanding group performance was formulated by Steiner over 20 years ago and is centered around the notion of "process loss." The next section provides an overview of Steiner's (1972) work on group performance and process loss, identifying two specific kinds of process loss relevant to strategic decision making teams: restricted information sharing and inadequate utilization of shared information.

Process Loss in Strategic Decision Making

Early research in social psychology painted a rosy picture of the efficacy of decision making in groups, showing them to be more effective than individuals (e.g., Shaw, 1932). However, using the notion of nominal "statisticized" groups, Marquart (1955) and Lorge and Solomon (1955) showed that groups, in which each member had an equal and independent probability of solving the task, appeared to perform better than individuals simply because they were made up of more individuals. If the same number of individuals working alone was compared to those working together, the advantage of groups disappeared. Since that time, a conclusion reached by several reviewers of the sizable literature on small group performance is that groups do not perform as well as their best individual member (e.g., Steiner, 1972; Hill, 1982). Although the relationship between individual performance and group performance is strongly dependent on task type (McGrath, 1984), across a wide variety of tasks and settings, groups tend to perform better than their average members but not as well as their best member (Hill, 1982).
This suggests that groups do not combine their resources in an optimal fashion. The decrement between potential productivity and actual productivity has been termed "process loss" (Steiner, 1972).

A Conceptual Framework for Process Loss

In 1972, Ivan Steiner conducted a review and re-interpretation of the group performance literature based on what he called a partial taxonomy of tasks and an input-process-output approach. Steiner differentiated group tasks on the basis of whether or not group members could divide up tasks so that not all members were working on the same product. Whereas divisible tasks can be broken down into subtasks and the final product reassembled by combining the sub-products of the various group members, unitary group tasks involve a single group product that cannot be distributed among members and recombined. Steiner went on to assert that task type is a primary determinant of how individual inputs are combined to form a unitary group product. For unitary tasks, the group product is essentially a function of some selected individual in the group. According to Steiner, the function relating individual performance to group performance is inherently different across task types.

Steiner focused his analysis on unitary tasks, identifying four types of such tasks and corresponding "rules" regarding which individual within the group would provide the group's final product: disjunctive, conjunctive, additive and discretionary tasks. Disjunctive tasks allow the group to utilize the efforts of their best member as the group product (e.g., spelling a difficult word, answering a history question). In conjunctive tasks, the nature of the task is such that group performance becomes a function of the worst member of the group (e.g., the time taken for a group to cross a river). Additive tasks utilize the inputs of every group member, resulting in group performance becoming a function of the average member. Finally, discretionary tasks allow the group freedom to determine how individual inputs will be transformed into a group product - according to disjunctive, conjunctive or additive rules.

A primary contribution made by Steiner was his formalization of the manner in which group inputs are converted to outputs. According to Steiner, the actual observed productivity of a group is equal to its potential productivity minus process loss. "Process loss" is viewed as the additive result of two phenomena - coordination loss and motivation loss. Coordination loss refers to a decrement in process that arises from the need to coordinate the efforts of multiple persons, for example, when a group of people participating in a tug-of-war must synchronize their tugs. In contrast, motivation loss refers to an unwillingness on the part of individuals to contribute their efforts to the group product.

Since Steiner's work on the notion of "process loss," a great deal of research has shown that groups can suffer from process loss in several ways. For example, Latane and his colleagues have extensively studied the "social loafing" effect (Latane et al., 1979; Latane, 1986). Social loafing is a type of motivation loss represented by a decreasing output-per-person ratio as group size increases. Social loafing appears to be a robust phenomenon, having been documented across a variety of physical tasks (i.e., clapping, shouting, rope pulling) and even cultures. Motivation loss and coordination loss seem well-suited to explaining process loss in groups performing tasks with some physical component.
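Steiner's framework can be summarized compactly. The following is a minimal restatement of the relationships described above; the abbreviations are introduced here only for convenience and are not Steiner's (1972) own notation:

\[
AP = PP - PL, \qquad PL = CL + ML
\]

where AP is actual productivity, PP is potential productivity, PL is process loss, CL is coordination loss and ML is motivation loss. For unitary tasks, PP itself depends on task type: for a disjunctive task it is set by the best member, for a conjunctive task by the worst member, and for an additive task by the combined contributions of all members (equivalently, a function of the average member).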
However, more relevant to group decision making settings is a conceptual framework offered by Larson and Christensen (1993), which identifies six general functions that occur in group problem solving and that potentially involve process loss: problem identification, problem conceptualization, acquiring information, storing information, retrieving information, and integrating information. They point out that, with respect to group problem solving, process loss is conceptually equivalent to inefficiency in the mechanisms by which information known to individuals in the group is converted into a knowledge structure characteristic of the group as a whole. Problem identification is seen as the collective recognition by all members of the group that a problem exists which must be addressed. The process by which group members come to agree on the way a perceived problem is categorized is problem conceptualization. The third function, acquiring information, is the process by which groups identify and attain information perceived to be task-relevant. After individuals acquire information, it must typically be stored in some fashion before it can be shared with other group members. Once in the group setting, information must then be retrieved before it can be shared with other members. Finally, after information has been shared with other group members, it must be integrated and implemented in a collective decision.

Larson and Christensen (1993) note that little work on groups has been conducted focusing on how individual processes are related to group-level processes and outcomes. Their "information retrieval" function is an exception to this rule. "Information retrieval," or essentially the sharing of individual information with the entire group, has previously been identified and studied from two different theoretical perspectives. The next section reviews research conducted within these two paradigms that examines defects in group decision making that result from inadequate information sharing.

Restricted Information Sharing

Schweiger, Sandberg and Ragan (1986) note that group processes do not ensure that managers will adequately explore available information necessary for making good decisions. One way this can happen is when information known to individuals is not made available to the group as a whole. Essentially, if individuals in the group are unwilling or unable to share relevant information not known to other members, that information cannot be used by other group members. Restricted information sharing between individuals and other group members has been studied from two vantage points in the literature: the "groupthink" syndrome (Janis, 1972; Janis & Mann, 1977; Janis, 1982) and the "biased information sampling" effect (Stasser & Titus, 1985; Stasser & Titus, 1987). Groupthink is thought to result from an overriding desire to preserve positive relations among group members, which leads to a restriction in the amount and type of information considered by the group. In the case of biased information sampling, groups center their discussion around information that group members hold in common. Both of these types of process loss result in the failure on the part of one or more group members to share potentially valuable information. The next two sections review the literature with respect to these two phenomena.

Groupthink. Groupthink is a term that seemingly needs little introduction.
In a 1972 seminal work entitled "Victims of Groupthink," Irving Janis coined the term "groupthink" and defined it as "a mode of thinking that people engage in when they are deeply involved in a cohesive in-group, when the members' strivings for unanimity override their motivation to realistically appraise alternative courses of action" (p. 9). Taylor (1992) defined groupthink as "a collective pattern of defensive bolstering used by a decision making group to shield itself from negative information and criticism" (p. 980). In general, the term has been used to explain ". . . why people in authority frequently act contrary to enlightened self-interest by making decisions that are likely to be counterproductive" (Whyte, 1989).

Janis and his colleagues (Janis, 1972; Janis & Mann, 1977; Janis, 1982) have articulated a model of groupthink that has changed little in over 20 years. Janis (1972) identified five antecedents of groupthink, eight "symptoms" of groupthink and seven defective decision making processes. In essence, the theory suggests that high levels of the various antecedents lead to the exhibition of a number of groupthink symptoms and the use of defective decision processes. Groups suffering from groupthink accordingly make decisions which have a low probability of a successful outcome.

Janis classified the five antecedents of groupthink into three primary categories: group cohesion, structural faults of the group and a provocative situational context. Group cohesion referred to the level of interpersonal attraction and the extent to which the group was "close-knit." Included in the category of "structural faults" were factors such as insulation from the outside world, lack of an impartial leader, lack of procedural norms and excessive member homogeneity. Among the elements of a provocative situational context, Janis identified two focal aspects, high stress and low group "self-esteem." Stress was thought to result from perceived external threat and low hope of finding a better alternative than the existing plan. Low group self-esteem was thought to result from inadequate efforts to resolve the problem in the recent past.

In essence, when levels of group cohesion and situational stress are high and structural aspects of the group are such that there is pressure to suppress individual dissent and come to a quick resolution, groupthink is hypothesized to manifest itself through a variety of "symptoms." These symptoms of groupthink include the illusion of invulnerability, moral certainty, collective group rationalization, stereotyped conceptions of the "opponent," self-censorship within the group, the illusion of unanimity, pressure on dissenters and the use of "mindguards" to ward off anxiety. In terms of actual decision making behaviors, Janis (1977, 1982) noted seven process characteristics associated with groupthink: an incomplete identification of alternatives and objectives, a failure to re-examine the preferred choice or rejected alternatives, poor information search, biased and selective information consideration, and a failure to develop contingency plans.

Despite the popularity of groupthink as an explanation of how limited information processing leads to poor group decisions, relatively little empirical research has addressed the groupthink phenomenon. Mullen and Copper (1994) counted more reviews of the literature than empirical studies.
In general, the research on groupthink can be divided into two categories: retrospective analyses of decision making episodes and laboratory studies. Several early studies of groupthink utilized a case study approach to examine the decision processes behind major policy decisions in history (e.g., Janis, 1972; Janis & Mann, 1977; Tetlock, 1979; Janis, 1982; Smith, 1984; Hensley & Griffin, 1986; Esser & Lindoerfer, 1989). In a comprehensive review of the groupthink literature, Aldag and Fuller (1993) identified six retrospective case studies and nine empirical studies pertaining to groupthink. In general, these post-hoc analyses have tended to find support for the groupthink model (Aldag & Fuller, 1993).

In contrast, studies of the groupthink phenomenon conducted in laboratory settings have been less supportive (Aldag & Fuller, 1993; Park, 1989). A number of studies have attempted to manipulate levels of group cohesion along with other structural or situational variables such as leader style/behavior (Flowers, 1977; Fodor & Smith, 1982; Leana, 1985), the desirability of conflict (Courtright, 1978), and task type (Calloway & Esser, 1984). For the most part, these studies have used groups of undergraduates and tasks involving hypothetical business scenarios or complex management simulations (although see Calloway & Esser, 1984, for a notable exception). A recent meta-analysis examining group cohesion and the quality of decision making found that high levels of group cohesion are necessary but not sufficient to produce characteristics of groupthink (Mullen et al., 1994).

Of particular interest to this study are two experiments which examined the effect of groupthink antecedents on information sharing among group members. Fodor and Smith (1982) examined the effects of group cohesion and leader "need for power" on several process and outcome measures for groups of undergraduates solving a business scenario. Information relevant to the task was divided and distributed among group members, with a "self-censorship" variable created to measure the number of information cues group members shared from their respective "role sheets" during discussion. One experimenter observed each group during the study and coded the number of facts introduced by each member during discussion. The combined number of facts introduced by groups was found to be strongly and positively related to the number of alternatives considered (r = .71). Although Fodor and Smith found that groups introduced more factual information into discussion when group leaders had a low need for power, group cohesion was not found to have a significant effect on the number of facts introduced into discussion by group members.

Another study by Leana (1985) also examined the effect of group cohesion and leader behavior on group processes and outcomes. Groups of undergraduates were given the task of solving a hypothetical business problem, with each individual assigned to a specialized role with access to unique task-relevant information. Experimental sessions were tape recorded and subsequently coded for the number of facts introduced from the player role sheets. Unlike Fodor and Smith (1982), a leader characteristic (leadership style) did not have an effect on the number of facts introduced, but group cohesion was found to be related to information sharing. However, contrary to the groupthink model, cohesive groups shared over 50% more information than noncohesive groups.
Summary. The groupthink model has maintained its appeal in spite of operational difficulties in testing it and limited empirical support (Moorhead, 1982; Tetlock et al., 1992; Park, 1990; Aldag & Fuller, 1993). Retrospective case studies have tended to support the central tenets of groupthink, whereas laboratory studies which have focused on testing the linkages between the proposed causal antecedents and the symptoms of defective decision making have not (Aldag & Fuller, 1993). In keeping with the general lack of empirical attention to the overall groupthink model, few studies have directly addressed the role of information sharing (i.e., "self-censorship" in groupthink terminology) in group decision making. Two studies that did look at information sharing (i.e., Fodor & Smith, 1982; Leana, 1985) came to essentially opposite conclusions about the antecedent factors that lead to low information sharing (cohesion as opposed to leader style), but neither study explicitly measured the relationship between self-censorship and group decision making performance. Thus, despite its central role as a process defect in Janis' model, the role of information sharing is unclear. The current study will address this omission by directly measuring information sharing and assessing its relationship with group performance. In the next section, we review literature on a second type of process loss associated with reduced information sharing -- biased information sampling.

Biased information sampling. A common technique used by groups to exchange information, resolve differences and make decisions is face-to-face discussion among group members (Schweiger et al., 1986). Recent theoretical developments concerning how information is sampled by groups during discussion have led to a formal model of information sampling by groups (Stasser & Titus, 1985). In essence, information sampling theory predicts that information is sampled on a probabilistic basis by groups during group discussion. An item of information is "sampled" by a group when an individual with access to that information shares it with the group as a whole. The probability that a given piece of information will be sampled is a function of the number of people who have access to that item. Assuming a constant probability that any given group member recalls and shares any particular piece of information he or she possesses, the chances of a given piece of information being mentioned increase (a) with the number of individuals within the group who have access to the information before group discussion and (b) as the size of the group increases.

A significant and unfortunate implication of information sampling theory is that group discussion tends to focus on shared information known to all or most group members. In other words, potentially important unshared information is less likely to be mentioned to the group and therefore remains "unusable." According to the theory, the bias in favor of sampling previously-shared information increases with group size. The predictions of information sampling theory with respect to biased information sampling have been supported across a number of empirical studies (Stasser & Titus, 1985; Stasser & Titus, 1987; Stasser, Taylor & Hanna, 1989; Stasser & Stewart, 1992).
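The sampling mechanism just described can be stated compactly. The following is a minimal formalization consistent with the description above; the symbols p and n are introduced here for illustration and are not drawn from the original presentation of the model. If each member who holds an item of information has an independent, constant probability p of recalling and contributing it, and n members hold the item before discussion, the probability that the item is mentioned at all is

\[
P(\text{mentioned}) = 1 - (1 - p)^{n}.
\]

With p = .40, for instance, an item held by all six members of a group is mentioned with probability 1 - (.60)^6, or roughly .95, whereas an item held by a single member is mentioned with probability .40. The numbers are purely illustrative, but they show why discussion gravitates toward information that members hold in common and why the bias grows with group size for fully shared items.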
Stasser and Titus (1985) constructed a task in which three hypothetical "candidates" were running for student body president and asked group members to decide which candidate was most suitable for the job. When all information for each of the three candidates was available to all group members, 83% of the groups preferred Candidate A. When the available information on each candidate was divided and given to different group members, only 18% of the groups chose Candidate A. Stasser and Titus refer to this as a "hidden profile." Since individuals in the unshared information conditions collectively possessed all available information, the most likely explanation for the difference in preference rates is a failure to share needed information. Free recall data gathered before and after group discussion indicated that group members "learned" very little unshared information during the course of group discussion. Stasser and Titus (1987) replicated this finding and found that substantial gains in learning unshared information were made only when there were very low levels of unshared information in the group prior to discussion. However, neither study directly measured information sharing by groups, nor was it logically possible to determine the detriment of restricted information sharing to group performance, since there was no way to identify which alternative was best for the job.

Stasser et al. (1989) extended the two earlier studies by directly measuring information sharing during group discussion. Using the same candidate selection task and audio recordings of group discussion, they found that, across all conditions, group discussions included 46% of the shared information cues but only 18% of the unshared information cues. As predicted by the model, the sampling bias in favor of shared information was greater in larger groups (six-person versus three-person groups) and when a higher percentage of information was known to multiple group members before discussion (66% versus 33%). In addition, shared information was significantly more likely than unshared information to be repeated in group discussion, resulting in only about 5% of unshared information being considered and then reconsidered during discussion. Furthermore, Stasser et al. found that specific instructions to examine all available information before making a decision led to an even stronger bias in favor of previously shared information.

Stasser and Stewart (1992) followed up these studies by investigating the effects of task type. In this study, groups of students were asked to read a murder mystery after being led to believe that they either did (solve set) or did not (judge set) have sufficient information to solve the crime. When critical cues were unshared before group discussion, only 35% of the judge set groups identified the correct murderer, whereas 67% of the solve set groups were able to identify the proper culprit.

The findings of Stasser and his colleagues suggest that group discussion is not a good device for disseminating unshared information. As Stasser et al. (1989) noted, groups tend to focus their discussion on what is already known, avoiding the consideration of potentially relevant unshared information. With respect to ill-structured strategic decisions, it may be the case that groups do not optimally use idiosyncratic information held by individual "experts." Given that strategic decision making groups are composed of diverse "experts" precisely because they are expected to share unique information and assumptions, it may be that strategic decision making groups do not utilize their primary potential advantage. The sketch below shows how a hidden profile can arise simply from the way information is distributed across members.
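To make the hidden-profile idea concrete, the following sketch builds a toy version of the paradigm in Python. The member names, item labels and item counts are invented for illustration only; they are not the distributions used in the Stasser and Titus studies.

    # Toy hidden profile: Candidate A is objectively better (more distinct positive
    # items in total), but A's positives are unshared (each known to one member),
    # while Candidate B's positives are shared (known to every member).
    from itertools import chain

    members = {
        "member_1": {"A": ["A+1", "A+2"], "B": ["B+1", "B+2", "B+3"]},
        "member_2": {"A": ["A+3", "A+4"], "B": ["B+1", "B+2", "B+3"]},
        "member_3": {"A": ["A+5", "A+6"], "B": ["B+1", "B+2", "B+3"]},
    }

    # Before discussion, each member privately sees 2 positives for A and 3 for B,
    # so every individual leans toward Candidate B.
    for name, info in members.items():
        print(name, {candidate: len(items) for candidate, items in info.items()})

    # Pooling everything the group collectively knows reverses the preference:
    # A has 6 distinct positives, B only 3.
    pooled = {
        candidate: len(set(chain.from_iterable(m[candidate] for m in members.values())))
        for candidate in ("A", "B")
    }
    print("pooled:", pooled)

If discussion then samples mostly shared items, as the sampling model above predicts, the group never assembles Candidate A's pooled advantage, which is precisely the pattern of "hidden" superiority that Stasser and Titus observed.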
Summary: Restricted Information Sharing

Past research and theory on group decision making have identified two mechanisms leading to process loss in the form of a failure to exchange necessary task-relevant information: groupthink and biased information sampling. Both theories share a cognitive focus centered around two assumptions: (1) the quality of group decision making is a positive function of the amount of information known by all group members, and (2) individuals bring different informational resources to the group decision making context. Although little research has directly addressed the role of restricted information sharing in contributing to poor group decisions, both models highlight the importance of information sharing among group members. Those studies that did directly measure information sharing among group members were not in a position to relate it to decision making quality (Fodor & Smith, 1982; Leana, 1985; Stasser et al., 1989). Stasser and his colleagues have found that groups often do not end up discussing information known only to one or a few group members.

To this point, the discussion has focused on a type of process loss that arises when group members fail to share information necessary for optimal decision making. However, in order for relevant information to translate into good decisions, that information must be effectively utilized once it has been shared. A second type of process loss in group decision making occurs when group members share critical information but disagree on the manner in which the information should be used. Such disagreement may lead to conflict among group members which prevents the optimal integration of information. In the next section, we review literature on the role of intragroup conflict in inhibiting the integration of information provided by individual group members.

Poor Information Integration

One of the six processes noted by Larson and Christensen (1993) as necessarily occurring in group problem solving is the integration and combination of information available to the group. Although there are a number of factors that may lead groups to process information in a less than optimal fashion, we review evidence in this section suggesting that one of the most common of group phenomena, conflict among group members, may lead to sub-optimal integration of shared information. Experimental social psychology has a long tradition of interest in conflict, and a number of research paradigms (e.g., Prisoner's Dilemma, Resource Conservation, Coalition Formation) have arisen around the study of particular types of intragroup conflict (Argote & McGrath, 1993). Unfortunately, little empirical research has addressed the effects of intragroup conflict on group decision making.

The Nature of Conflict. Conflict is so ubiquitous that it is considered by many to be a fundamental characteristic of human interaction (Pruitt & Rubin, 1986; Patton et al., 1991). Pruitt and Rubin (1986) have called attention to the fact that several of the most influential theorists of the twentieth century (e.g., Darwin, Marx, Freud) based their work around the notion of conflict. However, at the same time, they note the term has been used so broadly and in so many different contexts that it is in danger of losing its status as a singular concept. What exactly is "conflict?" Tillett (1991) provided the following overview: "Conflict is an inevitable and pervasive aspect of human life. . . . It arises within individuals and between individuals.
It takes place within and between groups, organizations, communities and nations. Conflict occurs at home and at work and in the neighborhood. . . . Conflict is sometimes violent, but more often not. . . . Much of it exists within the mind and is expressed in words . . . . Conflict is popularly equated with fighting, and is generally seen as destructive, unpleasant and undesirable. It is usually suppressed, avoided, concealed or fought over . . ." (p. 1).

The American College Dictionary defines conflict as "a state of disharmony between incompatible or antithetical persons, ideas, or interests; a clash." In a similar fashion, Webster's Dictionary defines conflict as "a sharp disagreement or opposition, as of interests, ideas, etc." Conflict is also perceived as synonymous with a number of other words, including destruction, anger, disagreement, hostility, war, anxiety, tension, alienation, violence, competition, dissension, strife, friction, dispute, quarrel and fight (Frost & Wilmot, 1978; Tillett, 1991). As such, it is clear that conflict in a popular sense is equated with negative emotions and hostile behavior.

Further, Tillett (1991) has pointed out that conflict can occur at different levels of society. Interpersonal conflict exists between two or more people, while intragroup conflict exists within a group and involves two or more subgroups within the larger group, where subgroups may be as small as a single individual. Intergroup conflict, or conflict between groups, has perhaps received the most attention in the literature. Finally, conflict can also occur at the international level between two or more countries. Although it seems reasonable to believe that conflict has a number of basic attributes common to any level, it is quite likely that the antecedents and consequences of conflict differ according to level.

Given conflict's many connotations, it is not surprising that researchers have defined the term in different ways. Adding to the confusion, some researchers have defined conflict in general terms while others have focused on conflict at a particular level. Coser (1967) defined conflict as "a struggle over values and claims to scarce status, power and resources in which aims of the opponent are to neutralize, injure or eliminate the rivals" (p. 8). Similar to Coser's definition, Deutsch (1973) offered a widely-adopted definition of conflict that focused on "incompatible activities" between two or more parties, where "incompatible activities" were seen as those that prevent, obstruct, interfere with, injure or make the actions of another party less effective. Mack and Snyder (1973) noted that conflict includes two or more interacting parties, positional or resource scarcity, attempts to acquire or exercise power, and behaviors intended to injure, thwart and control others. Frost and Wilmot (1978) defined conflict as "an expressed struggle between at least two interdependent parties, who perceive incompatible goals, scarce rewards and interference from the other party in achieving their goals" (p. 9). Pruitt and Rubin (1986) defined conflict as a "perceived divergence of interest, or a belief that the parties' current aspirations cannot be achieved simultaneously" (p. 4). Burton (1988) described conflict as "a relationship in which each party perceives the other's goals, values, interests or behaviour as antithetical to its own" (p. 11).
Tillett (1991) defined conflict as existing "when two or more parties perceive that their values or needs are incompatible" (p. 7). Finally, Zander (1994) noted that a situation involves conflict "if two or more parties disagree over what the other ought or ought not to do -- when each side knows what should be said or done and knows that opponents' views are wrong" (p. 112). There are a number of common elements that stand out in these definitions: perceived differences in values, needs or goals; scarce resources; and incompatible or hostile behaviors.

The various definitions reviewed above suggest that conflict is a complex construct characterized by phenomena on three general dimensions: affective, behavioral and cognitive. Conflict involves strong negative emotions which are experienced as distinctly unpleasant by individuals in the group. These negative affective reactions include tension, frustration, anger, fear, hopelessness, wounded pride and even depression (Tjosvold, 1985; Pruitt & Rubin, 1986; Bettenhausen, 1991; Tillett, 1991). Tillett (1991) noted the presence of strong negative emotions as something that distinguishes conflict from more mundane problems and disputes. With respect to behaviors, conflict clearly involves a set of negative interpersonal behaviors on the part of one or more group members that can be interpreted as hostile or degrading by other members (Coser, 1967; Deutsch, 1973; Frost & Wilmot, 1978; Zander, 1994). Finally, with respect to cognition, conflict is associated with distortions in the thought patterns of group members. In particular, conflict may invoke irrational cognitive processes that result in over-simplification, exaggeration, extreme generalization, defense mechanisms, and rigid and inflexible adherence to one's beliefs (Frost & Wilmot, 1978; Pruitt & Rubin, 1986).

It is important to recognize, however, that these definitions represent attempts to define conflict in general terms as opposed to conflict at any particular "level" identified by Tillett (1991). While there is certainly some value in this, it is also possible to define conflict in a more level-specific fashion that retains the essential elements of the construct while introducing aspects of conflict that may be unique to a particular context (i.e., level).

Conflict Within Groups. Combining the basic features of conflict with these more observable features of conflict within groups, intragroup conflict can be defined as a group-level process involving: (1) negative affect, (2) hostile or degrading interpersonal behaviors, and/or (3) irrational and/or non-task-related cognitive processing on the part of two or more group members as a result of interpersonal interaction. It is assumed that conflict among two or more group members can affect the entire group. In general, intragroup conflict can inhibit group-level processing of information by directly absorbing group resources in the form of collective discussion time and individual cognitive resources. For instance, individuals arguing about what the group should do may take up an excessive amount of a group's available "air time," preventing constructive discussion of other issues. Arguments and overt conflicts may also serve as a distraction, focusing group members' attention on their own reactions or on ways the conflict can be reduced, circumvented or even heightened.
As a result, intragroup conflict does not have to involve all group members (or, for that matter, even be perceived by all group members) to affect group-level information processing.

Conflict can arise within groups or organizations in any number of ways. Patton et al. (1991) stated that conflict can arise from almost any change, noting, "Any perceived changes, ranging from leadership roles to group structure to activities to new membership, may provoke conflict. The conflicts are inevitable; the nature of the group will determine whether they are handled openly or reduced to the level of a hidden agenda" (pp. 119-120). Zander (1994) noted that conflicts within organizations can arise from differences over such things as the unwillingness of members to accept leaders, budget allocations, who should speak for the unit, how lawsuits should be settled, strategic planning, and the relative attention paid to various departments. Furthermore, Zander identified a number of conditions that increase the potential for conflict, including a lack of systematic procedures for the group, scarce resources, multiple plans or goals for the group among group members, and confrontational procedures such as devil's advocacy.

Conflict within groups develops through various stages or steps in a sequence. Tillett (1991) noted that intragroup conflict is characterized by an ongoing sequence of problems or disputes over seemingly minor points or issues that bring on irrational, emotional or extreme reactions on the part of involved group members, as well as excessive discussion or argument. Zander (1994) described the following sequence in the development of intragroup conflict: (1) Members recognize they disagree, (2) They confront and attempt to persuade one another, (3) Initial positions "harden," (4) Rationality in thinking and communication decreases, (5) Members show hostility, (6) Coercion and intimidation tactics make their appearance, (7) Language becomes "stronger," (8) Members begin to feel and exhibit a lack of trust towards other members and, finally, (9) Emotional waning occurs as members become fatigued.

Similar to the pattern identified by Zander (1994), Pruitt and Rubin (1986) identified a number of general transformations that take place within groups experiencing conflict, including "Light -> Heavy," "Small -> Large," "Specific -> General," "Doing Well -> Winning -> Hurting Opponents," and "Few -> Many." "Light to Heavy" refers to the tendency for group members to use increasingly direct, overt influence tactics. "Small to Large" addresses the tendency for issues to proliferate once initial differences are discovered. "Specific to General" corresponds to the general trend for issues to increase in scope, while "Doing Well to Winning to Hurting" characterizes the shift in group members' goals from those that focus on group well-being to a relatively narrow vindictiveness. Finally, "Few to Many" represents the tendency for conflicts to draw in neutral "observers" and so increase in size.

One of the fundamental dynamics of groups experiencing conflict is a heightened sensitivity to the possible loss of "face" (Pruitt & Rubin, 1986; Tillett, 1991). Pruitt and Rubin (1986) describe how changes take place within the group centered around this dynamic: "Zero-sum thinking develops - it's either victory for them or victory for us. New goals come to the fore: to look better than, punish, discredit, defeat, or even destroy the adversary. The capacity for empathy with the adversary is eroded.
There are also changes in the approach taken to group decision making: Positions become rigid, there is little room for compromise, and there is a dearth of imagination and creativity. Emphasis is placed on proving how tough and unyielding one is, so as to persuade the adversary that one cannot be pushed around" (p. 93).

Although the effects of conflict on individuals are relatively well-known, the effects of conflict on group processes and outcomes have received little research attention (Zander, 1994). At an individual level, conflict involves feelings of agitation, annoyance, and frustration; high levels of interpersonal hostility; and distorted cognitive processes (Tjosvold, 1985; Tillett, 1991). Further, research has shown that conflict leads to psychological withdrawal, low commitment to group goals and decisions, low satisfaction with group process, and reduced preference to work with the same group members in the future (Levine & Moreland, 1990; Bettenhausen, 1991).

While little research has addressed the issue, a number of researchers have speculated that conflict is harmful to a variety of group-level outcomes. In particular, it seems likely that intragroup conflict will have a negative impact on the ability of group members to optimally utilize information shared by members and thus available to the group. The previous review has described a number of mechanisms that may result in a failure to adequately use shared information as a result of diverted or wasted group resources. Furthermore, a primary dynamic underlying the sub-optimal utilization of group resources is a fear of losing "face." As members present their views and get rebuffed, they become angry, frustrated and impatient. As they do so, they are less likely to maintain an open mind towards the input of others and more likely to rigidly advocate their own viewpoint and corresponding plan. In such a case, the process by which groups "build" a constructive compromise alternative may be reduced to a subset of vocal or high-status group members and the information salient to them (Walsh et al., 1988). Thus, the construction of a final "group" plan may be largely a product of political factors and based on a reduced subset of the information offered by group members during group discussion. Decision quality will likely suffer as a result.

Summary: Process Loss in Group Decision Making

Schweiger et al. (1986) note that "the nature of interactions and the processes by which information is shared and evaluated appear to be critical factors in the effective strategic decisions by top management" (p. 52). An assumption shared by the work of Janis (1972; 1982) and Stasser and his colleagues (Stasser & Titus, 1985; Stasser & Titus, 1987; Stasser et al., 1989; Stasser & Stewart, 1992) is that greater information sharing by individuals with access to information the rest of the group does not have leads to better group decisions. The preceding discussion has identified two potential types of process loss involving information available to strategic decision making groups: (1) a failure on the part of individuals to share relevant information and (2) a failure to adequately integrate information which is shared.
With regard to inadequate information sharing, research on groupthink and information sampling theory has identified a variety of characteristics (e.g., time and conformity pressures, high levels of cohesion, external threat, highly distributed crucial information) that may impair the ability of groups to "draw out" information held by a subset of members so that it can be utilized at a group level by all members. Ideally, all information known to individual group members will be made available to all other members during discussion. Theory and research on groupthink and biased information sampling suggest that this rarely occurs.

A second type of process loss that may occur in strategic decision making groups is the failure to adequately utilize information which has been shared by individuals in forming a final group alternative. One of the factors that may lead groups to reduce their information processing and base their collective decisions around only a small subset of available information is the presence of conflict among members. Conflict is endemic to groups and results in negative affect, hostile behaviors and distorted cognition at the individual level. The effect is to reduce or divert group-level information processing resources from the task at hand. The work of Walsh and his colleagues (Walsh, 1986; Walsh & Fahey, 1986; Walsh & Henderson, 1988) suggests that, in such situations, groups will use the problem conceptualizations and supporting information of those group members who are perceived as most dominant, powerful or expert. This in turn may lead to a failure to integrate all available information in coming to a final group decision and, as a result, lower group performance.

In spite of the intuitive and analytical reasons for avoiding conflict in group decision making, a number of researchers have noted positive aspects of intragroup conflict, including the increased exchange of information among group members and expansion of the alternatives under consideration (Frost & Wilmot, 1978; Pruitt & Rubin, 1986; Patton et al., 1991). While recognizing these benefits, other writers have associated these benefits with different labels, including "cognitive conflict" (Priem & Price, 1991; Amason, 1995) and "controversy" (Tjosvold, 1985). The proliferation of terms has led to confusion in the literature. Indeed, a popular approach to increasing the effectiveness of group decision making involves the introduction of "conflict" into group discussion (Schweiger, 1990). Given the confusion already surrounding "conflict" and the theoretically meaningful distinction which can be made between positive and negative aspects of the term, this paper will differentiate "conflict" and "controversy." The next section reviews the literature on controversy and controversy-based interventions in group decision making processes.

Controversy in Strategic Decision Making

Mason (1969) noted that strategic decisions are often made by top management groups after turning a problem or issue over to a technical or functional "expert" for consultation. Subsequent recommendations made by such experts to management decision making groups are typically based on numerous simplifying assumptions that usually go unrecognized and, therefore, unchallenged. Given the wealth of data that apply and the validity of multiple perspectives in any ill-structured situation, Mason suggested that routine implementation of these unilateral recommendations generally leads to poor decisions.
In order to avoid the pitfalls associated with this approach, Mason (1969) and Mason and Mitroff (1981) have advocated the use of techniques that stimulate controversy among group members. Much of the research that has been done on the nature of controversy has been conducted by Dean Tjosvold and his colleagues. Tjosvold (1985) offers the following definition of controversy: "Controversy is a special kind of conflict and occurs when one person's ideas, opinions, conclusions, theories and information are incompatible with another's when they discuss problems and make decisions. . . . Controversy involves differences of opinion that at least temporarily prevent, delay, or interfere with reaching a decision. Persons in controversy have opposing views about how they should proceed, and face the pressure to resolve these differences in order to reach a decision and move forward" (p. 22).

At an operational level, controversy can be seen as the extent to which group members (1) express doubts about the efficacy of an existing plan or (2) identify multiple perspectives or plans for achieving a group's primary goal. Elaborating on this definition, there are at least three types of behaviors that indicate controversy among group members: (1) indirect challenge, (2) explicit disagreement, and (3) presentation of opposing viewpoints. Indirect challenge refers to instances where group members express only partial agreement with a position or recommendation, present contradictory information without open disagreement, or ask questions that indicate skepticism or doubt concerning a particular point of view (e.g., "Why?"). Explicit disagreement indicates a lack of support for a plan or position that has been taken, and implies that the speaker believes there is another, as-yet-unidentified way to do things. Finally, controversy is most directly evident when group members identify, discuss and debate alternative plans or viewpoints. It is useful to distinguish these dimensions of controversy in that they all reflect the same underlying phenomenon yet may not occur or manifest themselves with equal frequency.

At this point, it is useful to explicitly distinguish intragroup conflict and controversy. As noted previously, a defining characteristic of conflict is negative affect among group members that is interpersonal in nature. On the other hand, controversy involves disagreement among group members without the negative or interpersonal aspects. The bases of conflict are personal and emotional; the bases of controversy are ideological and cognitive in nature. Conflict may be caused by factors completely unrelated to the group's position or goal, for example past interactions between group members, whereas controversy is task-related and need not involve any conflict at all, such as when two friends question one another while "agreeing to disagree." As noted previously, the occurrence or exacerbation of existing conflict may cause group members to use information sub-optimally. However, controversy as defined by Tjosvold should lead to a broadening of the ideas and alternatives considered by the group, and may also cause group members to share information as they are called upon to explicate their different positions. As such, while conflict initiates mechanisms which may impair group functioning, controversy should positively impact group functioning by aiding in the identification of alternative plans and promoting information sharing among group members.
The assumption underlying controversy-based techniques is that group decision making performance can be improved by requiring group members to identify and critically examine the assumptions underlying their ideas, thus "rooting out" differences that might otherwise go unnoticed. Assumptions which survive such scrutiny are hypothesized to be more likely to be valid than those that do not (Mason, 1969; Janis, 1972; Mason & Mitroff, 1981). Controversy is thus intended to serve as a "perspective-broadening" tool helping to ameliorate the process losses associated with group decision making. Unfortunately, little research has addressed the effects of conflict in the type of setting most likely to occur in actual organizations -- situations in which small groups of functional "experts" share many of the same goals (e.g., financial profit) yet disagree on how to implement plans to achieve those goals (McGrath, 1984; Levine & Moreland, 1990).

Tjosvold (1985) notes that a number of factors influence the level of controversy in a group, including membership competition, forming subgroups, leadership style, openness norms and decision making rules. Specifically, Tjosvold states that "Forming subgroups that are assigned opposing positions on the issue in question is a direct way to structure controversy" (p. 34). A number of studies have now attempted to examine the efficacy of introducing controversy into the minds of decision makers in organizational settings. In particular, Dialectical Inquiry and Devil's Advocacy approaches assume that controversy will improve understanding of underlying issues and lead to the creation of effective strategies (Tjosvold, 1985). However, Tjosvold goes on to note that "these approaches have neither specified the interpersonal contexts and processes that facilitate the debate, nor incorporated research findings on the dynamics and outcomes of controversy" (p. 33).

Tjosvold (1985) claims that "Strategic decisions typically evoke controversy because they are complex and involve persons from different groups and departments within the organization who evaluate proposals from a variety of perspectives" (p. 33). Given that strategic decision settings provide fertile ground for controversy, it is important to determine the degree to which controversy affects group processes and/or outcomes of interest. Although not explicitly intended to increase information sharing among group members, it seems likely that the process of challenging and debating assumptions will also produce increased information sharing among group members. In essence, when group members are faced with the question, "Well, why do you think that?," they will, at some point, share the data on which their beliefs are founded. As a result, controversy should yield greater information sharing among group members as a by-product of the assumption-challenging procedure. At the same time, controversy also seems likely to stir intragroup conflict between members by calling for members to disagree and confront one another (Priem & Price, 1993; Zander, 1994). The next section reviews empirical research attempting to improve decision making through the use of techniques that stimulate controversy. In particular, this review focuses on two techniques known as Devil's Advocacy (DA) and Dialectical Inquiry (DI).
Controversy-Based Interventions in Individual Decision Making

A popular approach intended to improve group performance in ill-structured situations has focused on stimulating disagreement among group members (Schwenk, 1990; Taylor, 1992). Although a number of studies have examined the effects of introducing controversy into group decision making, most research has focused on the impact on group performance while ignoring the processes through which controversy has its effects. Several studies have measured the number of assumptions and recommendations identified by groups using controversy-based techniques in comparison to other approaches, but no study has attempted to measure the degree to which controversy stimulates group members to share information. The upcoming section provides an overview of research on the two controversy-based decision aids that have been extensively studied: Devil's Advocacy and Dialectical Inquiry. Empirical studies of DA and DI can be divided into three general categories: (1) early case studies centering on the effectiveness of DI in actual organizational decisions, (2) laboratory research focusing on DA and DI as methods for creating cognitive conflict within individuals, and (3) laboratory research examining DA and DI in the context of group decision making.

DA vs. DI. Two particular methods have been extensively studied as mechanisms for introducing controversy into group decision making -- Dialectical Inquiry (DI) and Devil's Advocacy (DA). Both techniques are predicated on the assumption that top management will benefit from considering multiple alternatives or options related to achieving some organizational goal. Although sharing this core assumption, DA and DI as examined in the literature differ in terms of how they bring alternative approaches into consideration. DA essentially requires that the assumptions and data underlying a proposed plan of action be identified and subjected to criticism. The plan is then revised on the basis of this criticism and presented again. Iterative cycles of revision and critique are conducted until all criticisms have been satisfied. On the other hand, DI requires that a "counterplan" be identified based on assumptions diametrically opposed to the original plan. The "plan" and the "counterplan" are presented to the group in the form of a structured debate, and a set of "surviving" assumptions is identified and used as the basis for synthesizing the two plans. As a result, criticism has been seen as the crucial element defining a DA approach, while construction of a second "counterplan" has been viewed as the defining essence of DI (Mason, 1969; Cosier, 1978).

Mason (1969) and Mitroff and Mason (1981) argued that both DA (critique) and DI (diametric counterplan) should improve group decision making over the presentation of one plan by experts from a single functional area. Further, Mason suggested that DI should yield better group outcomes than DA because it provides a credible alternative to the existing plan and allows for a synthesis of the best ideas from both plans. According to Mason (1969), the DI structure allows the thesis (first plan) and antithesis (second plan) to be creatively merged into a constructive synthesis by top management (Mason, 1969; Mason & Mitroff, 1981). On the basis of early research using a case study approach (see below), Mason identified DI as the technique of choice for use in improving decision making in organizations.
Although this paper is concerned with groups and the processes underlying effective group decision making, most of the early research comparing DA and DI involved individual decision makers instead of groups. In the typical individual-level study, individual decision makers were exposed to a short "plan," often presented in writing, and then either exposed to counter-arguments or a second "counterplan." The intention of such a manipulation was to induce controversy within the decision maker and so stimulate the consideration of alternatives (and possibly allow for a creative synthesis). Unfortunately, empirical research using DA and DI with individual decision makers is severely restricted in terms of its applicability to group decision settings. However, the majority of research on DA and DI has taken this approach, and conclusions have been made concerning the relative merits of these two techniques with little regard to the distinction between individual and group decision making. The result is a great deal of confusion in the literature (Schweiger, Sandberg & Rechner, 1989; Schwenk, 1990). Therefore, all of these studies are reviewed here as a means of identifying and potentially clarifying the confusion in the literature.

Early field studies of DI. Early studies of conflict-based decision aids focused on examining the effectiveness of DI in actual organizational decision making. In support of his recommendation for the use of DI, Mason (1969) presented a case study of decision making at RMK Abrasives, a real organization. Mason obtained a strategic planning document from the company's planning department, identified the assumptions underlying the plan and then created a second plan based on assumptions counter to those in the first plan. Company managers reported having favorable attitudes towards DI and indicated that they felt the plan generated using DI was superior to the one they would have generated based on the recommendations in the first plan only.

In a similar study, Laurenco and Glidewell (1975) used DI in the resolution of a conflict between a television station and its corporate headquarters over the degree of control to be exercised by corporate headquarters. According to the authors, the issue was resolved with a constructive compromise that was mutually satisfactory. Mitroff, Barabba and Kilmann (1977) examined the use of DI in an actual organization, the Bureau of the Census in Washington, DC. Forty-five employees were clustered into five homogeneous groups and directed to produce planning reports suggesting new directions for the Bureau. After the five groups produced their different plans, one representative from each group was included in an executive group that produced a final integrative report. Mitroff et al. note that the final report was characterized by several of its members as being both "exciting" and "innovative." Emshoff and Finnel (1978) studied a form of DI known as "strategic assumptions analysis" at a firm they called "Basic Materials." A planning group at Basic Materials utilized strategic assumptions analysis to revise an existing strategic plan and, according to Emshoff and Finnel, the resulting plan included a more thorough analysis of the data and a revised strategy superior to the old one. Finally, Mitroff, Emshoff and Kilmann (1979) used a modified version of DI (strategic assumptions analysis) with three groups of managers attempting to decide on a pricing decision in a drug company.
Strategic assumptions analysis was utilized to examine competing assumptions, identify an expanded set of alternatives, and arrive at a pricing policy that was characterized by the authors as better than that which would have been adopted without the use of strategic assumptions analysis (DI).

Although these studies provided some support for the claim that structured conflict in the form of DI should be used in actual organizations, a number of reviewers noted the need to delay that conclusion until DI could be examined under more controlled conditions and contrasted with alternative approaches to decision making (Cosier, 1978; Schwenk, 1980; Schwenk, 1982). Shortly thereafter, research on conflict-based interventions turned to the laboratory, utilizing a technique known as "Multiple Cue Probability Learning" (MCPL).

Laboratory research using MCPL tasks. Laboratory research on structured conflict has been greatly influenced by a watershed study conducted by Cosier (1978). Cosier established a paradigm for studying the relative effectiveness of DA and DI using individual decision makers and a relatively simple multiple-cue probability learning (MCPL) task. Over the next five years, numerous studies of DA and DI were conducted with individual decision makers and Cosier's (1978) MCPL task. Conclusions drawn on the basis of these studies have been very influential (e.g., Schwenk, 1990). These studies are reviewed here because of their prominence in the literature and corresponding contribution to the confusion surrounding the merits of DA and DI in group-level settings.

In that first study, Cosier (1978) had individuals predict a price-to-earnings (P/E) ratio for a hypothetical firm using three cues and operating in three different decision contexts ("world states"). Three statistically independent cues were used to predict the P/E ratio: the firm's current ratio (X1), inventory turnover (X2), and debt-to-equity ratio (X3). Prediction was studied in the context of three different world states with different profiles of cue-criterion relationships. In State 1, the cues were correlated with Y in the following manner: rX1Y = .80, rX2Y = .50 and rX3Y = .20. In State 2, all three cues correlated .50 with the criterion. State 3 was the opposite of State 1, with rX1Y = .20, rX2Y = .50 and rX3Y = .80. Each participant made 20 predictions for each of three different profit centers of a larger organization, receiving feedback after each decision. Each profit center served as a different "world state."

Cosier (1978) operationalized the single-plan "expert" (E) approach by giving participants the written recommendations of an imaginary "expert" who advocated paying most attention to X1, moderate attention to X2 and only slight attention to X3. The DA criticism stated that there was reason to believe the first expert's assumptions were dubious and the recommended weighting inappropriate. In the DI condition, the second expert also noted that there was reason to believe the first expert's recommended weightings were not correct and suggested a weighting scheme diametrically opposite that of the first expert (i.e., pay most attention to X3, moderate attention to X2 and little to X1). Cosier (1978) examined the between-subjects effect of inquiry method (DA, DI, and E) and the within-subject effect of decision context (world states 1-3) on the accuracy of predicting the P/E ratio. He found no main effect for inquiry method or decision context but did find that the two interacted.
In State 1, E participants had significantly less judgmental error than either DA or DI participants, but this situation was reversed in State 3, where DA participants were significantly better at predicting than either the E or DI participants. No differences were observed between methods in State 2. There is nothing particularly remarkable about the interpretation of this interaction. In essence, when the first expert's recommendations were right and participants only received this expert's view, they tended to do better. When the true state of the world was different from the recommendations of the first expert, participants did better when they were told the first expert was wrong (DA) or given the correct set of weights by the second expert (DI). The superiority of DA in yielding higher quality decisions in State 3 (when the first expert was wrong) was viewed as providing support for the superiority of DA over DI.

A number of other studies (e.g., Cosier, 1980; Schwenk & Cosier, 1980; Schwenk, 1982; Schwenk, 1984a; Schweiger & Finger, 1984) have used the MCPL approach to study the E, DA and DI operationalizations developed by Cosier (1978). Cosier (1980) included the effects of two individual difference variables, self-determined goal difficulty and goal relevance. Schwenk and Cosier (1980) added an additional DA condition in which the critique was framed as "emotional" and "carping." Schwenk (1982) looked at a combined DADI condition in which DI participants received a critique of the first expert's plan in addition to the counterplan, and also examined the role of ambiguity tolerance as a potential moderator variable of the inquiry method-performance relationship. Finally, Schwenk (1984a) examined still another variation on the DI treatment, DI+, in which participants received a short explanatory statement intended to reduce confusion over the receipt of conflicting plans. He also examined another potential individual difference moderator, task involvement.

The inquiry method by decision context interaction found by Cosier (1978) was found in each of these later studies in similar form, except that later studies did not consistently find E participants to be superior in State 1 (when the first expert was correct) and sometimes found that DI resulted in better outcomes in State 3 (when the first expert was wrong). No study found a main effect for inquiry method or decision context. Cosier (1980) found DI participants to be significantly more accurate in State 3 (counterplan correct) compared to their performance in States 1 and 2. Schwenk and Cosier (1980) found that E participants performed significantly better than both DI and DA individuals in State 1, while both DA and DI participants outperformed E in State 3. Schwenk (1982) found that, in State 3, all structured conflict methods (simple DA, simple DI and DADI) resulted in significantly better judgmental accuracy than E. Also, he found that high ambiguity-tolerance individuals in the simple DA and DADI conditions performed significantly better than their high ambiguity-tolerance counterparts who received simple DI or E. Similarly, Schwenk (1984a) found that all of the various conflict-based techniques (DA, DI and DI+) outperformed E in State 3 but did not differ among themselves. In addition, highly involved DI and DI+ participants exhibited significantly better judgmental accuracy than highly involved E and DA participants across all states.
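The logic of this MCPL paradigm can be made concrete with a small simulation sketch. Only the cue-criterion correlations for the three world states (.80/.50/.20, .50/.50/.50, .20/.50/.80) are taken from the description of Cosier (1978) above; the linear generating model, the noise level, and the three judgment policies labeled E, DA and DI are illustrative assumptions made for the example, not materials or analyses from the original study.

```python
import numpy as np

# Illustrative simulation of a multiple-cue probability learning (MCPL) task
# patterned on the world-state structure described above. Everything except
# the cue-criterion correlations is an assumption made for this sketch.

rng = np.random.default_rng(42)

WORLD_STATES = {            # cue validities (rX1Y, rX2Y, rX3Y) per world state
    "State 1": (0.80, 0.50, 0.20),
    "State 2": (0.50, 0.50, 0.50),
    "State 3": (0.20, 0.50, 0.80),
}

POLICIES = {                # hypothetical linear judgment policies
    "E":  (0.80, 0.50, 0.20),   # weight cues as the first expert recommends
    "DI": (0.20, 0.50, 0.80),   # weight cues as the counterplan recommends
    "DA": (0.50, 0.50, 0.50),   # discount the expert and weight cues evenly
}

def mean_abs_error(validities, policy, n_trials=20, n_reps=500):
    """Mean absolute judgment error, averaged over n_reps simulated participants."""
    v, w = np.array(validities), np.array(policy)
    noise_sd = np.sqrt(1.0 - np.sum(v ** 2))        # keeps Var(criterion) near 1
    errs = []
    for _ in range(n_reps):
        cues = rng.normal(size=(n_trials, 3))       # independent standardized cues
        criterion = cues @ v + rng.normal(scale=noise_sd, size=n_trials)
        judgment = cues @ w                         # participant's linear judgment
        errs.append(np.mean(np.abs(criterion - judgment)))
    return round(float(np.mean(errs)), 3)

for state, validities in WORLD_STATES.items():
    print(state, {m: mean_abs_error(validities, p) for m, p in POLICIES.items()})
```

Averaged over many simulated participants, the assumed "E" policy yields the smallest errors in State 1 and the assumed "DI" policy the smallest in State 3, which parallels the inquiry method by decision context interaction reported above; any single run of 20 trials is, of course, much noisier.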
Studies of DA/DI using non-MCPL tasks. In addition to studies using the MCPL paradigm to assess the relative merits of DA, DI and E, several studies have attempted to assess the relative effectiveness of DA and DI on individual decisions using tasks more complex than the relatively well-structured Cosier (1978) MCPL task. In some cases, these studies have also used "real-world" decision makers instead of students, further increasing their generalizability.

Cosier, Ruble and Aplin (1978) attempted to assess the relative efficacy of DA and DI using an eight-period business simulation configured to operate under three different world states similar to the MCPL decision contexts. Four inquiry conditions were examined: DA, DI, E and a control condition. In State 1, the DA treatment led to significantly better prediction performance than the DI and control treatments, while in State 2, the control treatment resulted in the most effective performance, yielding significantly better prediction than the DA, DI and E conditions.

Cosier and Aplin (1980) gave 32 United Way planners a case study involving an actual United Way agency and asked them to prepare a set of recommendations. Participants in the control condition got no added information, while participants in the E condition received a plan generated by the authors of the study. In addition to this plan, DA participants received a critique of the plan and DI participants received a counterplan based on differing assumptions. Plans were evaluated by three judges along six dimensions: internal consistency, consistency with the environment, available resources, satisfactory risk, time horizon and workability. An overall evaluation was also given. DA was judged superior to DI on only one criterion, satisfactory risk.

Schwenk and Thomas (1983) used a business scenario in conjunction with actual managers in examining the relative effectiveness of three inquiry methods (DI, E and a control condition). The task was the Sweetsa case, requiring participants to specify actions with regard to harvesting operations in a plantation system. E was operationalized by giving participants an analysis of the case and a corresponding set of recommendations generated by an "expert consultant" that would have resulted in a gain of $26,000 had it been routinely implemented. DI was operationalized by providing participants with the analysis and recommendations of the first expert, plus an additional analysis and set of recommendations provided by a second expert whose recommendations would have resulted in a gain of $4,000 more than the first expert's plan. Those in the control condition received no additional information. Results indicated that participants in the DI condition generated significantly better solutions than those in the E condition, with the average DI solution yielding roughly $32,000 more than the average E solution. In addition, four of the nine DI individuals identified the optimal solution, while none of the eight E participants were able to do so. The majority of participants in the control condition recommended solutions that were deemed "infeasible" due to one or more infractions of the task "rules."

Schwenk (1984b) also examined the relative effects of DA, DI, E and a control condition with undergraduate business students acting as individuals and a business scenario involving the operations of a fictitious soft drink company.
Information provided in the case centered around the feasibility of two potential strategies (acquisition of a winery, development of a new soft drink). Control participants received no added information other than that contained in the case study, while E participants received a "planning committee report" recommending acquisition of the winery. DA participants received the planning committee report plus the critique of a second planning committee which questioned the analysis and recommendations of the first committee report. Finally, the DI participants received the first planning committee report as well as a second planning committee report offering a "different recommendation" (p. 266). In addition to these four inquiry methods, Schwenk examined the effect of presentation format for the inquiry method instructions (i.e., in writing or by videotape). Results indicated an inquiry method by presentation medium interaction, with DA participants generating significantly more strategic alternatives than E or DI participants when instructions were presented in writing, while control participants generated significantly more functional area alternatives than either of the two conflict-based approaches. No differences were observed across inquiry methods in the videotape condition. Schwenk also reported a significant chi-square showing that participants' final recommendations depended on the inquiry method they received. However, since the task provided no qualitative means of distinguishing the various recommendation alternatives, this is difficult to interpret. Finally, Schwenk reported that participants in both the DA and DI conditions indicated greater satisfaction than did participants in the E condition.

Cosier and Rechner (1985) extended previous research on DA and DI by using a complex business simulation (SIMQ) and samples of both undergraduates and real-world managers. They also examined the effects of experience with inquiry methods over time by having participants make decisions for four decision periods with the same inquiry method. Four inquiry conditions (DA, DI, E and a control) were examined in conjunction with several method variables. As in previous research, control subjects received no additional information while E participants received a comprehensive planning report plus recommendations for plant operation. DA participants received a planning report plus a critique of the report that essentially questioned every recommendation made in the initial report, while DI participants received the first planning report plus a second planning report recommending a course of action diametrically opposed to the recommendations of the first report. Two planning reports were generated, with half of the subjects getting Report 1 (R1) as their "expert" report and half receiving Report 2 (R2). For DI participants, one of the plans was received from the first expert and the other from the second expert. The effects of inquiry method were examined over four operating periods. The second study utilized exactly the same design except with a sample of actual managers instead of students.

Results of the first study indicated an interaction between inquiry method and the planning report received from the first expert (R1 vs. R2). Analysis of the interaction indicated that DA participants earned significantly more revenue in the simulation than the DI participants when R2 was presented first.
Thus, in the first study with undergraduates, the benefits of DA were contingent on the characteristics of the first plan presented. A supplementary data analysis indicated that DA may have tempered the willingness of participants to attempt the more difficult and complex recommendations of R2 to a greater extent than DI. On the other hand, the second study, utilizing 30 actual managers, found no differences across inquiry conditions. The only significant finding was that, unlike the students, managers improved their performance over the first three operating periods and, in general, earned more revenue than the students.

Summary of individual-level research. The studies reviewed above all focused on stimulating conflict within an individual using either MCPL or more ill-structured tasks. In general, findings have been mixed and, as a result, conclusions about the relative superiority of the methods have been heavily qualified. Studies using the Cosier (1978) MCPL paradigm to examine the effectiveness of DA and DI inquiry methods tended to replicate the earlier study with minor extensions, usually adding an additional inquiry method condition or including some individual difference variable hypothesized to mediate or moderate the inquiry method-performance relationship. For the most part, studies have found that in State 3, where the first expert is wrong and the critic is correct, DA and DI tend to result in fewer judgmental errors than E (where errors are defined as the deviation of judgments from "actual" values). In a corresponding fashion, in State 1, where the expert is right and the critic is wrong, E participants tended to do better. Schweiger and Finger (1984) reviewed the results of seven studies comparing DI and DA in controlled laboratory settings and concluded that no method of inquiry had been shown to be more effective than any other. Across all studies, the vast majority of mean comparisons were nonsignificant, and those that were significant did not show a consistent pattern. They concluded that there was no firm support for the relative superiority of either DA or DI. Individual-level studies using more ill-structured tasks have produced similarly mixed results, with DA superior to DI in some studies, DI superior to DA in others, and neither superior to the other in most cases.

Criticisms of individual-level research. Laboratory studies examining the effectiveness of structured conflict methods in conjunction with individual decision makers have been subjected to three major criticisms: (1) the tasks used have not been sufficiently complex to generalize to real-world settings, (2) operationalizations of structured conflict (i.e., DA and DI) have not generated conflict within individuals, and (3) structured conflict methods are not intended for use with individuals and should be studied in group-level settings. The first criticism has been readily acknowledged (e.g., Schwenk & Thomas, 1983; Cosier & Rechner, 1985), but the latter two objections have stirred some measure of debate in the literature.

With regard to capturing the essence of conflict with past operationalizations, Schweiger and Finger (1984) suggested that, instead of creating a real sense of "conflict," past MCPL operationalizations of DA and DI probably just created confusion in the minds of participants.
They suggested that, given written recommendations from imaginary "experts" who neither provided reasons for their recommendations nor were present to address questions, participants might have simply attended to one expert and discounted the input of the other. They hypothesized that the lack of consistency in earlier findings using the MCPL task may have been due to primacy or recency effects resulting from participant confusion. As a means of testing this, Schweiger and Finger (1984) replicated the classic Cosier (1978) study while creating two conditions for DA and DI in which the content and order of expert/critic presentation were reversed. Consistent with their predictions, an order effect was found, suggesting that participants attended to one expert (the first) while ignoring the advice of the other.

However, the most damaging criticism leveled against MCPL studies of DA and DI is the claim that the entire approach fails to capture the conditions for which DA and DI were designed: group settings (Mitroff & Mason, 1981; Mitroff, 1982; Schweiger & Finger, 1984; Schweiger et al., 1986). Mitroff and his colleagues (Mitroff & Mason, 1981; Mitroff & Emshoff, 1979; Mitroff, 1982) have been especially vocal in pointing out that DA and DI are intended as decision aids for use in social situations. They claim that through intense and sometimes heated discussion and debate, managers learn from the critical re-appraisal of strongly held beliefs and assumptions. According to this position, the well-structured MCPL task is not an appropriate task for studying group-level decision making, and comparisons between methods are thus meaningless.

It is important to note that this last criticism is conceptual in nature and cannot be addressed empirically. The value of much of the published literature on structured conflict is at stake. Clearly, there is no "right" answer to this question, but the growing consensus is that, for the most part, the criticism is valid (Schweiger et al., 1989; Schwenk & Cosier, 1993). Rather than completely discounting the numerous studies which have adopted the individual-level approach, it seems more reasonable to ask of each study, "How confident can one be that the results will generalize to real-life strategic decision making settings?" At least four factors may affect this: (1) the adequacy of the construct operationalization in terms of stimulating "conflict" within decision makers, (2) the structure and complexity of the task, (3) the sample employed in the study (managers vs. students), and (4) the level of analysis (group vs. individual). Table 1 describes the studies of structured conflict that have been done to date, and provides summary information on level of analysis, task type, sample, and results obtained.

By their very nature, studies of controversy-inducing techniques such as DA and DI involving individuals cannot address the effects of such methods on group process or group outcomes. In particular, it is not possible to examine the effects of introducing controversy on intragroup conflict or information sharing. At best, these individual-level studies suggest that stimulating controversy within individuals may result in the consideration of different information and/or alternatives. The few studies that have examined DA and DI in group settings clearly provide the strongest base for making statements about the relative effectiveness of conflict-based decision aids in real-life strategic decision making.
To this point, four studies have heeded their call, and we now turn to these studies.

Table 1. Empirical Studies of Controversy-Based Decision Aids.

Author (Year)                  Level  Task        Sample               Results
Cosier (1978)                  Ind.   MCPL        Students             DA > DI (S3)
Cosier (1980)                  Ind.   MCPL        Students             DI (S3) > DI (S1 & S2)
Schwenk & Cosier (1980)        Ind.   MCPL        Students             DA = DI
Schwenk (1982)                 Ind.   MCPL        Students             DI = DA = DADI > E (S3)
Schwenk (1984a)                Ind.   MCPL        Students             DI = DA = DI+ > E (S3)
Schweiger & Finger (1984)      Ind.   MCPL        Students
Cosier, Ruble & Aplin (1978)   Ind.   Case Study  Managers             DA > DI (S1)
Cosier & Aplin (1980)          Ind.   Case Study  Students             DA = DI
Schwenk & Thomas (1983)        Ind.   Case Study  Managers             DI > Control
Schwenk (1984b)                Ind.   Case Study  Students             DA > DI (written instructions only)
Cosier & Rechner (1985)        Ind.   Bus. Game   Students, Managers   DA > DI (students receiving R2 first only)
Chanin & Shapiro (1984)        Grp.   Bus. Game   Students             DI = DA > CS (perf.)
Schweiger et al. (1986)        Grp.   Case Study  MBAs                 DI = DA > CS (perf.); DI = DA < CS (affect)
Schweiger et al. (1989)        Grp.   Case Study  Managers             DI = DA > CS (perf.); DI = DA < CS (affect)
Schwenk & Cosier (1993)        Grp.   Case Study  Students             DA > CS (perf.)

Controversy-Based Interventions in Group Decision Making

Chanin and Shapiro (1984) conducted the first published study of structured conflict in group decision making using a moderately complex business simulation, the Executive Game. They used four-person groups along with specialized roles for each group member corresponding to the functional diversity in strategic decision making settings (one president and three vice presidents). Each of the vice presidents represented a department within the organization, reflecting to some degree the diverse backgrounds of individuals typically involved in strategic decision making. Three inquiry methods (DA, DI and a control condition) were examined for their effect on several quantitative performance variables generated by the simulation (i.e., industry ranking, return on investment, net profit). DA and DI were operationalized by having each vice president individually prepare an operational plan from the database of performance information generated by the simulation. In the DI condition, each plan was then presented to the entire group along with corresponding assumptions and supporting data. After all three plans had been presented, groups were instructed to conduct a general discussion of the pros, cons and underlying assumptions of all three plans. Following this, groups were told to agree on a final set of assumptions and jointly develop an integrated strategic plan. The manner in which DA was operationalized is less clear. The authors confined their description of DA to noting that it involved the following four-step process: (1) development of strategic and operational plans (forecasts); (2) plan presentation at the management briefing session; (3) management critique of the plan; and (4) development of a final plan (pp. 165-166). Control groups were allowed to operate freely and make decisions using a structure of their own choice.

Chanin and Shapiro (1984) trichotomized the 51 teams in their study into high, medium and low categories on two performance variables, industry ranking and return on investment, for each of the three simulation years. For each year and index, there were more DI groups in the high performance category than either DA or control groups.
DA groups were noted as being most prominent in the medium performance category, while control groups appeared in the low performance category more often than either DI or DA groups. Noting that the previous categorization scheme could not be tested for significance, t-tests were also conducted to compare the three inquiry methods on a number of performance variables across the three decision periods in the game. With respect to the 15 t-tests possible comparing each method to another over the three decision periods, DI was significantly better than the control condition for nine comparisons (60%), while DA was better than the control group for only two variables (13%). DI was superior to DA on two comparisons as well (13%), both in the third and final year of the simulation. The control condition did not produce higher outcomes than DI or DA at any time. However, given the likely interdependence of the 15 measures, it is difficult to interpret these percentages in a straightforward fashion. On the basis of these findings, Chanin and Shapiro concluded support for a "very strong and statistically significant difference" between DI and the control condition. Furthermore, they noted that the study permitted classification of DI as a high, DA as a moderate, and the control condition as a low performance problem-solving technology. These results suggest that DI does produce superior group outcomes compared to DA, but given the methodological issues noted above, this conclusion must be qualified.

In the second study of structured conflict in a group setting, Schweiger et al. (1986) compared the two conflict-based methods (DA and DI) against a consensus-seeking (CS) approach. Using a case analysis scenario (the Leitch Quality Drug Company), Schweiger et al. operationalized DA and DI by splitting four-person groups into two-person subgroups, with one subgroup in both DI and DA assigned the responsibility of developing and presenting recommendations to the other subgroup along with all supporting assumptions and data. In the DI condition, after receiving the assumptions of the first subgroup, the second subgroup then formulated a counterplan based on assumptions that negated those of the first subgroup. Following this, both subgroups presented their assumptions, recommendations and supporting data to the other subgroup in a structured debate setting. In the DA condition, the second subgroup was instructed to prepare a critique of the plan recommended by the first subgroup. After receiving the critique from the second subgroup, the first subgroup modified their assumptions and recommendations and presented the revised plan to the critiquing subgroup. This cycle was repeated until the plan was approved by the second subgroup. The consensus-seeking (CS) approach was implemented by giving groups a variety of instructions generally attempting to get members to be skeptical, noncompetitive, reasonable and flexible.

Schweiger et al. (1986) measured a number of process and outcome variables, including the number, validity and importance of identified assumptions as well as the quality of final recommendations. These variables were measured by having two judges independently code transcripts generated from audiotape recordings of each group as they solved the case. They also measured several affective variables such as satisfaction, desire to work with the group again in the future, critical reevaluation of assumptions and acceptance of the group's decision. Schweiger et al.
(1986) found that although DA and DI groups did not identify more assumptions than CS groups, both methods led to the identification of assumptions characterized as more valid and more important than those of the consensus groups. DA and DI groups also generated higher quality recommendations than CS groups and produced more critical reevaluation of assumptions at the individual level. With respect to comparisons between DI and DA, DI groups were superior to DA on only one process variable (validity of assumptions), and there was no significant difference between the two methods with regard to the quality of recommendations. On the other hand, groups in the consensus condition expressed more satisfaction with the task, greater willingness to work together in the future and greater acceptance of the group decision than did groups in the DI and DA conditions.

Schweiger et al. (1989) followed up their earlier study by examining DA, DI and CS in conjunction with groups of real-life managers and multiple decision situations. In addition to the Leitch Quality Drug Company case, Schweiger et al. (1989) added a second case analysis, the Hudepohl Brewing Company case. Managers were randomly assigned to four-person groups in one of three inquiry method conditions (DA, DI or CS) and used the same decision making approach for their second decision as well. DI, DA and CS were implemented in the same fashion as in Schweiger et al. (1986), and the same process and outcome variables were measured along with the time taken to complete each decision task. Each session was again tape recorded.

The results of Schweiger et al. (1989) were consistent with Schweiger et al. (1986) and provided further support for conclusions drawn from the earlier study. As before, the number of assumptions identified by groups was not related to inquiry method, but the validity and importance of identified assumptions were higher for both DA and DI groups compared to the CS groups. Also, both DA and DI groups again produced superior recommendations and greater individual critical reevaluation of assumptions than CS groups. Of particular significance, there were no differences between DA and DI for any measured variables, including validity of assumptions. With respect to meeting time, CS groups took significantly less time than DA and DI groups in the first decision task, but not in the second. All groups produced higher quality assumptions and recommendations and took less time on their second decision task.

The pattern of results with respect to the affective variables was similar to Schweiger et al. (1986) for the first decision, but not for the second task. In general, CS groups reported superior affective responses for the first decision, but not the second. There was no effect of inquiry method on satisfaction, but all individuals reported higher satisfaction with their groups in the second task. In the first decision, CS group members reported significantly greater acceptance of the group decision than did individuals in the DA and DI conditions, but this advantage narrowed and became non-significant in the second decision.

Schwenk and Cosier (1993) appear to have been persuaded by the arguments of Schweiger and his colleagues that DA and DI were meant to be employed as aids to group decision making.
In the first study by either author involving groups, Schwenk and Cosier (1993) compared the relative effectiveness of DA and CS methods while measuring the degree of agreement within each group with respect to the group's objectives. The task was the same case study as that employed by Schwenk (1984b), involving a soft drink company and centered on the major decision of whether to acquire a winery, develop a new soft drink, do both or do neither. Similar to the Schweiger et al. (1986; 1989) studies, Schwenk and Cosier (1993) measured several process and outcome variables, including the number and quality of assumptions identified, the quality of recommendations, and the degree of critical self-evaluation, as well as several affective variables (commitment to decision, desire to work together again in the future). Although the analyses indicated some effect of within-group agreement on objectives, no significant difference due to inquiry method was found for any of the performance variables (i.e., number of assumptions, quality of assumptions or quality of recommendations). However, as in Schweiger et al. (1986; 1989), participants in the CS condition expressed more satisfaction and desire to work again as a group than did DA participants.

Effects of controversy-based decision aids. A great deal of time and effort in the last 25 years has gone into talking about, arguing over and (to some extent) studying the degree to which controversy-based decision aids are useful tools for increasing the likelihood that organizations will make good strategic decisions (Schwenk, 1990). For the most part, the literature on DA and DI has focused on answering the question of the relative efficacy of the two techniques. In reviewing the literature, Schweiger et al. (1989) noted that the answer to this question has been clouded by variation in tasks, experimental samples and, most importantly, the level of analysis used across studies. Early studies conducted in the field using actual managers in real organizations suggested that DI was a useful tool for exposing top management to alternative approaches (Mitroff & Mason, 1981; Mitroff, 1982). Laboratory research on DA/DI using individuals resulted in mixed findings. An early review of laboratory comparisons of DA and DI using the MCPL task noted no clear superiority for either DA or DI (Schweiger & Finger, 1984). Finally, three recent studies conducted with groups have all found that neither DA nor DI is better than the other with respect to group performance or individual affective outcomes.

Schwenk (1990) conducted a meta-analysis of the effects of DA, DI and E on group performance using 16 published studies and 17 effect sizes. On the whole, DA was found to be superior to the E approach while DI was not. Further, DA was not found to be reliably more effective than DI. According to Schwenk, the meta-analysis "supports one clear conclusion which has been disputed in past literature reviews: The DA improves decision making over an expert-based approach" (pp. 170-171). However, DA was not found to be better than DI.

Given the growing consensus that strategic decision making usually occurs in group settings, it is questionable whether the Schwenk (1990) meta-analysis is capable of addressing the issue of relative efficacy between DA and DI. Only three of the 16 studies in the Schwenk (1990) meta-analysis employed groups as their decision making unit. This calls into question the relevance of the findings to group-level decision making.
Looking at both individual and group-level studies, Schwenk (1990) addressed this issue by identifying three potential moderators of the effects of DA, DI and E on decision making performance -- type of subject (students vs. managers), decision making level (individuals vs. groups), and type of task (MCPL vs. non-MCPL). A sub-group analysis was performed for the task-type moderator, but analysis of the other moderators was not conducted due to the small number of studies utilizing practicing managers and groups.

In spite of the few studies examining DA and DI in the context of group decision making, these studies enjoy two strong advantages over the remainder of the literature on controversy-based interventions: (1) they are the only studies conducted at the appropriate level of analysis and (2) their findings are very consistent. Three different studies of DA and DI using groups have found neither method to be superior to the other (Chanin & Shapiro, 1984; Schweiger et al., 1986; Schweiger et al., 1989). At the same time, all three studies found both DA and DI to be superior to instructions to simply reach consensus. A fourth study by Schwenk and Cosier (1993), which only examined DA and CS, also found DA to be better than CS. Further, the two controversy-based methods have consistently produced lower member satisfaction, acceptance of the decision and desire to work with the group again in the future than the CS approach, with neither DA nor DI superior to the other.

Given the striking consistency of the group-level findings, it appears that it is now time to conclude that DA and DI as traditionally implemented work equally well. The studies available suggest that it does not make much difference whether DA or DI is used -- either method is likely to yield higher group performance outcomes but lower group affective outcomes than a corresponding emphasis on seeking consensus. At the same time, we know very little about why these techniques work. Little attention has been given to the processes through which DA and DI have their effects. Although in some sense it is enough to know simply that they do improve group decision making outcomes, both DA and DI introduce complex dynamics to group member interaction that need to be isolated and understood. It may be the case that not all aspects of controversy promote effective group decision making. In particular, the evidence reviewed thus far suggests that controversy impacts decision quality both positively and negatively.

Summary: The Dilemma Surrounding Controversy

The relationships between information sharing, intragroup conflict, and group decision making quality imply a dilemma for those considering the use of controversy-based intervention techniques in decision making groups. In group contexts with diverse member composition and knowledge, it is necessary for individual experts to share relevant information to which they alone have access in order for there to be any chance of an optimal group decision. At the same time, interventions centered around stimulating controversy among group members may very well lead to intragroup conflict within the group. This intragroup conflict in turn may reduce the group's ability to integrate all the information shared by individual members as the group goes about deciding on a collective decision or plan.
The few studies which have examined controversy-based techniques in group decision making settings have found them to yield more effective decisions than instructions to reach consensus; however, this may not always be the case. Ifthe level of conflict becomes high enough, the use of controversy-based techniques may actually lead to lower quality decision outcomes. Given the many factors contributing to low information sharing in groups, prescriptive interventions in group-level decision making must provide some mechanism which allows group members to overcome the potential hazards posed by groupthink and biased information sampling. The ideal intervention in such circumstances is one that 5 1 maximizes information sharing among individuals and minimizes the amount of intragroup conflict generated in the process. Furthermore, this intervention may already exist. The Forgotten Role of the Synthesis In an influential book in the early 19805, Mason and Mitroff (1981) identified three firndamental components of the dialectical inquiry: (1) a thesis, or favored plan, (2) an antithesis, or counterplan, constructed from values and assumptions contradictory to those underlying the thesis and (3) a synthesis, or new world view that incorporates the best features of both plan and counterplan yet somehow manages to reflect a “worldview” different from that which served as the basis for either plan or counterplan. The literature has largely ignored the role of synthesis in dialectical inquiry. In two studies, Schweiger and his colleagues (Schweiger et al., 1986; Schweiger et al., 1989) implemented DA and DI by splitting decision making groups into two two-person subgroups. Schwenk and Cosier (1993) implemented DA in the same fashion. Chanin and Shapiro (1984) implemented DI in four-person groups by having three members (departmental vice presidents) prepare, present and debate individual plans while one member of the group merely participated in the ensuing discussion. To our knowledge, no study has explicitly assigned the synthesis role to an independent bloc of group members in the fashion intended by the original descriptions of the dialectical process. Why is synthesis crucial to the efl'ectiveness of the dialectical inquiry process? Mason and Mitrofl‘ (1981) suggest that the synthesis embodies a new and “higher” understanding of the problems or issues faced by the group. The synthesis provides a deeper and insightful way of thinking about the matter. They explicitly highlight the transformational character of the synthesis; it incorporates a new worldview difl‘erent from those that inspired the plan and counterplan. They suggest that the importance of the synthesis lies in its ability to provide a new perspective that, while borrowing from 52 and similar to both thesis and antithesis, is at the same time fundamentally novel and distinct. It is somewhat surprising then to note that empirical studies of D1 have largely ignored the role of synthesis in the dialectical process. As typically implemented, DI instructions assign halfof the group to the formation of a plan, the other half of the group to the formation of the counterplan, and no one to the role of overseeing a synthesis. In fact, no study was found that explicitly assigned the synthesis role to one or more group members. In most cases, group members were collectively asked to develop a synthesis at the conclusion of the structured debate over plan and counterplan. 
To the extent that intragroup conflict results fi'om the controversy engendered during the debate, it seems overly optimistic to expect group members who have been actively arguing for the thesis or antithesis to put aside their feelings, opinions and views and creatively forge a new “worldview.” One of the goals of this study is to provide an examination of dialectical inquiry the way it was intended to be implemented: with thesis, antithesis and synthesis roles explicitly assigned to independent sub-groups. At the same time, neither Mason (1969) nor Mason and Mitroff (1981) has clearly specified the individual behaviors and underlying group-level processes that promote the formation of a new “worldview.” In the context of the complex and ill-structured decisions characteristic of strategic decision making, the synthesis role would seem to involve a set of behaviors that promote efl'ective group decision making. A well-known distinction in the literature on groups and teams involves task and maintenance functions (Levine & Moreland, 1991; Salas et al., 1992). Behaviors with a task orientation are concerned with maximizing production or meeting the group’s goals; behaviors with a maintenance orientation are most concerned with promoting/maintaining positive interpersonal relations and harmony within the group (Benne & Sheats, 1948; Thiabut & Kelley, 1959). The task and 53 maintenance functions now appear to have been accepted as fundamental dimensions for describing the behavior of group members. The definition of the synthesis role as defined by Mason and Mitroff (l 981) seems closely associated with the "task" function but, when implemented in groups, may also serve a maintenance function as well. Summarizing the results of earlier research, Patton et a1. (1991) identified a set of individual behaviors corresponding to these two basic dimensions. Behaviors that serve a task orientation include such things as initiating structure, stimulating communication, clarifying communication, summarizing, and consensus-testing. “Initiating structure” behaviors include proposing objectives to the group, introducing procedures, developing an agenda, and suggesting the group move on to new topics. “Stimulating communication” refers to direct requests for other group members to provide information or opinions. “Clarifying communication” pertains to efl‘orts to reduce confusion by asking questions or interpreting ideas. “Summarizing” behaviors provide a review of what has been said or accomplished so far. “Consensus-testing” behaviors are intended to provide some indication of the extent to which group members agree on what has been said or proposed. In domains other than psychology, researchers have identified similar sets of behaviors intended for use by facilitators or mediators intended to help groups under stressful decision making conditions. Martin (1983) listed five important behaviors for such circumstances: (1) Reflecting, (2) Silence and Attentive listening, (3) Asking for specifics, (4) Making I-statements, and (5) Focusing on areas of agreement. 
Frost and Wilmot (1978), in discussing tactics for intervening in group processes, noted the positive impact of behaviors such as (1) Being descriptive instead of judgmental, (2) Encouraging specificity, (3) Providing feedback, (4) Setting and keeping to an agenda and/or associated time limits, (5) Comparing and restating positions, (6) Summarizing, (7) Providing information, (8) Calling on persons in a non-threatening way and (9) Forging 54 commitment to negotiated plans. A common theme uniting these behaviors is their focus on drawing out information and structuring group activities while without provoking interpersonal conflict. Although possible to categorize these behaviors at various levels of specificity, it is possible to identify four general behavioral dimensions that serve to promote the eflecfive processing of information in groups fi'om these lists. These dimensions are: (1) Reflecting/summarizing what has been said by other group members, (2) Asking other group members clarifying questions or for specific information, (3) Integrating ideas and recommendations, and (4) Focusing/ structuring group activities and discussion items. These four behavioral dimensions comprise the basic elements of a broad construct which can be called process facilitation. Process facilitation is defined as a set of behaviors employed by one or more individuals within a group intended to help a group structure its activities and use its informational resources as effectively as possible. It seems likely that these behaviors will have positive effects on both information sharing and intragroup conflict in groups involved in the dialectical inquiry process. With regard to information sharing, process facilitation may promote information sharing in at least three difl’erent ways: (1) As post-hoe responses to summarization attempts, (2) In response to direct questions asking for clarification of confusing points and (3) As support for integrative directions proposed by the synthesizing individual(s). Process facilitation should also impact the level of intragroup conflict that develops within strategic decision making groups by de-emphasizing the potentially confrontational situations that develop around the acceptance and support of member ideas. By providing a neutral “third party” influence, the synthesis role in D1 can dilute the competitive focus instilled by the debate process, and provide a rubric for compromise, position change, and backing down without loss of face. 55 Integration This paper began by defining strategic decisions as complex, important, ill- structured problems or issues facing organizations. Although organizations confront many complex and important decisions, strategic decisions are distinctively characterized by their relevance to the entire organization, multifunctional representation, and lack of structure (Taylor, 1992). These characteristics imply that strategic decisions will involve large amounts of relevant data and multiple perspectives with regard to how objectives can best be attained (Tjosvold, 1985). As a result of this low structure, high information load and cross-functional relevance, strategic decisions are often made by groups of individuals from multiple departments within the organization (Koopman & Pool, 1990; Larson & Christensen, 1993). Koopman and Pool (1990) note that, by gathering specialists with diverse expertise fi'om diflermt departments, organizations seek to maximize their chances of high-quality decisions. 
In other words, for most organizations, strategic decision making is a group phenomenon. A Process Model of Group Decision Making Unfortunately, despite intuitive reasons to expect otherwise, it is clear that strategic decision making groups can, and do, make poor decisions— sometimes with disastrous results (Janis, 1972; Janis & Mann, 1977). Like all groups, strategic decision making groups are susceptible to "process loss" (Steiner, 1972; Hill, 1982). This paper identified two forms of process loss that may occur in decision making groups: restricted information sharing and poor integration of shared information. The focus of attention on information in complex, ill-structured situations is based on the assumption that a high level of information sharing is a necessary - although not suflicient - condition for high quality group decisions. Under normal circumstances, more information is usually better in that it provides more resources from which to form 56 the group product. In these situations where no group member could possibly know everything needed to create an optimal plan, it is necessary for groups to combine and integrate information available to multiple members. The failure to consider important information known to one or more individuals is likely to correspond to a failure to consider important information. At the same time, in many situations, high levels of information sharing may not be crucial for good decisions but rather optimal decisions, as individuals can couch information within high-quality recommendations without explicitly sharing information. Still, it seems reasonable to believe that, in most situations, high levels of information sharing will result in maximal decision quality. Research on "groupthink" (Janis, 1972; Janis, 1982) and biased information sampling (Stasser & Titus, 1985; Stasser et al., 1989) has called attention to the tendency for groups not to discuss information known to one (or only a few) group member(s). In the case of "groupthink," although the empirical evidence is scant, high group cohesion, external threat and a desire to reach consensus quickly are hypothesized to result in restricted consideration of information known to the various group members. In the case of biased information sampling, research suggests that groups tend to spend their time discussing information known to all group members. Both of these approaches suggest that low information sharing among group members leads to sub-optimal decision outcomes. Although it is clear that high levels of information sharing are necessary for optimum group decision making, decision quality is likely to be afi‘ected not only by the amount of information available to the group as a whole, but also the manner in which that information is used once it has been shared. A second form of process loss can occur when groups do not adequately utilize information at their disposal. Furthermore, a review of the literature suggests that intragroup conflict may inhibit the ability of groups to combine the information at their disposal (T josvold, 1985; Pruitt & Rubin, 1986; 57 Zander, 1994). Although the intrapersonal and interpersonal negative effects of intragroup conflict have received a great deal of attention in the literature, the effects of intragroup conflict on group—level information processing have received much less attention (Tjosvold, 1985; Zander, 1994). 
Conflict among group members has been found to produce a variety of afi‘ective, behavioral and cognitive consequences at the individual level, including anger, annoyance, frustration, hostility, insults, issue distortion, cognitive simplification and irrationality. In groups, conflict breaks down relationships, hinders communication, obstructs problem solving (Tillett, 1991). As group members become agitated and annoyed during group discussion, they tend to become defensive, close-minded and less willing to accept the views of others. Emotions become aroused, initial positions tend to harden and group members become hypersensitive to the threat of losing “face.” The desire to avoid losing face leads members to forego admitting that someone else might have a better way of doing things or even a useful recommendation. As certain members begin to dominate, the information shared by dissenting members of the group may tend to be left out of the collective plan. Thus, when levels of intragroup conflict are high, constructive synthesis of differing viewpoints seems unlikely as the information processing resources available to groups in the form of time and individual attention are diverted or reduced. Therefore, I propose that: H1: The level of intragroup conflict in strategic decision making groups will moderate the effect of information sharing on group performance. When intragroup conflict is low, a high level of information sharing will result in higher group performance. However, when intragroup conflict is high, high levels of information sharing will not result in higher group performance. The discussion of group processes so far suggests that information sharing among group members is essential for optimum decision making. Furthermore, we have hypothesized that intragroup conflict will moderate the relationship between information 58 sharing and decision quality. Figure 1 presents a model that summarizes the discussion of the processes involved in group decision making for ill-structured task conditions. In light of the many well-known fiascoes resulting from the failure of decision making groups to examine questionable assumptions and use all available information, a number of researchers have advocated the use of techniques that introduce controversy into decision making groups. Earlier in this paper, conflict and controversy were distinguished on the basis of their nature (i.e., affective versus cognitive) as well as their focus (i.e., directed at other members, directed at ideas). Accordingly, the introduction of controversy is designed to promote the critical examination of assumptions, recommendations and supporting information as well as inspire search for creative alternatives. Controversy can lead individuals to become motivated to explore and understand opposing views and arguments, appreciate the shortcomings of their own perspective, integrate useful aspects of others’ positions, develop a fresh vieWpoint and, as a result, make high-quality decisions (Tjosvold, 1985). In strategic decision making groups, controversy seems likely to promote information sharing, as members are confionted and called upon to give credible explanations for their positions. However, at the same time, in strategic decision making environments involving members with strong, entrenched beliefs and an interest in preserving "face," controversy seems likely to have a number of unintended negative side-effects as well. 
For example, group members may express their opinions directly but close-mindedly, or refuse to acknowledge the appropriateness of another group member's views in order to avoid the embarrassment of backing down from a position. As alternative positions harden, proponents may attempt to find weaknesses in opposing arguments, counterattack and undercut opposing positions in an effort to impose their own views (Zander, 1994). If handled inappropriately, disagreement over ideas may soon transform itself into interpersonal hostility. As a result, I propose:

H2: The level of controversy in strategic decision making groups will be positively related to the level of information sharing.

H3: The level of controversy in strategic decision making groups will be positively related to the level of intragroup conflict.

[Figure 1. The moderating effect of intragroup conflict. The figure depicts Information Sharing leading to Group Performance, with Intragroup Conflict moderating that relationship.]

The previous two hypotheses form the basis of a dilemma when it comes to considering the use of controversy-inducing methods such as dialectical inquiry: although controversy should serve to increase information sharing among group members, it should also result in high levels of conflict that prevent that information from being used properly. The existence of this dilemma is indirectly supported by the results of a growing body of studies that have measured affective outcomes and decision quality in groups using dialectical inquiry, devil's advocacy and consensus-seeking inquiry methods. These studies have tended to find that controversy-based methods such as DA and DI do produce higher-quality outcomes, but also tend to lead to higher levels of individual dissatisfaction.

At the same time, studies from a number of domains relevant to group performance suggest that group members may engage in a number of behaviors that facilitate interpersonal interaction and the use of information. Process facilitation has been defined as a set of behaviors that help provide the group with structure in a procedurally ambiguous environment as well as utilize the informational resources at its disposal by reviewing where the group has been and identifying where it needs to go. As a by-product, process facilitation may help quell conflict originating out of the confrontational atmosphere of the dialectical inquiry process. Whereas the assignment of all group members to one of two salient "sides" in a debate may accentuate the confrontational aspects of the task, the use of facilitative behaviors should weaken the "us versus them," "win-or-lose" mindset that might otherwise develop within the group as a result of the debate. In particular, facilitative behaviors may serve as a springboard for constructive discussion in the first awkward moments after debate has ended. Group members performing a facilitative function can identify variations of plan or counterplan, offer integrative solutions combining elements of both plan and counterplan, probe advocates for underlying reasoning and data, and generally provide a means for movement around impasses without the loss of face occurring on either side. By taking charge of the post-debate process and promoting understanding of the positions that have been taken, an effective synthesis process may reduce the longevity and severity of conflicts that do arise among members and promote information sharing through clarification, integration and imposed structure.
Thus, I propose the following hypotheses:

H4: The level of process facilitation in strategic decision making groups will be negatively related to the level of intragroup conflict.

H5: The level of process facilitation in strategic decision making groups will be positively related to the level of information sharing.

Figure 2 displays an expanded process model of group decision making showing the effects of Controversy and Process Facilitation.

[Figure 2. A Process Model of Group Decision Making. The figure shows Controversy and Process Facilitation influencing Information Sharing and Intragroup Conflict, which in turn relate to Group Performance.]

A Prescriptive Model of Group Decision Making

The process model of group decision making just described has the potential to inform our understanding of the ideal intervention into group decision making. The literature on group decision making interventions has focused on increasing the information sharing among group members through the introduction of structured conflict (i.e., DA and DI). As a result, the ideal group intervention is one that maximizes information sharing among group members while minimizing the intragroup conflict that results from this sharing. Given that DI has never been implemented with explicit assignment of the synthesis role, it is useful to compare the two DI variations (with synthesis role, without synthesis role) to what may perhaps be the default group decision making norm in organizations: group member consensus.

Research has found that DI implemented without explicit assignment of the synthesis role ("Traditional DI") has tended to produce better group decision outcomes but lower affective outcomes for members, possibly as a result of conflict engendered in the dialectical process. It seems reasonable to expect that DI in either form will continue to lead to better collective decisions but also more intragroup conflict. On the other hand, when comparing the two DI techniques, we might expect that DI with formal assignment of the synthesis role ("Synthesis" DI) will result in both increased information sharing and, to some degree, less intragroup conflict. Many process facilitation behaviors fall very naturally to the member (or members) assigned the synthesis role in the DI process. The presence of one or more neutral facilitative group members in the DI process should allow groups to enjoy high information sharing AND relatively low levels of intragroup conflict. Thus I propose:

H6: Groups employing Synthesis DI will exhibit higher levels of information sharing than groups employing Traditional DI, while groups employing Traditional DI will in turn exhibit higher levels of information sharing than groups employing Consensus.

H7: Groups employing Synthesis DI will exhibit higher levels of intragroup conflict than groups employing Consensus, but lower levels of intragroup conflict than groups employing Traditional DI.

Finally, as a result of sharing more information than Consensus-Seeking and Traditional DI groups, as well as generating less intragroup conflict than Traditional DI groups, Synthesis DI groups should incorporate the best features of both structured conflict and consensus-seeking methods. Thus, I propose the following:

H8: Groups employing Synthesis DI will exhibit higher levels of group performance than groups employing Traditional DI, while groups employing Traditional DI will in turn exhibit higher levels of group performance than groups employing Consensus.
Figure 3 displays a model of group decision making that adds the prescriptive elements discussed above to the expanded process model of group decision making. However, it is important to note that the model identified here is limited to decision making groups facing certain task conditions. First and perhaps most importantly, the model is constrained to situations where task-relevant information is distributed across members in such a fashion that it is impossible or impractical for one individual to effectively accomplish the task. Indeed, a central feature of the model identified here -- information sharing -- makes no sense in situations where all group members possess essentially the same knowledge of the subject (e.g., juries). Furthermore, given the lack of feedback loops, this model most directly addresses the process and performance of groups or teams assembled in an ad-hoc fashion for a particular task. Finally, this model may be limited to groups or teams that meet in a face-to-face manner. In all likelihood, the relationships among inquiry method conditions, process variables and performance might very well be different for groups interacting through information-restricted media (e.g., computer-linked networks). Such media not only limit the richness of interpersonal interaction but introduce a temporal dimension to interaction not reflected in the present model. In reality, as a whole these constraints limit the relevance of the model to ad-hoc decision making groups interacting in a face-to-face manner and facing broad, diffuse problems or issues that involve large amounts of specialized information. As such, this model is most relevant to organizational decision making groups brought together for a single major "event" such as creating a strategic plan and/or setting organizational policies. Therefore, although some group tasks can be accomplished by individuals, this model applies only to tasks that must be performed by decision making groups because of the physical and information processing limitations associated with individuals.

[Figure 3. A Prescriptive Model of Group Decision Making. The figure adds Inquiry Method as a prescriptive antecedent to the expanded process model of Controversy, Process Facilitation, Information Sharing, Intragroup Conflict and Group Performance.]

Summary

Decision making groups can suffer from at least two types of process loss relevant to the processing of information known by individual members: failure to share information with the group and failure to optimally use information which is shared. A good deal of research on improving strategic decision making has focused on two particular methods that purport to introduce controversy as a means of promoting information sharing among group members. Unfortunately, the process of introducing controversy probably leads to conflict among members, which may then prevent the optimal integration of information which has been shared.

There is growing evidence in the literature that the most salient distinction between traditional DA and DI techniques -- "critique" versus "counterplan" -- may be relatively unimportant. On the basis of recent studies using groups, it appears that critique is interchangeable with counterplan in terms of affecting decision making quality (Chanin & Shapiro, 1984; Schweiger et al., 1986; Schweiger et al., 1989). At the same time, these studies have implemented structured controversy in different ways -- two two-person subgroups versus three one-person subgroups and an apparently "neutral" fourth member.
With the benefit of a process model depicting group-level information processing, I hypothesized that Synthesis DI incorporating facilitative member roles would result in as much if not more information sharing than Traditional DI and generate less intragroup conflict. This study can extend our understanding of the effectiveness of decision making groups in four ways. First, this study seeks to clarify the role of information sharing, intragroup conflict, controversy and process facilitation in determining group performance on a complex, ill-structured task. Second, this study will provide a further assessment of the relative efficacy of structured conflict methods and consensus-seeking approaches to group decision making. Third, it attempts to replicate the findings of Stasser with regard to the failure of decision making groups to identify "hidden profiles," extending our knowledge of that issue to ill-structured situations with meaningful measures of group performance. Finally, this study will provide a preliminary test of the relative merits of a modified form of DI hypothesized to result in more information sharing and lessened intragroup conflict. Ultimately, this study will help to provide a better understanding of how structural interventions can be designed to improve both individual and group outcomes in decision making settings via the promotion of information sharing and the reduction of intragroup conflict.

METHOD

Participants

Research participants in this study were 240 college students enrolled in one of several undergraduate psychology courses offered at a large public university in the Midwest. Individuals participated in this study as part of four-person groups, and data were collected on 60 groups in total. Of the 60 groups, 14 were composed entirely of females, 17 groups had three females and one male, 18 groups were composed of two females and two males, eight groups were made up of three males and one female, and only three groups consisted entirely of males. Individuals who took part in the study received course credit or extra credit for participating. In addition, a financial incentive was employed to maximize participant motivation. In each of the three study conditions described below, top-performing teams were awarded $80 ($20/person), second-place teams were awarded $60 ($15/person), and third-place teams were awarded $40 ($10/person).

Task

The task employed in this study simulated the operations of a hypothetical regional U.S. airline, "SouthEast Airlines" (Devine, 1995). The simulation is intended to be a moderately realistic strategic decision making task conducted in a face-to-face setting. In "SouthEast Airlines," groups of four individuals represent the top management team of the company, charged with creating a strategic business plan for an upcoming period of airline operations. Each member of the group is assigned a position as an executive vice-president in the company responsible for one and only one of the following areas: Flight Operations, Industry Analysis, Marketing, or Finance. The overall object of the task is to formulate a plan resulting in maximum profit for the organization. In order to create such a plan, each group must make numerous interdependent decisions involving choices about such things as service routes, aircraft route assignments, facility locations, fare prices, advertising media and spending levels, as well as aircraft sales and purchases.
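To make this decision space concrete, the sketch below represents one possible group plan as a simple data structure. The field names, aircraft labels and dollar figures are illustrative assumptions introduced here for exposition only; the simulation's actual planning form and scoring algorithms are those contained in Appendix A.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch of a "SouthEast Airlines" strategic plan as a data structure.
# Field names, aircraft labels and figures are invented for exposition; the actual
# planning form and scoring algorithms appear in Appendix A.

@dataclass
class RouteDecision:
    city_pair: str        # route to be serviced, e.g., "Atlanta-Miami"
    aircraft_type: str    # aircraft assigned to the route
    daily_flights: int    # number of daily flights on the route
    fare: float           # fare price charged on the route
    ad_spending: float    # advertising dollars allocated to the route

@dataclass
class StrategicPlan:
    routes: List[RouteDecision] = field(default_factory=list)
    aircraft_purchases: List[str] = field(default_factory=list)
    aircraft_sales: List[str] = field(default_factory=list)

# A toy "consolidation"-style plan: few routes, many daily flights.
plan = StrategicPlan(
    routes=[
        RouteDecision("Atlanta-Miami", "twin-jet", daily_flights=4,
                      fare=129.0, ad_spending=20_000.0),
        RouteDecision("Atlanta-Charlotte", "turboprop", daily_flights=6,
                      fare=89.0, ad_spending=10_000.0),
    ],
    aircraft_purchases=["fuel-efficient twin-jet"],
)
print(f"Plan covers {len(plan.routes)} routes and buys {len(plan.aircraft_purchases)} aircraft")
```

Viewed this way, a group's strategic plan is a bundle of interdependent route-level and fleet-level choices whose joint consequences determine profit.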
The simulation consists of two major parts: (1) an Individual Preparation phase, in which participants are allowed to study the information provided to them, and (2) a Group Discussion phase, in which group members are allowed to interact in the process of forming their collective strategic plan. (See the Procedure section below for further details on the experimental protocol.) Group performance in the simulation is defined as the profit earned by the group's plan when resolved according to the simulation's algorithms. Appendix A contains all task materials associated with the "SouthEast Airlines" simulation.

"SouthEast Airlines" is intended to represent real-world situations where information relevant to group decision making is distributed across a number of "experts." In "SouthEast Airlines," much of the information available to the group is distributed across the four vice-president positions mentioned above, each of which represents a different functional area of the organization. Prior to beginning the simulation, after having been assigned to one of the four vice-president positions, each group member receives a packet of information corresponding to that particular position. The information contained in each position packet can be divided into two parts: information provided to ALL group members and information provided to only one member (although a few pieces of information were provided to two players out of logical necessity). All group members received a document entitled "SouthEast Airlines' Year-End Report," containing information about the company's operations in the last fiscal year. Each vice-president also received a packet of information ("Memo") from his or her respective staff (see Appendix A also for these documents). The material contained in each vice-president's memo concerned information relevant to that particular function within the organization. Table 2 provides a summary of the information provided to the four vice-presidents in "SouthEast Airlines."

Table 2. Information Provided to Vice-President Positions in "SouthEast Airlines."

Year-End Report (ALL) -- Last year's: operational routes; aircraft assignments; route market share; fare prices; revenue and costs (total and by route); route return on investment (ROI)

Memo to VP Flight Operations -- Last year's fuel costs (by route); aircraft operating characteristics; effect of number of daily flights, aircraft accommodations, and number of flight staff on market share

Memo to VP Industry Analysis -- Potential expansion routes; expected competition levels; expected passenger demand values; round-trip distances; industry averages for flight staff

Memo to VP Marketing -- Setting optimum fare prices; advertising costs; advertising effects; advertising media

Memo to VP Finance -- Personnel cost projections; facility cost projections; existing loans; aircraft sales

As can be seen from Table 2, each role in the simulation received basic information describing operations during the last fiscal year plus some important information which no other participant received. Given the sub-optimal quality of the existing strategic plan, each vice-president was in a position to recommend some changes that would improve upon existing operations. However, although it was possible to improve profit simply by adopting the unilateral suggestions of each vice-president, the simulation was designed so that each group, in attempting to integrate the recommendations of each position, would have to resolve the dilemma of "expansion" versus "consolidation."
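The distributed-information structure summarized in Table 2 can be sketched as a mapping from information cues to the roles that hold them before discussion: a cue held by every role is "common," while a cue held by a single role is "unique." The cue labels in the sketch below are invented stand-ins, not the simulation's actual cues.

```python
# Sketch of the common/unique cue structure implied by Table 2. The cue labels
# are invented stand-ins for the simulation's actual information cues.

ROLES = {"FlightOps", "IndustryAnalysis", "Marketing", "Finance"}

cue_holders = {
    "last_year_route_roi":       set(ROLES),            # Year-End Report -> common to all
    "fuel_cost_by_route":        {"FlightOps"},          # Flight Operations memo -> unique
    "expected_passenger_demand": {"IndustryAnalysis"},   # Industry Analysis memo -> unique
    "advertising_effects":       {"Marketing"},          # Marketing memo -> unique
    "facility_cost_projections": {"Finance"},            # Finance memo -> unique
}

common = [cue for cue, holders in cue_holders.items() if holders == ROLES]
unique = [cue for cue, holders in cue_holders.items() if len(holders) == 1]

print("Common cues:", common)
print("Unique cues:", unique)
```

This is the same hidden-profile logic studied by Stasser and his colleagues: a high-quality plan requires pooling cues that no single role holds.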
In order to allow for the examination of dialectical inquiry with "diametrically opposed" plans, information was provided to the Vice-Presidents of Flight Operations and Industry Analysis leading them to adopt one or the other of these two strategies. In particular, information provided to the Vice-President of Flight Operations supported a "consolidation" strategy based on dropping high-cost existing routes, re-allocating aircraft, increasing daily flights and flight stafi‘, and buying a new fuel-efficient aircraft. The Vice-President of Industry Analysis, on the other hand, received information suggesting the need for an "expansion" strategy centered on dropping most of the existing routes, adding many new routes with limited daily flights to each city, and buying a number of large, expensive new aircraft. Information provided to the Vice-Presidents of Marketing and Finance was theoretically "neutral" with regard to each of these strategies. However, since no one other than the Vice President of Industry Analysis had information on new routes, the situation in fact tended to begin as a "three-against-one" coalition in favor of an efficiency-consolidation approach. To reinforce the underlying dialectic in the simulation, certain types of information were provided in a probabilistic fashion. In particular, the information provided to the Vice-President of Industry Analysis concerning the competition level on 73 various routes was given as a percentage likelihood, while passenger demand values were provided in terms of a range. This uncertainty was designed to add realism to the task in that outcomes could not be perfectly determined in advance, as well as allow assumptions to enter into discussion as participants were forced to "fix" random values in their own minds in order to create a cohesive plan. In summary, "SouthEast Airlines" is intended to be a low-fidelity simulation involving: (1) uncertainty, (2) heavy information processing demands and (3) the existence of multiple perspectives and approaches with regard to how the group can best satisfy its objective. The simulation incorporates a number of concepts relevant to strategic management, including price-demand relationships, competition, environmental tmcertainty, local monopoly, market share, advertising, operating costs. Information cues in "8th Airlines" were distributed so that no group member had access to all (or even the majority) of the available information. Some information was known to only one member, while most of the information was known to all members before group discussion. However, all rules and information necessary for creation of a high-quality plan was provided. Thus, as in Stasser's previous work on information sampling, groups were in the position of having to "uncover" the knowledge possessed by all members in order to resolve the various trade-offs that arise in the course of specifying operations. Research Design This study involved one manipulated between-groups factor, Inquiry Method, composed of three levels: (1) Consensus-Seeking (CS), (2) Traditional Dialectical Inquiry (TDI), and Synthesis Dialectical Inquiry (SDI). Groups were randomly assigned to one level of the Inquiry Method factor with the constraint that all conditions must have run once before any condition could repeat. This established a repeating cycle of three- group sequences where each condition appeared once in the sequence in a random order. 
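The assignment rule just described amounts to block randomization with a block size of three. A minimal sketch of that rule, assuming only that each block of three is an independent random permutation of the conditions, looks like this:

```python
import random

# Minimal sketch of the constrained assignment rule: groups are assigned to
# conditions in blocks of three, each block a random ordering of CS, TDI and
# SDI, so every condition runs once before any condition repeats.

CONDITIONS = ["CS", "TDI", "SDI"]

def blocked_assignment(n_blocks, seed=1997):
    rng = random.Random(seed)      # seed chosen arbitrarily for reproducibility
    schedule = []
    for _ in range(n_blocks):
        block = CONDITIONS[:]      # copy so the master list is never reordered
        rng.shuffle(block)
        schedule.extend(block)
    return schedule

schedule = blocked_assignment(18)  # eighteen full blocks = 54 groups, 18 per condition
print(schedule[:6])
print({condition: schedule.count(condition) for condition in CONDITIONS})
```

Blocking in this way keeps the three conditions balanced over the course of data collection.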
Eighteen full sequences were conducted (54 groups, 18 of each condition), and then a 74 selected number of each condition were run in a randomly determined order in an attempt to arrive at equal numbers of each condition after some groups were removed due to the loss of data or invalid group plans. In total, 19 CS groups, 20 TDI groups and 21 SDI groups were run, although some groups were dropped from one or more analyses due to missing data (see section on missing data in the Results for further details). Inquiry Method was manipulated by providing videotaped instructions to all groups concerning the procedures to be used in completing their task. Instructions were presented to groups immediately before the Individual Preparation phase, and a short verbal reminder was provided immediately prior to Group Discussion. A printed copy of the group's inquiry method instructions was also provided in the role packet of information given to each participant so that instructions would be available at all times. The CS condition was intended to provide a baseline condition with respect to the two DI conditions. Appendix B displays the instructions presented by videotape and in writing to participants in groups using the consensus-seeking method. As evident in the instructions, groups were given general instructions to present and consider all views, manage conflict productively, and avoid adopting a plan immediately if everyone seems willing to accept it. Groups were instructed to discuss ideas until all members were willing to accept the features of a particular plan, at which point the group was said to have reached consensus. No special role assignments were made in the CS condition and group discussions were not constrained to begin in any particular fashion. The Traditional DI (TDI) condition was intended to represent dialectical inquiry in a fashion similar to the way it has been operationalized in the literature - with explicit thesis and antithesis assignments, but no assignment of the synthesis function. Appendix C presents the instructions given to groups employing the Traditional DI method. Inspection of this appendix shoWs that TDI groups were instructed to begin Group Discussion with the following sequence of events: (1) The VP of Flight Operations 75 presents Plan A, (2) The VP of Industry Analysis presents Plan B, (3) The VP of Industry Analysis critiques Plan A, and (4) The VP of Flight Operations critiques Plan B. The dialectic process was to occur in the first 20 minutes of Group Discussion, after which general discussion among all members was allowed to begin and continue until a plan had been reached which was acceptable to all group members. In the TDI condition, the Vice-Presidents of Marketing and Finance were given no specific role and were asked to hold their questions and comments until after the dialectical process between the other two vice-presidents was finished. As noted above, information provided to the two vice-presidents involved in the creation, presentation and critique of plans was explicitly designed to yield two plans that were "diametrically opposed" to one another. These two individuals also received a supplementary set of instructions in order to help them enact their role in the dialectical process. See Appendix D for the role instructions given to the two vice-presidents involved in the dialectical process. 
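For a compact view of the Traditional DI condition, the structured portion of Group Discussion can be summarized as the ordered sequence of steps below. The step wording is a paraphrase of the description above rather than the verbatim instructions in Appendix C, and the encoding itself is purely illustrative.

```python
# Purely illustrative encoding of the structured (first 20 minutes) portion of
# the Traditional DI condition; Appendix C contains the actual instructions.

TDI_DIALECTIC_STEPS = [
    ("VP Flight Operations", "presents Plan A (consolidation)"),
    ("VP Industry Analysis", "presents Plan B (expansion)"),
    ("VP Industry Analysis", "critiques Plan A"),
    ("VP Flight Operations", "critiques Plan B"),
]

DISCUSSION_MINUTES = 75   # total Group Discussion time allowed
DIALECTIC_MINUTES = 20    # the structured debate occupies the first 20 minutes

for actor, action in TDI_DIALECTIC_STEPS:
    print(f"{actor} {action}")
print(f"General discussion follows for up to {DISCUSSION_MINUTES - DIALECTIC_MINUTES} minutes")
```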
The Synthesis DI condition was designed to employ dialectical inquiry in a manner consistent with that originally discussed by Mason (1969) and Mason and Mitroff (1981) by explicitly assigning the synthesis role to members of the group. Appendix E depicts the instructions given to groups in the SDI condition. As can be seen from the appendix, the SDI condition was implemented in exactly the same fashion as the TDI condition with one exception: the Vice-Presidents of Marketing and Finance were instructed to take a synthesis role after the 20-minute dialectical process involving the other two vice-presidents had ended. Specifically, these two vice-presidents were asked to summarize what had taken place in the "debate," ask questions about unclear recommendations, integrate ideas where possible and provide some structure for the remainder of the group's discussion. Similar to the role instructions provided to the dialectic presenters/critics, the vice-presidents asked to play a synthesis role were given detailed instructions in order to help them fulfill their role in the group's discussion. See Appendix F for a copy of the synthesis role instructions. Table 3 provides a summary of the role assignments in each study condition.

Table 3. Role Assignments by Study Condition.

VP Flight Operations -- Consensus-Seeking: none; Traditional DI: create/present plan (consolidation); Synthesis DI: create/present plan (consolidation)

VP Industry Analysis -- Consensus-Seeking: none; Traditional DI: create/present plan (expansion); Synthesis DI: create/present plan (expansion)

VP Marketing -- Consensus-Seeking: none; Traditional DI: none; Synthesis DI: synthesis role

VP Finance -- Consensus-Seeking: none; Traditional DI: none; Synthesis DI: synthesis role

Two manipulation checks were conducted to assess the degree to which groups effectively employed their assigned inquiry method. The first check involved several items in the post-experimental questionnaire asking respondents about the quality of role performances in the group. (See Appendix N for the post-experimental questionnaire measure.) The second check involved a simple dichotomous judgment, made by a single rater on the basis of the group's videotaped interaction and discussion, regarding whether the group should be removed from the analysis. The data from these two checks were used to identify groups which did not take the task seriously and/or failed to learn the task well enough to make their results meaningful.

Procedure

Groups for this study were formed using standard sign-up sheets circulated to undergraduate psychology courses. Individuals were scheduled to arrive at the lab in groups of six, of which four individuals were needed to form a group for the "SouthEast Airlines" simulation. In the event that more than four persons showed up for a session, the first four individuals were formed into a group and "extra" individuals were moved to another room where they participated in a study unrelated to the one described here.

The study was conducted in four phases: Pre-Experimental, Individual Preparation, Group Discussion, and Post-Experimental. The Pre-Experimental phase lasted approximately 20 minutes, the Individual Preparation phase lasted 60 minutes, groups were allowed up to 75 minutes in the Group Discussion phase, and the Post-Experimental phase lasted about 10 minutes. Table 4 provides a chronological listing of the activities in this study broken down according to these four phases. The remainder of this section describes each of these phases in more detail.

Table 4. Sequence of Events in "SouthEast Airlines."

Pre-Experimental phase
1. Participants arrive & receive "Overview"
2. Wonderlic test administered
3. Individuals randomly assigned to vice-president positions
4. Individuals receive position-specific packets
   a. "Year-End Report"
   b. Position-specific "Memo"
   c. Task knowledge measure
   d. Inquiry Method instructions
   e. Role instructions (in TDI and SDI conditions)
5. Groups watch tape with Inquiry Method instructions

Individual Preparation phase
6. Individuals allowed 60 minutes to prepare for Group Discussion
   a. In TDI and SDI conditions, VPs of Flight Operations and Industry Analysis prepare plans
   b. In SDI conditions, VPs of Marketing and Finance review instructions on how to fulfill synthesis role

Group Discussion phase
7. Groups receive inquiry method reminder
8. Groups allowed 75 minutes to fill out planning document
   a. CS groups have 75 minutes of unstructured discussion
   b. SDI and TDI groups complete dialectical process in first 20 minutes, then discuss freely for the remainder of the phase

Post-Experimental phase
9. Groups complete post-experimental questionnaire
10. Groups debriefed and dismissed

The pre-experimental portion of the study began when four individuals had arrived at the experimental room and were seated around a large rectangular table. As they arrived, group members were given a written document providing an overview of the experiment, identifying their goal as a group, and describing the simulation environment (including the general algorithms for figuring revenue and costs). Sessions were started five minutes after the fourth group member arrived, ensuring that all participants had at least five minutes to look over their introduction (determined in pilot testing to be sufficient time to read through the document at a comfortable pace). Five minutes after the arrival of the last individual, the Wonderlic Personnel Test was administered to the four group members. When the test was finished, group members were randomly assigned to a vice-president position in the group and moved to a specific location at the table so that the respective positions were always in the same seats, providing a standardized spatial arrangement and making it easier for observers watching the videotapes.

Once group members were seated in their appropriate spots, the experimenter passed out a packet of information to each member of the group and set out name plates that identified each group member's position title. Each group member received a packet of information containing: (1) a year-end report, (2) role-specific information relevant to future planning, (3) a task knowledge measure, (4) a hard copy of the group's inquiry method instructions, and, in the two DI conditions, (5) specific instructions relevant to performing their assigned role during group discussion (i.e., presenting and critiquing plans, enacting a synthesis role). After distributing the position packets, the experimenter reviewed the purpose of the task and the function of each component in the information packets provided. The experimenter then asked if there were any questions and proceeded to play a short videotape detailing the instructions for the inquiry method to be used by the group. The videotaped instructions were identical to the instructions given to each member earlier in his or her role packet, and participants were instructed to read along on their hard copy as they listened to and watched the videotape.
The video provided instructions concerning how the group should structure its activities during group discussion and, for the DI conditions, identified the roles each group member would have and the sequence of activities that should be followed once the Discussion Phase began. As noted earlier, in both the Traditional and Synthesis DI conditions, participants playing the Vice-Presidents of Flight Operations and Industry Analysis were asked to create their own individual plans and present them to the group at the start of group discussion. These individuals were also instructed to provide a critique of the "opposing" plan after the two plans had been presented. Additionally, in the Synthesis DI condition, the vice presidents of Marketing and Finance were instructed to act in a facilitating role and were given instructions regarding this role. After the videotaped instructions related to inquiry method were played, participants were again provided with an opportunity to ask questions. After answering any questions, the experimenter set a timer for 60 minutes and then left the room, returning every so often to check on the group and answer questions. This marked the beginning of the Individual Preparation phase. In the Individual Preparation phase, group members were allowed 60 minutes to review the materials provided to them and prepare for the upcoming Group Discussion phase. During the Individual Preparation phase, group members were provided with scratch paper and calculators and were allowed to write on the materials they had been given. Individuals were instructed to use the time allowed to prepare for the upcoming group discussion subject and prepare for their assigned role during the Group Discussion phase. Group members were also instructed to complete the task knowledge measure 82 before the end of the Individual Preparation phase, and were given several reminders to do so during the course of the phase. After the 60 minutes allocated for Individual Preparation had expired, the Group Discussion Phase began. The experimenter re-entered the room, gave a brief reminder of the instructions the group were given before the Individual Preparation phase, and started a video camera set up to film the group's discussion. The final action taken by the experimenter before leaving the main room again was to start a clock timer set for 75 minutes and turn it so that all group members could see its face. Each group was allowed 75 minutes to arrive at a collective group strategic plan covering desired operational activities. Groups were instructed to complete their strategic plan using the appropriate form and return it to the experimenter when he or she returned at the end of the Group Discussion phase. Again, Consensus-Seeking groups were given general instructions to explore all options, press for clarification, avoid win-lose statements and refrain from conflict- reducing "tricks." In the DI conditions, groups utilized a format wherein the two plans created by individual members will be presented and debated. In the T'DI condition, the two group members not involved in the creation of the plan or counterplan were given no special instructions. 
SDI groups received the same instructions as the TDI groups with one exception: after both plan and counterplan had been presented, the two group members not involved in their creation were to summarize the major points of each plan, ask questions to clarify confusing statements or recommendations, offer integrative or compromise plans, and otherwise provide a structure for the remainder of the group's discussion. During the course of the 75-minute Group Discussion phase, the experimenter checked at the door to monitor progress toward completion and ensure that groups stayed on task.

When groups were finished (or after the expiration of the 75 minutes allowed), the experimenter re-entered the room, collected the group's plan and administered the post-experimental questionnaire. Upon completing this questionnaire, participants were thanked for their participation, told they would be contacted in the event that they won a prize for superior performance, and given a debriefing sheet. At this point, participants were invited to ask any further questions and then dismissed.

Measures

As the primary focus of the study, one outcome variable and four process variables were measured at the group level: (1) Group Performance, (2) Information Sharing, (3) Intragroup Conflict, (4) Process Facilitation and (5) Controversy. In addition to these variables, three group composition variables were also measured and used as control variables in the analyses: cognitive ability, task knowledge, and sex composition. In this section, I describe each construct, identify its dimensions, and discuss how a measure of each was generated. In the next section, a discussion of rater training is offered. Because of complications in the estimation of measurement reliability, information on reliability is reserved for the Results.

Group Performance represents the degree to which the group was able to accomplish its primary goal of creating a plan which would bring in more profit than the previous fiscal year. Group Performance was determined in a nonjudgmental fashion by using a set of algorithms to determine the profit that would have been generated for each company in the simulation environment. More specifically, the decisions that groups made about such things as the routes to be serviced, the type of plane assigned to each route, the number of daily flights, and so on, were translated into revenue and costs for each group. Group Performance was determined by calculating total assets (i.e., cash plus revenue) and subtracting total liabilities (costs plus debts). All group members were provided with the basic terms and formulas used to determine revenue and costs in the "Overview" materials before the beginning of the study. Since the strategic plan created by the group represents the entire group regardless of the distribution of effort expended by individual members, Group Performance was treated as a group-level outcome variable.

Process Variables. Information Sharing refers to the act of making an information cue available to all group members for use in group-level planning. In "SouthEast Airlines," information cues are bits of factual information provided to group members concerning: (1) the reported level or status of variables during the last year of operation, (2) the projected level or status of variables in the upcoming (i.e., simulated) year of operation, and (3) the relationships between variables.
Examples of each type of information cue include (1) the average fare charged by the competition on a given route last year, (2) the expected passenger demand for a particular route next year, and (3) the formula for route revenue, respectively. At the beginning of the simulation, some of the information cues in the simulation were known to all group members, a few to two or more group members, and most to only one group member. Items available to all members at the beginning of the simulation are termed common, while those known only to one member (or in a few cases, two members) are designated as unique. An instance of information sharing was considered to have occurred when an information cue known to only a subset of group members was spoken aloud dming group discussion. Information cues known to two members before discussion were considered to be “unique” as this seemed closest to the spirit of Stasser’s earlier work where “shared” (i.e., “common”) information was known to all group members before discussion. Of the 700 information cues available in the simulation, only 12 cues (<2%) fell into the category of being known by two members before discussion. 85 Information Sharing measures were obtained from video tapes of group discussions which allowed the coding of video and audio behaviors of group members. A measure of Information Sharing was calculated by having a trained observer count the number of unique information cues shared during group discussion (see discussion of rater training to follow). In other words, information cues available to only one (or two) group members before discussion which were spoken aloud were counted. Videotapes were scored by one rater using a checklist of the information cues provided to groups in terms of the number of individual-level cues shared by each group member. Appendix G contains the checklist used by raters which contains all information cues presented to groups in the simulation. Each group’s Information Sharing score was determined by adding up the number of unique information cues shared during discussion and using this total score to represent the group in a fashion similar to Stasser et al. (1989). Intragroup conflict is a multidimensional construct characterized by negative affect, hostile interpersonal behaviors and distorted cognition within the group. Although conflict tends to occur between identifiable subsets of group members, the literature suggests that conflict afl‘ects the entire group. For instance, if two members are at each other’s throats, this can affect the entire group by (1) directly consuming scarce group resources (e.g., time) and (2) arousing negative affect in those members who are “watching.” Group members who are watching may become tense or anxious, frustrated at the time wasted in the dispute, and angry at those arguing. This is consistent with Pruitt and Rubin (1986) who noted that conflict tends to spread out over all group members, lose its focus, encompass broader issues and become more general. Intragroup conflict was measured in two ways: (1) with a 10-item questionnaire using a five-point Likert response format, and (2) by having trained raters watch the videotape of each group during discussion and rate the degree of intragroup conflict exhibited by each group. Appendix H contains the instrument used by raters to provide 86 judgments of intragroup conflict. The lO-item Intragroup Conflict questionnaire measure is presented in Appendix I. 
An internal consistency reliability estimate was performed on responses to these 10 items and two items displaying a negative item-total correlation were removed. (These Items are marked with an asterisk in Appendix I). The coefficient alpha reliability estimate for the final eight-item scale was .78. Individual scale scores on the questionnaire measure of conflict were then created by summing the unweighted responses on the final eight items and a score was then assigned to the group by averaging the scores of the four individual group members. (See the Results for further discussion concerning the meaningfulness of this procedure.) Process Facilitation is the degree to which the individuals in a group engage in behaviors serving to facilitate the processing of information during group decision making. Process facilitation behaviors should aid groups in identifying and using the informational resources at their disposal. Four general sets of behaviors can be distinguished in the literature: (1) Summarizing/reflecting, (2) Asking questions/clarifying, (3) Integrating and (4) Focusing/structuring. Although behaviors are engaged in by individual group members, they benefit the entire group. Therefore, by definition, process facilitation is something done by individuals in the interests of the group. The instrument used by raters to provide ratings of process facilitation on each of its four component dimensions is included in Appendix J. An overall score on the measure was generated for each group by summing the unweighted observer ratings for the four component dimensions. Controversy represents the degree to which individuals in a group hold different ideas, opinions, theories, and viewpoints with regard to how the group's goal can best be achieved. In a practical sense, controversy can be seen as the extent to which group members (1) indirectly challenge views or positions taken by other members of the group, (2) explicitly disagree with views or positions expressed by other members concerning 87 how things should be done and (3) present multiple options, alternatives and plans for accomplishing the group's goal. Furthermore, although past research has tended to view controversy as a construct that exists within an individual decision maker, it is treated here as a group-level construct involving the identification of multiple "pa " to the group goal. The instrument used by raters to provide judgments of controversy within groups is contained in Appendix K. A group-level index of controversy was then formed by summing the unweighted observer ratings for each of these three component dimensions. Because of the poor reliability associated with this measure of controversy (to be addressed in the Results section), a second measure of controversy was generated using new raters and a modified ratings instrument. In an attempt to improve reliability, a post hoc effort was made to identify the specific types of substantive controversy which might occur in "SouthEast Airlines" and the videotapes of group discussion were recoded by the raters. Appendix L contains this alternative measure of controversy. As can be seen, two types of controversy were distinguished in this measure: (1) task-related controversy (or strategy-related issues) and (2) process-related controversy (structure and agenda-related issues). The various dimensions of controversy were collectively generated by the experimenter and the two research assistants who provided the original controversy ratings. 
As with the original controversy scale, group-level scores on the revised measure of controversy were generated by summing the unweighted ratings on each of the nine dimensions. See the section on "Quality of Measures and Manipulations" in Results for further details. Comppsition Variables. In addition to the process variables described above, three group composition variables were also measured - cognitive ability, task knowledge and sex/ gender ratio. These variables are “composition” variables in the sense that, for each variable, the score assigned to the group is a perfect ftmction of the four 88 individual values. Although there are certainly many dimensions along which the members of a group may differ, cognitive ability and gender were included because of empirical evidence indicating their effects on group process and/or performance (e.g., Tziner & Eden, 1985; Wood, 1985; Sundstrom & F utrell, 1993; Milliken & Martins, 1996). Task knowledge was included on the basis of its logical relationship with group performance. In contrast to the group structure variable (i.e., inquiry method), these three composition variables were not manipulated but simply measured for use as control variables or examination in supplementary analyses. Cognitive Ability refers to the degree to which the members of a group are able to learn new tasks, adapt to novel situations and problem-solve. In complex strategic decision making tasks, it is reasonable to assume that the cognitive ability of all members is important. Each member has the “duty” of recognizing, comprehending and sharing important information that he or she possesses. After information is “out on the table” for all group members, each group member can add to, or withhold fiom, the collective product being fashioned. As a result, because of the additive nature of these processes, it seems appropriate to generate a score for each group based on the average individual cognitive ability of each of its members, although it certainly would have been possible to argue for and employ other combination rules. Ultimately, given the dearth of research on the topic, the decision to use the averaging function was somewhat arbitrary. Individual Cognitive Ability was measured by giving each member of the group a short, paper-and-pencil measure of cognitive ability, the Wonderlic Personnel Test. (This measure is not shown due to the proprietary nature of the instrument). The "Wonderlic Personnel Test & Scholastic Level Exam: User's Manual" (1992) notes that the Wonderlic measures of "g," or general intelligence, and demonstrates excellent reliability as well (with various internal consistency and test-retest estimates ranging from .84 to 89 .92). Scores for individual members were averaged to create a mean score representing the entire group. Task Knowledge was defined as the amount of role-specific knowledge learned by the members of each group. Because of the design of the task, it was possible to distinguish between two types of task-related knowledge: general and position-specific. General knowledge questions pertained to the basic "rules" of the simulation and were the same for all group members. Position-specific knowledge corresponds to unique information known only to a subset of group members (usually one). 
With regard to the most appropriate function relating individual task knowledge and the knowledge possessed by the group as a whole, again an analysis of the task seems to suggest that a score assigned to the group should weight equally the information of each member in an additive fashion (i.e., an average). All members possess important information that could be used to improve the group product if shared with other members and, at least by the intention of the design, no member possessed information that could be considered more crucial than any other group member’s information. As a result, a score characteristic of the group was assigned on the basis of the arithmetic average of the four individual members, but this is again acknowledged to be somewhat arbitrary. A measure of general task knowledge and four measures of position-specific knowledge were created, and each group member received a paper-and-pencil test of the general knowledge items as well as the appropriate position-specific items during the Individual Preparation phase of the experiment (see below). Appendix M contains the general measure of task knowledge given to all positions, as well as the four position- specific measures. For each group, task knowledge was measured by summing the four position-specific knowledge scores for the four individuals in the group. Sex/gender ratio refers to the relative number of men and women within a group. This variable was included in the study on the basis of past research indicating 90 performance differences in small groups as a function of the sex of group members. In particular, Wood (1985) conducted a meta-analysis examining studies of small group performance in the context of single-sex groups. Across all studies, Wood found a tendency for male groups to perform somewhat better than female groups but also identified task type as a potential moderator of the sex composition-performance relationship. More specifically, Wood’s analysis suggested that male groups perform better on agentic tasks (i.e., those involved with giving opinions and recommendations) while female groups do better on communal tasks (i.e., those involved with fostering friendship and agreement). Given the clearly agentic nature of the present task, it seems reasonable to expect that the sex composition of the groups will affect group performance. However, given that this finding is based on research utilizing single-sex groups and few or no process measures of interest here, it is not clear how mixed-sex composition should affect group process or performance. As a result, no specific hypotheses regarding sex composition seem appropriate. Sex composition was operationalized as the number of men in each group of four individuals (0-4). Although it would have been possible to generate a sex/ gender composition score in some fashion other than this, parsimony suggests the use of simpler representations until the utility of more-complex representations has been established. As such, although again somewhat arbitrary, the decision was made to represent sex/ gender composition as a simple cormt variable. Rater Training Overview. Between the conclusion of pilot testing in the fall of 1995 and the beginning of actual data collection for this study in spring 1996, a number of research assistants were recruited and trained as observers of group discussion. 
Initially, seven research assistants were trained to observe videotapes of group discussion and either code instances of information sharing or provide ratings on one of the three judgmental 91 measures (i.e., intragroup conflict, controversy, and process facilitation). Each research assistant was trained to observe only one construct (i.e., intragroup conflict or information sharing or controversy). As a result, there was no possibility of method bias in the relationships among process facilitation, controversy, intragroup conflict and information sharing that would have resulted if one rater had provided multiple measures for each group. To begin with, one research assistant was assigned to code information sharing and two were assigned to each of the judgmental measures. It was anticipated that the one research assistant would provide all the ratings for information sharing, while the two raters assigned to each of the judgmental measures would rate half of the groups, plus five groups rated by the other rater. This was done in order to provide a sample of 10 groups redtmdantly coded for each of the three judgmental measures which would allow for an estimate of interrater reliability. Gron Training. The training procedure used for all four measures was as follows. The seven research assistants were gathered for a preliminary meeting where they received an overview of the task, construct definitions, and a copy of the respective rating form (or checklist) they would be using. Focusing on each construct in turn, the experimenter went over the definitions and behavioral examples of each sub—dimension. After this, a period of time was devoted to the issue of what defines an "instance" of some dimension. In essence, an "instance" was defined as one of two types of activity matching the definition of a component dimension: (1) a discrete action or utterance on the part of an individual or (2) a sequence of verbal interaction with an identifiable beginning and end. After this, raters were given instructions for how to go about making their ratings (see next section for details). Finally, raters viewed several segments of videotape from pilot groups in order to practice identifying behaviors that corresponded to the various dimensions of each construct. At this point, formal training ended, but the 92 experimenter met periodically with each rater in order to gauge drift from the standards set during training. For practical considerations, raters were allowed to watch the videotapes of group discussion and complete their ratings (or fill out the information sharing checklist) using a VCR at their home. _R_ater igstructions, In the case of Information Sharing, coders were instructed to listen carefully to each group's dialogue and record all quantitative information represented in the simulation spoken by any of the group members by circling that piece of information on the Information Sharing checklist. Coders were instructed to record questionable instances of information sharing on the last page of the check-list, as well as any item of shared information which they considered to be meaningful that was not represented in the checklist. In practice, this portion of the checklist was rarely used. For the three judgmental measures, raters were instructed to watch the entire tape of group discussion and record each "instance" of conflict they observed using tic marks in the margin to the right of each dimension rating. 
Raters were told to use the total number of tie marks tallied for each dimension as a guide in making their ratings for each dimension. Because it was expected that there would be a fair amount of disagreement at the "micro" level of behavior, it was stressed that raters should make their ratings for each dimension on the basis of their overall impressions and use the tic marks for guidance. As such, no explicit quantitative guidelines were given beyond those implied in the anchors for each scale. Again, the ratings provided on each dimension were summed to yield an overall conflict score for each group. Due to unexpected turnover in research assistants over the course of actual data collection and the slow, uneven pace of coding, three additional raters were trained to observe and rate groups on intragroup conflict and one additional rater was trained to observe and rate process facilitation. As a result, five research assistants ended up making ratings of intragroup conflict, three research assistants made ratings of process 93 facilitation and two research assistants rated controversy. "Second-wave" research assistants were trained in the same manner as the original research assistants with two exceptions: (1) the initial training session was conducted individually for each new assistant and (2) instead of watching behavioral examples as a group, new assistants were given several "practice" videotapes consisting of pilot groups rtm during the fall. Each new research assistant then met individually with the experimenter to discuss the practice ratings and receive feedback. RESULTS Overview of Results Section The results of the analyses performed for this study are divided into several parts. First, I discuss the cell sizes for the various conditions of the study and the loss of groups due to attrition. Second, in the section entitled "Quality of Manipulations and Measures," there is a discussion of pilot testing, the reliability of measured variables used in the study, several analyses best viewed as manipulation checks and an examination of the effects of several potential nuisance variables. In the following section, the results of analyses related to the efi‘ects of various group processes on other group processes (Hypotheses 1-5) are reported. In the fourth section, the effects of the three inquiry method conditions on group performance (Hypotheses 6-8) are considered. Finally, in the fifth section, the results of several exploratory analyses are reported. Gropp Attrition In total, the CS condition was run 19 times, the TDI condition 20 times, and the SDI condition 21 times. However, one or more process-related measures was lost for six groups (e.g., videotape failure during group discussion), and eight groups failed to submit a valid business plan according to the rules of the simulation. In addition to these 14 groups, one group was removed from the analysis because its score on Information Sharing was four standard deviations above the mean for all groups. Table 5 provides details on the groups that failed to provide acceptable data on one or more study 94 95 measures. Unfortunately, these two categories were mutually exclusive, resulting in the maximum possible loss of 15 groups for correlations with strategic planning quality. On the plus side, the groups lost were distributed fairly evenly across experimental conditions. 
As a result, there were 53 groups with complete data for the analyses which did not involve group performance (1 9, l7, 17 groups for CS, TDI and SDI conditions, respectively) and 45 groups with complete process and outcome data available for use in the regression analyses involving group performance (16, 15, 14 groups for CS, TDI and SDI conditions, respectively). Table 5. 96 Cell Sizes for Study Conditions and Statistical Analyses. Consensus Traditional DI Synthesis DI Total Sessions Run 19 20 21 Groups missing one or more 0 2al 4b process measures Groups which failed to submit 3 2 3 a valid business plan Groups removed as outliers 0 1° 0 Final N for analyses NOT 19 17 17 involving Group Performance Final N for analyses involving 16 15 14 Group Performance Notes ‘One group was inadvertently left out of the controversy recoding; another group was unable to be recoded for controversy due to tape breakage bOne group was inadvertently left out of the controversy recoding; another group failed to complete the post-experimental measure containing the conflict scale; process measures were rmavailable for two additional groups due to technical failures during videotaping. °One group was four standard deviations above the mean on Information Sharing 97 Qu_alitv of Mamulptions and Measures Pilot Testing. Pilot testing for this study was conducted in the spring of 1995 and then again in the fall of the same year. Six groups were run in the spring with the experimenter present taking notes during group discussion. Another 12 groups were run in the fall, with revision and development after the first block of six had been completed. The final six pilot groups were videotaped using standardardized experimental procedures. Using these last six groups, an initial estimate of inter-rater reliability was calculated for information sharing, intragroup conflict, controversy and process facilitation using this sample of six groups. Because only two research assistants were available during pilot testing, it was necessary to train these two individuals to make ratings for each of the four measures. Note that in the actual study, each rater was trained to rate (and only rated) one measure. Observers were instructed to keep the constructs distinct in their minds, avoid "halo" as much as possible, and review the tape as needed after watching it once. Each observer watched the first 15 minutes of each group discussion and made ratings for each of the four measures noted above. Then, for each of the six groups, total scores generated by Rater 1 on the four measures were correlated with the total scores generated by Rater 2. The four reliability estimates generated in this fashion for the six pilot groups varied widely in magnitude. Consistent with earlier work by Stasser suggesting that information sharing could be reliably measured from audio tapes, the interrater reliability correlation for information sharing was extremely high (50, = .97). For intragroup conflict and controversy, interrater correlations were a little lower but still quite acceptable (rxx = .85 for controversy; 3;,“ = .75 for intragroup conflict). On the other hand, the estimate of interrater reliability was calculated for process facilitation using a sample of six groups that were observed and rated by two independent raters. The resulting interrater correlation was rm = -.17. 
As a result, rater instructions and definitions of the process 98 facilitation dimensions were revised and more behavioral indicators were generated before this rating instrument was used in the actual study. RLligbility of Measured Variables. For the actual study, reliability estimates for the intragroup conflict, controversy and process facilitation measures were calculated again using a difl‘erent procedure than in the pilot study. The reliability of the information sharing index was not re-estimated given its nonj udgmental nature and the extremely high estimate of interrater reliability obtained during pilot testing. Also, as the group performance score was derived from computational formulas involving only basic addition and multiplication, the reliability of this variable was not estimated and assumed to be near 1.0. As noted earlier, measures of intragroup conflict, controversy and process facilitation for the actual study were obtained by having a single trained observer watch each videotape of group discussion and provide ratings on the particular measure for which he/ she was trained (see Method for details of training). As a result, each videotape of group discussion was observed by four difl‘erent individuals, each trained to provide ratings on one specific measure. To estimate the interrater reliability for intragroup conflict, controversy and process facilitation, a sample of videotapes was rated a second time by a second rater trained to rate that particular dimension. Interrater reliability estimates were obtained for the three judgmental measures by calculating the Pearson correlation for the groups coded by two independent raters. Table 6 presents information related to the calculation of inter-rater reliability correlations for intragroup conflict, controversy and process facilitation. As can be seen in the table, approximately 20% of the groups were rodlmdantly coded by two raters on each of the three measures. 99 Table 6. Interrater Reliability Estimates for Process Measures. Observer N Raters Initial Final Rating rxx rn** Measure ___ Intragroup 6 2 & 3 .84 ?? Conflict 10 4 & 5 .10 Controversy 12 7 & 8 .23 .56‘ Process 12 9 & 1o .47 .84" Facilitation ** After post-hoc scale modification 'Reliability estimate for revised measure of controversy using revised scale and sample of each group's discussion. l’Reliability estimate after two dimensions were dropped 100 In general, initial estimates for all three constructs were rather low. Based on a sample of 12 groups redundantly coded by the two raters who rated the construct, the interrater reliability correlation for controversy ratings was .23. Similarly, the interrater correlation of total scores generated by Raters 9 and 10 for process facilitation was .47 based on a second sample of 12 groups. Because of the large number of raters providing judgments of intragroup conflict, two estimates were generated using four of the five raters who made conflict ratings (the fifth rater only rated a few groups). The correlation between Rater 2 and Rater 3 was very high (no, = .84) based on a sample of six groups rated by both, but the correlation between Rater 4 and Rater 5 was extremely low (In = .10) with a second, larger sample of 10 groups redtmdantly rated by these two individuals. 
Thus, despite evidence during pilot testing that trained raters could produce measures of intragroup conflict, controversy and process facilitation with adequate inter- rater reliability, estimates for all three measm‘es obtained during the actual study were tmacceptably low. In view of the low estimates of inter-rater reliability for intragroup conflict, controversy and process facilitation, an effort was made to identify reliable indices of each construct using a subset of the dimensions making up each construct. For I all three constructs, using the samples involved in the initial estimate of reliability, new scores were calculated for every possible combination of dimensions by summing the unweighted dimension ratings involved in the composite. For example, with intragroup conflict, new composite scores were formed by summing ratings on dimensions 1 and 2, dimensions 1 and 3, and dimensions 2 and 3, as well as treating each of the dimensions as a separate composite. After these new composite scores had been generated, interrater reliability estimates were generated for each composite for each measure by correlating the total scores of the new composites across the raters who provided the initial estimates. Using this approach, it was possible to create an index of process facilitation using only the clarifying and focusing/structuring dimensions with an acceptable level of 101 interrater reliability (rxx = .84). Of the two dimensions omitted from this new process facilitation index, one was dropped because it never occurred (paraphrasing/ summarizing) while the second was removed due to a negative inter-rater correlation among the two raters (integrating). It was not possible to form a more reliable index of either controversy or conflict using a total score calculated from a reduced set of dimension ratings. Given the presence of the supplementary data for intragroup conflict, further effort aimed at improving the measurement reliability of the judgmental variables was focused on controversy. As no reliable sub-composite could be identified using a subset of the component dimensions of controversy, the videotapes of group discussion were recoded using a modified ratings format and a sample of each group's entire interaction. The procedure devised involved single raters watching three five-minute samples of interaction semi-randomly selected from each 75 minute tape (i.e., 20% of each group's discussion phase). Four research assistants were involved in scoring the groups on this revised measure, with one individual having provided original controversy ratings, two individuals having provided intragroup conflict ratings, and one person having been involved in rating process facilitation. The revised controversy measure is shown in Appendix L. The four research assistants available to recode controversy met with the primary researcher for two training sessions at which the definitions of controversy and its component dimensions were reviewed. Also during these sessions, eight group videotapes were scored by all group members based on watching three five minute segments of each group's discussion. After each group was observed, raters discussed differences in their ratings and agreed on consensus ratings after collective discussion. At the completion of the second session, the remaining unviewed groups were divided among the four assistants and subsequent ratings were completed at the rater's homes. 
102 An estimate of inter-rater reliability was generated by examining the matrix of correlations based on the eight groups rated by three of the four assistants during training (one assistant was not able to rate all groups during training). The median correlation among the three pairs of raters was .56. In addition to this, after all groups had been rated on the revised measure, the fourth assistant (left out of the above matrix) re-rated five groups completed by one of the other three assistants chosen at random. The correlation of total scores across the five groups for these two raters was .59. On the whole then, the revised measure of observer ratings of controversy appears to have improved measmement reliability to some extent but not to the point of traditionally-accepted levels of measurement quality (i.e., .70 or better). In addition to the observer ratings, data on group-level conflict were collected via the post-experimental questionnaire as well except for one group which did not complete this questionnaire by mistake. After two items were dropped, the eight-item questionnaire measure of conflict produced an internal consistency reliability of Jo, = .78. An AN OVA performed on the individual ratings of conflict was conducted with "group" serving as the independent variable. The resulting test statistic [E (58, 164) = 3.10, p < .00; MSW = 14.40, eta2 = .52] indicates that group membership had a significant effect on individual scores and accounted for over half of the variance in individual scores. This provides some support for the view that averaged individual conflict scores obtained from the questionnaire measure can be meaningquy assigned to the group. As a result, conflict scores were generated for each group by summing individual scores on the eight remaining items and averaging the four individual scores within each group. However, the weak correlation between the observer ratings of conflict and the averaged individual questionnaire scores (5,, = .18) raises doubts as to whether these two measures are tapping the same construct, although the poor reliability of the observer ratings is likely to have severely attenuated this correlation. More will be said about this later. 103 In sum, despite pilot data which suggested the adequacy of the rating scales used in this study, the reliability estimates for each of the three process measures obtained via observer rating indicated room for considerable improvement with regard to the reliability of the ratings measures. Although a reliable sub-composite of dimensions was constructed for process facilitation, it was necessary to obtain a second measure of controversy after revising the rating form and using only a sample of each group's discussion. As it is possible to question the use of either measure of controversy or conflict, multiple versions of each analysis were performed using each potential measure when analyses involve either controversy or conflict (i.e., ratings and questionnaire data for conflict; original and recoded ratings for controversy). Manipulation Checks. The inquiry methods employed in this study were intended to afiect a number of group processes during the experimental task. In particular, the two conditions involving dialectical inquiry instructions were expected to produce more controversy than consensus instructions, and the synthesis DI condition was expected to yield more facilitative behavior than either of the other two inquiry methods. 
In actuality, the omnibus F tests for the effects of inquiry method on both controversy [I_~‘ (2, 54) = .45, p > .05] and process facilitation [E (2, 54) = .25, p > .05] failed to reach statistical significance. However, the efl‘ect of inquiry method on controversy was marginally significant when assessed using the recoded controversy scores (E (2, 52) = 2.78, p < .10; _M_S_.m = 4.84; eta2 = .10). Planned comparisons among the inquiry methods (Mes = 3.92, Mm! = 3.83, M5151: 2.39) showed that the synthesis DI condition resulted in less controversy than both the consensus condition (t (52) = 2.12, p < .05) and the traditional DI condition (1(52) = 1.97, p = .05). The mean diflerence between the consensus and traditional DI conditions was not significant. An examination of the three questionnaire items pertaining to the quality with which group members firlfilled their roles does little to explain the ineffectiveness of the 104 inquiry method interventions. These three items, shown in Appendix N, asked individuals to rate on a 5-point Likert scale (1 = Strongly Disagree, 5 = Strongly Agree) the extent to which they agreed with statements indicating: (1) they had effectively carried out their personal assigned role, (2) they were conscious of that role during group discussion, and (3) other group members had fulfilled their roles. Across all participants, the means for these three items were 3.90, 2.31 and 4.22 (respectively), suggesting that participants generally thought they had done what was asked of them and perceived their fellow group members having done so as well. Further, participants generally reported being conscious of their assigned roles during discussion (M = 2.31 for Item 2). Therefore, in general, participants seemed to think they did a good job following instructions. With respect to the dialectical inquiry method conditions of particular interest here, an examination of the first item corresponding to the respondent’s own role performance has the potential to be more informative. However, consistent with the previous data pertaining to all individuals, the two members assigned specific roles in the TDI condition and all four members assigned roles in the SDI conditions generally indicated that they thought they had done a good job. With regard to the TDI condition, the mean response to the first item was 4.00 for the roles of both Vice President of Flight Operations and Vice President of Industry Analysis. In the SDI condition, mean responses across the four positions were 3.75 for Flight Operations, 4.20 for Industry Analysis, 3.80 for Marketing and 4.05 for Finance. As a result, these data do not suggest that the inquiry methods were implemented poorly. Further, no group was removed from the analysis on the basis of the second manipulation check, overall rater judgment. Raters were instructed to provide a dichotomous, “yes-no” judgment as to whether groups should be removed for being grossly incompetent or failing to take the task seriously. Given the already-insufficient 105 statistical power, raters were instructed to be conservative in recommending groups for removal. Although no groups were subsequently recommended for removal, 3 number of groups were found to be of bordrerline usefulness and might have been removed if data had been available on a larger number of groups. However, on the whole, raters reported that groups generally did know what they were doing and took their task seriously. 
Overall, the inquiry method conditions appear to have been relatively inefl'ective with regard to inducing controversy and stimulating facilitation in decision making groups. Even further, the data suggest that the consensus-seeking approach resulted in more controversy than the synthesis DI condition (but not more than the traditional DI condition). The inquiry methods apparently did not impact process facilitation. Nuisance Variables. In addition, a number of one-way AN OVAs were performed as a check for the influence of several extraneous variables on the process and performance variables of interest. The potential nuisance variables examined were experimenter (five individuals), day of the week (every day except Saturday), time at which the study was conducted (day, afternoon, or night), and type of participant group (introductory psychology v. non-introductory psychology). Table 7 provides a summary of the MANOVA conducted to test for the presence multivariate main effects stemming from these four potential sources. As can be seen in the table, using Pillai's test statistic, which tends to have the most power, no significant main effects were found for any of the potential nuisance variables. SW Table 8 displays the means, standard deviations, intercorrelations and reliability estimates for measures of the variables used to test study hypotheses. As will be recalled, there was a fair amount of attrition in groups resulting from one or more pieces of missing data (see Table 5). For all correlations except those involving group performance, sample size is 53 groups. Correlations in the last column of the table involving group performance are based on 45 groups. Table 7. 106 Multivariate Analysis of Potential Confound Variables. Variable dfmum df mom Pillai's Approx. Sig. Value F Ratio WEEKDAY 5 60 2.15 1.30 .19 TIMEOFDAY 2 18 1.11 1.59 .18 EXPERIMENTER 4 44 1.21 .68 .86 SUBJECT LEVEL 1 8 .56 1.44 .31 Note: Multivariate statistic reported is Pillai's trace. Table 8. Means, Standard Deviations, Intercorrelations and Reliabilities for Measured Variables. 107 ABIL KNOW CONTl CONT2 FACL CONFl CONFZ Mean 95.40 9.77 3.91 3.46 3.54 2.17 15.03 SD 11.18 3.18 1.13 2.26 1.31 1.53 3.42 R,“ — - .23 .55 .85 .10/.84 .78 ABIL .12 —.02 -.12 -.02 -.25 .15 KNOW .11 .06 .24 .06 -.08 CONTl .30 .19 .19 .06 CONT2 .12 .30 .34 FACL .14 .11 CONFl .18 CONFZ INFO #MEN PERF Note: N = 53 for all correlations except those involving PERF (N = 45) Maugham: ABIL = Sum of Cognitive Ability scores in group (Wonderlic Personnel Test) KNOW = Sum of individual Role Knowledge Scores in group CONT] = Observer rating of group-level Controversy CONT2 = Recoded observer rating of group-level Controversy FACL = Observer rating of group-level Facilitation CONFl = Observer rating of group-level Conflict CONFZ = Averaged group member questionnaire responses for Conflict INFO = Sum of unique pieces of information shared by all members #Men = Number of males in the group PERF = Group Performance (net value) Table 8 (cont’d.). Means, Standard Deviations, Intercorrelations and Reliabilities for Measured Variables 108 INFO #MEN PERF Mean 35.62 1.53 121.12 SD 13.66 1.08 50.00 Ru .... _ .. 
ABIL .23 - 03 34 KNOW .l l .05 .34 CONT] .13 .22 .03 CONT2 .01 .08 -.02 FACL .19 -.01 .37 CONFI .04 -.01 -.1 l CONF2 -.07 -.01 .06 INFO .17 .05 #MEN .09 PERF Note: N = 53 for all correlations except those involving PERF (N = 45) Abbreviations: ABIL = Sum of Cognitive Ability scores in group (Wonderlic Personnel Test) KNOW = Sum of individual Role Knowledge Scores in group CONT] = Observer rating of group-level Controversy CONT2 = Recoded observer rating of group-level Controversy FACL = Observer rating of group-level Facilitation CONFl = Observer rating of group-level Conflict CONF2 = Averaged group member questionnaire responses for Conflict INFO = Sum of unique pieces of information shared by all members #Men = Number of males in the group PERF = Group Performance (net value) 109 Process Hypotheses Information Sharing x Intragoup Conflict --> Group Performance. Hypothesis 1, predicting an interaction effect between unique information sharing and intragroup conflict on group performance, was tested with a hierarchical moderated multiple regression analysis. In order to examine the incremental contribution of process over and above knowledge and ability in predicting group performance, the average cognitive ability and task knowledge scores for each group were first entered into the equation at Step 1. Following this, unique information sharing and intragroup conflict were entered at Step 2 followed by the interaction term (Intragroup Conflict x Information Sharing) at Step 3. Support for Hypothesis 1 would be obtained by noting a significant increment in r2 at Step 3 when the product term is added to the equation after its additive components have already been entered at Step 2. Because of the two sources of conflict data available, this analysis was conducted twice, once with the ratings data and once with the questionnaire data. Table 9 summarizes the results of the regression analyses associated with Hypothesis 1. 110 Table 9. Hierarchical Moderated Regression Results for Hypothesis 1. Using observer ratings of conflict: Step Vamblets) Added r2 change A 1. Ave. Cognitive Ability .24 .00 Ave. Role Knowledge 2. Information Sharing .00 (.02) .96 (.60) Intragroup Conflict 3. Information Sharing x .04 (.05) .14 (.13) Intragroup Conflict Using averaged individual perceptions of conflict: Stet; Variable! 8) Added r2 change 9 1. Ave. Cognitive Ability .22 .00 Ave. Role Knowledge 2. Information Sharing .01 (.01) .85 (.81) Intragroup Conflict 3. Information Sharing x .00 (.00) .95 (.79) Intragroup Conflict Notes: N = 49 groups Values in parentheses apply when control variables are not entered on Step 1 1 1 1 As can be seen in the table, group cognitive ability and task knowledge account for a good share of the variance in group performance while information sharing and conflict do not add anything to the predictive accuracy of the equation. Using observer ratings of group conflict, there is a .04 increment in 1'2 when the interaction term is added to the equation at Step 3 (t (43) = -l .50, p > .05). When the questionnaire data regarding conflict are used instead, there is clearly no interaction effect (r2 change = .00). When these two analyses are conducted without entering the cognitive ability and task knowledge, the change in r'2 accompanying the interaction term increases to .05 for the observer ratings of conflict (1 (45) = -1.53, p > .05) but remains at .00 for the questionnaire data. 
Although more will be said about this interaction in the Discussion, there is no support for Hypothesis 1 predicting that a primary determinant of group performance is the interaction between information sharing and conflict. Controversy --> Information Sharing, Intragroup Conflict. Hypotheses 2 and 3 predicted that controversy would be positively related to both information sharing (Hypothesis 2) and intragroup conflict (Hypothesis 3). Using the original ratings of controversy (CONTl in Table 8), the group-level correlations with unique information sharing (r (53) = .13, p > .05) and intragroup conflict (CONFl) as measured by observer rating (r (53) = .19, p > .05) were in the expected direction but not large enough to reach statistical significance. When the questionnaire data were used as the measure of conflict (CONF2), the corresponding correlation between controversy and conflict was also non- significant (r (53) = .06, p > .05). On the other hand, the correlations between controversy and both unique information sharing and controversy and intragroup conflict changed somewhat when the recoded controversy scores (CONT2) were used instead of the original observer ratings (CONTl ). With the more reliable CONT2 measure, the observed correlation between controversy and information sharing decreased somewhat (r (53) = .01, p > .05), but the 1 12 correlation between controversy and observer ratings of conflict increased to the point of statistical significance (r (5 3) = .30, p < .05). Similarly, the correlation between the recoded controversy scores and questionnaire-based conflict measure also attains statistical significance (r (53) = .34, p < .05). Overall, there was support for Hypothesis 2 regarding the positive relationship between controversy on group conflict when the more reliable recoded controversy scores were used, but there was no support for Hypothesis 3 predicting that higher levels of controversy within groups are associated with higher levels of unique information sharing. Process Famation -> Information Sharing, IntragQup Conflict. Hypotheses 4 and 5 predicted a negative relationship between process facilitation and intragroup conflict (Hypothesis 4) and a positive relationship between process facilitation and information sharing (Hypothesis 5). As with controversy, process facilitation (F ACL in Table 8) was not significantly correlated with unique information sharing (INFO), but the observed correlation was in the expected direction (a (53) = .19, p > .05) . On the other hand, the observed correlation between process facilitation and intragroup conflict was in the opposite direction from that predicted (although non-significant) for both the observer ratings measure of conflict (1(53) = .14, p > .05) and the aggregated group member questionnaire responses (r (53) = .11, p > .05). An examination of the correlations between the two component dimensions of facilitation and the two overall measures of conflict revealed that the two dimensions both had zero or positive relationships with conflict, but each was more strongly related to a difi‘erent measure of intragroup conflict. 
Average group scores on the questionnaire- based index of conflict (CONF2) were more strongly correlated with ratings on the "clarifying" dimension of process facilitation (r (56) = .14, p > .05) than with ratings on the "focusing/strumming" dimension (; (57) = .04, p > .05), while observer ratings of 1 13 conflict (CONFl) were more strongly correlated with the "focusing/structuring" dimension (g (57) = .20, p > .05) than with the "clarifying" dimension (; (56) = .00, p > .05). Although none of these correlations is significant and the difference in pattern may simply be the result of sampling error, it may be that the two measures of conflict are tapping somewhat different constructs. In sum, there is not much support for the notion that process facilitation impacted the degree of conflict and information sharing within these groups. Although the obtained correlation between facilitation and information sharing was positive, it was not large enough to be statistically significant. On the other hand, the positive correlation between facilitation and conflict was not predicted but also not large enough to reach significance. An examination of the relationship between the two component dimensions of process facilitation (clarifying and focusing/structuring) and the two measures of conflict (CONFl and CONF2) did not find either dimension of process facilitation to be negatively related to conflict, but did uncover a somewhat different pattern of relationships between the two dimensions of process facilitation and the two measures of conflict, suggesting that observer ratings of conflict and the aggregated group member perceptions of conflict did not measure the same thing. Ingg'g Method Hymtheses Hypotheses 6 and 7 predicted a relationship between inquiry method (CS, TDI, SDI) and two process variables (information sharing and intragroup conflict), while Hypothesis 8 concerned the impact of inquiry method on group performance. These three hypotheses were tested using one-way AN OVAs with inquiry method as the independent variable and information sharing, conflict and group performance (respectively) as the dependent variables with group as the level of analysis. Table 10 provides summaries of these three AN OVAs and Table. 11 reports the means and standard deviations for the three inquiry method conditions across the three dependent variables. 1 14 As shown in Table 10, F-tests for the effects of inquiry method on all three dependent variables were not significant. For information sharing [E (2, 54) = .40, p > .05] and group performance [E (2, 48) = .84, p > .05], the value of the F statistic was below 1.00. The F -test for the AN OVA on conflict was marginally significant when observer ratings (CONFl) were used [E (2, 54) = 2.40, p < .10], but not with the questionnaire-based conflict scores (CONF2), g (2, 55) = .39, p > .05. Based on the marginally significant F-test for the conflict scores, the planned comparisons implied in Hypothesis 7 were carried out. Using observer ratings of intragroup conflict (CONFl), as predicted, the mean level of conflict in the synthesis DI condition (M = 1.50) was lower than mean for the traditional DI condition (M = 2.58) , t (54) = 2.71, p < .05). The other two planned comparisons were not significant. As a result, there is no support for Hypotheses 6 or 8 regarding the superiority of synthesis DI over traditional DI and consensus approaches with regard to information sharing or group planning quality. 
However, there is partial support for Hypothesis 7 and the prediction that synthesis version of D1 would result in less conflict than traditional DI when observer rating of intragroup conflict are used as the dependent variable (as opposed to aggregated group member perceptions of intragroup conflict). Table 10. ANOVA Summaries, Hypotheses 6-8. 115 DV drum, dam, F ratio Sig. Eta2 — _—_ Group 2 48 .84 .44 .03 Performance Information 2 54 .40 .67 .0] Sharing Conflict 54 2.40 .10 .08 Conflict 55 .39 .68 .01 (questionnaire) Table 11. Cell Means and Standard Deviations, Hypotheses 6-8. I DV Consensus Traditional DI Synthesis DI I Group Performance 131.32 114.26 L 109.54 (62.67) (42.64) (46.14) Information Sharing 33.26 37.21 34.89 (14.27) (10.88) (15.39) Conflict (ratings) 2.08 2.58 1.50 (1.22) (1.93) (1.31) Conflict 14.89 15.58 14.62 (Que-“mm!” (3.75) (3.37) (3.23) m: Cell sizes for Conflict and Information Sharing: 19 groups Cell sizes for Group Performance: 16 (CS), 17 (TDI), 18 (SDI) 116 Surmnag Although the inquiry method interventions appear to have been weak, there is some support for several study hypotheses. The synthesis DI condition does appear to have resulted in significantly less conflict than the traditional DI condition as predicted, but did not yield more facilitation than either of the other two conditions. Further, the consensus condition resulted in more controversy than the synthesis DI condition, but not more than the traditional DI condition. With regard to the efieas of process variables on other process variables, generally weak and nonsignificant effects were found that tended to be in the predicted direction. In particular, the recoded controversy scores (CONT2) were found to be significantly and positively correlated with the both measures of group conflict. Finally, the predicted interaction of information sharing and conflict was not significant but did produce a .04 change in r2 when entered into the regression involving group performance after controlling for group-level cognitive ability and task knowledge. Figure 4 displays estimates of the path coefficients (beta weights) for the combined process model pictured in Figure 2. These estimates were obtained from three separate single-step regressions, one for each of the dependent variables in the model (i.e., conflict, information sharing and group performance). Because of the improved reliability, the recoded measure of controversy was used in this analysis. As can be seen in Figure 4, the only statistically significant path is from controversy (recoded) to conflict (observer ratings), but most coeficients are in the direction predicted by the model. Of particular note, the path from controversy to conflict does not change substantially regardless of the conflict measure used (.32 for ratings, .33 for questionnaire data). Again, the interaction effect predicted in Hypothesis 1 is strong enough to warrant consideration, but is not significant at the .05 level. The prospects for this model will be a central theme in the next section. 117 E52 388m 2: com 3383mm 5mm .v 2&3 mo. samba/8:80 C. oocmgofiom wccgm @386 A > :BEESE: #0. H NH afio . mm 85:00 meccwebfi A S. sessions 388m 1 18 Exploratory Analyses Given the findings by Stasser suggesting that biased information sampling is an extreme problem in decision making groups, it is interesting to consider the extent to which groups identified the unique informational resources of their members in this complex, ill-structured setting. 
Unfortunately, given that a different task was used, it was not possible in this study to compare the degree of bias present in the information sampled at the group level with previous work. At a purely descriptive level, across all conditions, groups shared on average 35.62 unique information cues out of the 510 1mique cues available to members - only 7%. On the other hand, during discussion groups shared an average of 39.79 cues known beforehand by all members out of a total of 190 common information cues - 21%. Although the absolute values of these percentages are rather meaningless given the ease with which they can easily be manipulated by task demands, the relative difference in rates is striking. Basically, on average, groups were three times more likely to mention an information cue known to all members before discussion than an information cue known only to one member (or in a few cases, two members). This is remarkably similar to the 2.5:] ratio observed in earlier work by Stasser et a1. (1989). In light of this, a follow-up analysis was conducted to examine whether the inquiry methods used in this study had any efl‘ect on the relative amounts of sharing for unique and common information cues. To this end, a new variable was derived for each group by dividing the number common information cues mentioned by the number of unique information cues shared. This ratio, which represents the number of common information cues mentioned for every unique piece of information shared, was then used as the dependent variable in an AN OVA with inquiry method serving as the independent variable. The resulting F ratio was marginally significant [F (2, 53) = 2.52, p < .10; MSW = .28, eta2 = .09], but no two conditions were significantly different using Tukey's l 19 honestly-significant difference test for post-hoe comparisons (M = .96, 1.15, 1.35 for CS, TDI and SDI conditions, respectively). A final exploratory analysis was conducted to examine the degree to which groups were able to uncover the "hidden" nature of the optimal strategy. The task was constructed so that, although several difl‘erent strategies could be employed to increase profit, the one which would yield the most profit (according to expected values for route revenue using the most likely level of competition) was the expansion strategy. As noted earlier, the only group member that had any information relevant to expansion was the Vice-President of Industry Analysis. One possible indicator of the extent to which groups followed an expansion strategy is in terms of the number of new routes added to flight operations. To examine the possibility that the DI methods may have led groups to adopt the expansion strategy more than the CS method, an AN OVA was conducted with the number of new routes added to the flight plan as the dependent variable and inquiry method as the independent variable. The resulting F test was non-significant [E (2, 57) = 1.40, p > .05], suggesting that the dialectical inquiry condition was not more successful than the consensus condition in leading groups to "tmcover" the optimal strategy. I will have more to say about the role of biased information sampling in complex, ill-structured decision making tasks in the Discussion. 
DISCUSSION Study Contributiona The primary purpose of this study was to examine the determinants of group decision making in a complex, ill-structured decision making task with particular attention to the incremental contribution of process-related factors over and above group input variables such as cognitive ability and task knowledge. Beyond this, the effects of several possible group decision making structures (i.e., inquiry methods) were also assessed with regard to their expected impact on group process and performance. A third purpose of this study was to examine the phenomenon of biased information sampling in a complex, ill-structured environment where group members represented "experts" from difl‘erent areas of an organization. Overall, there was little support for the proposed model. More precisely, only two hypotheses involved statistically significant effects «the relationship between controversy and intragroup conflict (Hypothesis 3), and the effect of inquiry method on intragroup conflict (Hypothesis 7). However, given the low statistical power in this study, it is important to note that the obtained pattern of effects was generally consistent with the predictions of the model. The results of this study, qualified though they must be, suggest that group process variables may have some incremental validity over and above the powerful input factors of ability and task knowledge. Of particular interest was the interaction of information sharing and intragroup conflict. This interaction continued to account for 4% of the variance in group performance even after controlling for group- level cognitive ability and task knowledge. In addition, the measure of process 120 12 1 facilitation unexpectedly had the strongest relationship with group performance ~- stronger even than cognitive ability and task knowledge. These findings provide some support for the notion that process variables such as intragroup conflict, information sharing and process facilitation may explain some of the variance in group performance that group-level cognitive ability and task knowledge cannot. In the remainder of this section, I discuss a number of issues related to evaluating and refining the model proposed in the introduction and the use of structural manipulations (i.e., "inquiry methods") to improve group performance. I also comment on the issue of biased information sampling in complex, ill-structured decision making environments and conclude by identifying a number of areas that need further research attention as a model of group decision making in ill-structured contexts is progressively identified. Study Limitations Given theoretical basis for the hypotheses in this study, an important issue to address is the general lack of support for the model. In this section, I first discuss several measurement issues that constrain conclusions about the model, then discuss the overall model with these issues in mind. Boundary Conditions. As noted earlier, due to the nature of the task employed in this study, the current findings are limited to groups in situations involving non-routine decisions, face-to-face interaction, and large amounts of task-relevant information distributed across group members. Although some organizational decision making groups certainly operate in these conditions, others clearly do not. Given these task characteristics, the results of this study are most applicable to temporary, ad-hoc, heterogeneous decision making groups dealing with complex, ill-structured, strategic problems. 
In addition, the use of undergraduate students in the present study further limits generalization to real-world strategic decision making groups. Undergraduate students in psychology differ in many non-trivial ways from experienced business managers and executives. For example, undergraduates are likely to have had less experience making decisions in groups and less opportunity to have developed a sense of their own competence in such situations. Differences such as these do not necessarily mean that the findings in this study would not be found in other samples. However, they do require that caution be used when generalizing results to other populations. Clearly, the effects found in this study need to be replicated in the field with actual managers and existing work groups.

Measurement Issues. There are several measurement issues which necessitate cautious interpretation of study results: (1) measurement reliability, (2) range restriction, (3) construct validity, and (4) low statistical power. To begin with, poor measurement reliability proved to be a troubling and intractable problem for all three judgmental measures of group process used in this study (i.e., process facilitation, controversy and intragroup conflict). Despite piloting and training, initial reliability estimates for observer ratings of conflict, controversy and facilitation were all below .50. Subsequent efforts to improve the reliability of these variables were only partially successful. The effect of poor measurement reliability is to attenuate the observed correlation between two measures, with the decrement in the magnitude of the observed correlation multiplicatively worsened when both measures have low reliability. As a result, correlations involving the two observer ratings of conflict and controversy are likely to be underestimated. Although it is certainly possible to correct observed correlations for attenuation due to measurement error, the appropriateness of this procedure is heavily dependent on the accuracy of the reliability estimates. A number of concerns call into question the accuracy of the reliability estimates generated in this study, including the small sample sizes available for the interrater reliability correlations and the different estimates for observer ratings of conflict based on two independent pairs of raters. Ideally, in situations involving interrater reliability, a matrix of interrater reliability coefficients would be available involving all possible pairs of raters for each measure, and there would be little variance across the set of pairwise estimates, such that the mean (or perhaps median) correlation would provide a stable estimate of the overall reliability for the measure. For practical reasons, it was not possible to generate such a matrix for this study except for the recoded controversy measure (CONT2), and therefore all reliability estimates (except one) are based on only one of several possible pairs of raters for each measure. In the case of the exception, observer ratings of intragroup conflict, the two estimates available differ considerably (.10 versus .84). All things considered, correcting the observed correlations in this study for measurement error may very well be misleading. One factor which may have coincidentally contributed to the low levels of reliability for all three measures based on observer ratings is range restriction on their corresponding constructs.
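Before turning to range restriction, the sketch below makes the attenuation logic and the pairwise-reliability idea concrete. It is a minimal illustration using the classical correction formula; the reliabilities, observed correlation, and rating values are hypothetical placeholders, not estimates from this study.

```python
# Classical correction for attenuation, plus the mean of all pairwise interrater
# correlations for one measure. All numeric values below are hypothetical.
from math import sqrt
from itertools import combinations

import numpy as np

def disattenuate(r_observed, rel_x, rel_y):
    """Classical correction: r_true = r_observed / sqrt(rel_x * rel_y)."""
    return r_observed / sqrt(rel_x * rel_y)

print(disattenuate(r_observed=0.18, rel_x=0.45, rel_y=0.60))  # about 0.35

# Mean pairwise interrater correlation, one column of ratings per rater.
ratings = np.array([[3, 4, 2, 5], [4, 4, 3, 5], [2, 3, 2, 4]]).T  # groups x raters
pairwise = [np.corrcoef(ratings[:, i], ratings[:, j])[0, 1]
            for i, j in combinations(range(ratings.shape[1]), 2)]
print(np.mean(pairwise))
```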
An examination of Table 8 reveals that the means and standard deviations for the three process measures (CONT1, CONF1 and FACL) are quite small in magnitude compared to their respective maximum possible scores of 9.0, 9.0 and 12.0. The notion of range restriction is consistent with the anecdotal impressions of several raters, who noted long periods of silence during some groups' discussion time and a generally low-key tone to many group discussions. After observing the videotape of a particularly quiet group, one rater was moved to inquire, "Did they know they could talk?" Although high levels of process facilitation, intragroup conflict and controversy did occur within some groups during the study, across all groups the distribution of scores on these three measures was positively skewed. This problem was anticipated, and steps were taken to address the issue in the design of the task (e.g., divergent information, role guidelines, monetary rewards, etc.), but it appears that what was done was not sufficient to overcome the strong cultural norms against talking to strangers, appearing disagreeable and/or looking stupid. Although incentives and experimental procedures can offset this to some degree, the root of the problem may lie more with the nature of the people involved (i.e., undergraduate psychology students) than with the design of the task.

One potential design mechanism that might offset the motivation loss inherent in group norms inhibiting disagreement would be the existence of incompatible individual sub-goals in addition to an overall group goal. With respect to the task used in this study, along with being told to create a plan yielding maximum profit for the organization, each Vice President might be given one or more sub-goals related to incorporating certain features into the final group plan considered to be important by the department he or she represents. For example, the Vice President of Flight Operations might be told to see that annual fuel costs are held below a certain amount, or the Vice President of Finance might be assigned the sub-goal of reducing existing personnel levels by 10%. With corresponding incentives for their achievement, individual sub-goals arrayed in a trade-off fashion could act as conflict "lightning rods" by providing tangible issues around which amorphous disagreement could coalesce. Given the seemingly ubiquitous presence of non-aligned goal hierarchies in real-life strategic group decision making, directly incorporating conflicting sub-goals into the design of a research task would increase the realism and generalizability of study findings. As a result, future research should strongly consider employing individual sub-goals along with an overall group goal.

Although the use of multiple measures of the various process constructs was able to alleviate some of the problems associated with low reliability, using these alternative measures may have also created a construct validity problem in that the revised measures may not have tapped the same construct domain as the original measures. This is most pronounced in the case of facilitation, where the final scale used consisted of only two of the four dimensions defined as part of process facilitation. Although respectable reliability was attained using two of the four dimensions, the modified measure of process facilitation is now somewhat deficient compared to the original conceptualization of the construct.
In spite of this deficiency, it is interesting to note that the revised process facilitation measure has a relatively strong relationship with group performance. This relationship might be even stronger with a reliable measure of the broader construct. In the case of conflict, the correlation between the two measures of the construct, although most likely attenuated by the low reliability of the ratings data, is only .18. Further, an examination of the pattern of correlations for the respective conflict measures and other measures reveals marked differences. In several cases, the correlations between another measure and the two measures of conflict (CONF1 and CONF2) are in opposite directions (i.e., cognitive ability, task knowledge, information sharing and group performance). One notable exception to this pattern is the relatively strong and positive correlation both conflict measures have with the recoded controversy measure (CONT2). Still, the low correlation between the two conflict measures and the different pattern of external correlations suggest that the two measures of conflict are not tapping the same construct. Looking at the correlations with other measures, the pattern for the observer ratings of conflict makes the most theoretical sense. Given the post-hoc, self-interested nature of having group members rate the conflict in their own groups, it is tempting to treat the observer ratings of conflict as the better (if less reliable) measure for theoretical reasons. Unfortunately, the issue cannot be definitively resolved in this study and remains a complicating factor in attempting to identify the relationship between intragroup conflict and other group process constructs.

Conversely, the two sets of controversy ratings (CONT1 and CONT2) show more convergence than the two conflict measures. The convergent validity correlation between the two measures of controversy is relatively strong (r = .30) given the low reliability of the initial (CONT1) ratings (.23), and the pattern of correlations with other measures is generally in the same direction. On the other hand, the original ratings of controversy (CONT1) are more strongly related to facilitation and information sharing, while the recoded controversy measure (CONT2) is strongly related to the questionnaire-based measure of group conflict and the original measure is not. Although it seems reasonable to use the recoded controversy scores given the improved reliability and the generally similar pattern of outside correlations, it is likely that the two measures are not tapping exactly the same construct domain.

A final measurement-related problem for this study, one common to many studies involving groups, is the low statistical power associated with the relatively small sample size. Given that all path coefficients but one were in the predicted direction, it is possible that more linkages would have been statistically significant had there been greater statistical power for the analyses. The probability of obtaining significant results with two-tailed tests and alpha = .05 is only 11% when rho = .10 and still just 30% when rho = .20 (Cohen, 1988). If one is searching for relatively small effects associated with complex, overdetermined phenomena (as is likely in group decision making), it would take approximately 200 groups to yield an 80% chance of achieving significance when rho = .20 and almost 800 groups when rho = .10!
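To make the sample-size arithmetic above concrete, the sketch below reproduces the approximate figures using the Fisher z approximation for the power of a test of a correlation. The use of this particular approximation is my own assumption; Cohen's (1988) tables give essentially the same required sample sizes.

```python
# Approximate N needed to detect a population correlation rho with 80% power,
# two-tailed alpha = .05, via the Fisher z approximation. Illustration only.
import numpy as np
from scipy.stats import norm

def n_required(rho, alpha=0.05, power=0.80):
    """Required sample size for detecting correlation rho (Fisher z approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return int(np.ceil(((z_alpha + z_beta) / np.arctanh(rho)) ** 2 + 3))

for rho in (0.10, 0.20):
    print(rho, n_required(rho))   # roughly 783 and 194 groups, respectively
```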
Given the logistical difficulties associated with collecting that much data on groups, there often may be no alternative to conducting studies that are seriously underpowered in some respects.

In sum, there are a number of measurement issues which cloud interpretation of the results obtained in this study. However, the low reliability of a couple of variables, range restriction and low statistical power make this study a conservative "test" of the model. In some respects, given these obstacles, it is noteworthy that any substantial effects were found at all. It is to these effects which we now turn.

The Process Model

Overview. In general, the process model hypothesized in Figure 2 was not supported in this study. The only hypothesized linkage found to be statistically significant was that between controversy and intragroup conflict. The one significant relationship that was not predicted but which is relevant to the model is that between process facilitation and group performance. On the other hand, the pattern of obtained relationships was generally as predicted in the hypotheses. This suggests that, consistent with the previous discussion, a primary reason for the lack of support for the model is low statistical power. In particular, the hypothesized interaction of information sharing and conflict in affecting group performance was not significant in spite of accounting for 5% of the variance in profit earned by the groups in the simulation. Given that the interaction was hypothesized and the change in r-square was large enough to be of substantive interest, we now turn to a closer examination of this interaction.

Interaction of Information Sharing and Conflict. Given the low power associated with this analysis and the predicted nature of the interaction, it is useful to plot the interaction between unique information sharing and intragroup conflict on group performance in order to determine whether the interaction could be interpreted in a fashion consistent with the prediction in Hypothesis 1. Using the regression equation without the control variables entered on Step 1, the interaction is plotted in Figure 5. To generate the lines shown in Figure 5, specific values of information sharing and conflict were selected and inserted into the regression equation.

[Figure 5. Information Sharing x Intragroup Conflict interaction: group performance (net worth in thousands) plotted against information sharing, with separate lines for low conflict and high conflict.]

The graph shows that, when intragroup conflict is low (i.e., -1 SD on intragroup conflict), high levels of information sharing were associated with higher levels of profit in the simulation. However, when there is a high level of conflict in the group (+1 SD on intragroup conflict), a high level of information sharing is associated with lower profit. It is also important to note that the variance accounted for by the product term is essentially independent of the variance accounted for by mean individual cognitive ability and task knowledge. Thus, although the effect did not reach traditional levels of statistical significance, the strong effect size and predicted nature of the interaction suggest that future research should examine this interaction again using a design with more statistical power.
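The plotting procedure described above is a standard simple-slopes display. The sketch below shows one way such a plot can be generated by plugging +/-1 SD values of conflict into an estimated regression equation; the coefficients are hypothetical placeholders rather than the estimates obtained in this study.

```python
# Minimal sketch of a simple-slopes plot like Figure 5: predicted group performance
# across the range of information sharing, at low (-1 SD) and high (+1 SD) conflict.
# Coefficients are made-up values for illustration.
import numpy as np
import matplotlib.pyplot as plt

b0, b_share, b_conf, b_inter = 90.0, 15.0, 5.0, -20.0   # hypothetical estimates
sharing = np.linspace(-2, 2, 50)                        # standardized info sharing

for conflict, label in [(-1.0, "Low Conflict"), (1.0, "High Conflict")]:
    predicted = b0 + b_share * sharing + b_conf * conflict + b_inter * sharing * conflict
    plt.plot(sharing, predicted, label=label)

plt.xlabel("Unique information sharing (z-score)")
plt.ylabel("Predicted group performance")
plt.legend()
plt.show()
```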
Revised Process Model. In general, this study did not provide much support for the proposed model of group decision making in ill-structured task environments. However, as noted, the low reliability of two measures in the model, range restriction and low statistical power provided very conservative conditions under which to test the model. As a result, it is not clear that this model should be dismissed just yet. At the same time, it is useful to identify the factors which were most useful in explaining group performance in this complex, uncertain task. Measurement issues notwithstanding, Figure 6 presents a modified model of group decision making in ill-structured tasks.

[Figure 6. A Revised Model of Group Decision Making in ill-structured tasks, incorporating both input factors (cognitive ability, task knowledge) and process factors (information sharing, conflict, facilitation) as predictors of group performance.]

Overall, the revised model highlights the dual importance of "input" and "process" factors in determining group performance on ill-structured tasks. Whereas the a priori model focused largely on the role of process-related factors and treated cognitive ability and task knowledge as control variables, this model directly incorporates them on the basis of their strong and relatively independent effects on group performance. What may have happened in this study, as may happen in many similar real-life situations, is that input factors such as ability and task-related knowledge manifest themselves independent of group process through the identification of high-quality recommendations and alternatives on the part of individual group members. Clearly, individual recommendations and suggestions can incorporate unique, task-related knowledge without explicitly referring to that knowledge (i.e., sharing unique information). Thus, to some extent, the individual inputs of group members (e.g., cognitive ability and specialized knowledge) may impact group performance without involving group interactive processes. However, beyond a certain point, it is probably the case that increments to group performance are a function of group process (i.e., discussion and interaction). At this point, process-related factors such as facilitation, information sharing and conflict become important. The relative weights for the input factors and process factors may then depend on the exact nature of the particular task.

Dialectical Inquiry in Group Decision Making

In general, the DI methods used in this study do not appear to have had much impact on group process. Although the DI methods were explicitly intended to impact facilitation and controversy, they do not appear to have done so (although synthesis DI did result in less group conflict than traditional DI). The self-report data gathered via questionnaire suggested that group members in the role of plan presenters and debaters thought they were doing their job. Why did the two DI methods not increase controversy, and why did the synthesis DI not increase facilitation? Previous research has found that controversy-inducing methods such as DA and DI result in higher levels of group performance than the consensus-seeking method. Why was this not found in the current study? The explanation may reside in the particular manner in which DI was implemented in this study. First, it should be recalled that the task was explicitly designed so that group members would have to deal with divergent information, incompatible alternatives and necessary trade-offs.
In other words, there was a great deal of latent controversy built into the task, and the two dialectical inquiry methods were designed to bring this controversy to the surface. The two members asked to present and critique plans were given information pointing to the need for radically different changes to the existing plan. Group members were asked to question the other presenter as to the basis for their recommendations and were not supposed to stop until achieving a good understanding of the other person's position. Detailed instructions were provided to the groups concerning the procedure they were to use to implement the dialectical methods, and DI group members received specific and comprehensive instructions regarding how they were to fulfill their respective role assignments.

Within this context, there are several factors that seem likely to have played a role in the failure of the DI methods to produce their expected consequences. First, as alluded to previously, participants on the whole did not know one another before participating in the study and appear to have been reluctant to engage in behaviors that could have been considered confrontational. Second, with regard to stimulating controversy, the particular manner in which both DI techniques were implemented probably lessened their controversial impact. Specifically, the simultaneous creation of two independent plans rather than the iterative creation of a plan and counterplan did not allow presenters/critiquers time to familiarize themselves with the opposing plan (essentially requiring all arguments to be generated on-line). Third, despite efforts to ensure that planners created incompatible plans, the simultaneous ("blind") generation of competing plans may have resulted in perspectives that were not always antithetical. Previous research suggesting that DI improves group performance relative to CS has always used plans that were created in advance and presented intact (assigned) to group members, who then advocated them in the dialectical process. Combined, these factors suggest that the DI methods did not yield more controversy and higher group performance in this study because too much of the dialectical process was assumed to occur rather than forced to occur.

The failure of the synthesis DI condition to produce more process facilitation is more difficult to explain, as it was not predicated on antithetical plans or confrontational interpersonal behavior. Here, social factors appear to be most relevant in explaining the lack of the predicted effect. Enacting the synthesis role required the ability to comprehend and integrate what others were saying, and necessitated the willingness and ability to provide leadership. This may have been beyond the motivation or capabilities of some participants.

In summary, a variety of factors may have combined to reduce the expected effectiveness of the DI inquiry methods in producing controversy, facilitative behavior and, subsequently, group performance. Although generating antithetical plans in advance and assigning them to groups in the DI conditions might have improved the strength of these manipulations, it would have eliminated the opportunity to observe the degree to which the DI methods allowed groups to "uncover" the hidden profile of the correct strategy (i.e., expansion). Did groups using DI uncover the better (i.e., expansion) strategy more often than the CS groups? It is to this question that I now turn.

Biased Information Sampling and Hidden Profiles

Despite the difference in tasks, it is interesting to compare the relative rates of information sharing obtained by Stasser et al.
(1989) with those found in this study. In the earlier study, groups were approximately 2.5 times more likely to mention an information cue known to all members than an information cue known only to one. In the current study, groups were three times more likely to do so. As a result, initial indications are that the bias toward shared information is fairly robust across task types. Further, in keeping with recent research by Stasser (1992) and Stewart and Stasser (1995), which found that "advocacy" and "expert role assignment" manipulations (respectively) were not particularly effective at reducing the bias in favor of common information, the dialectical inquiry methods used in this study did not result in more unique information sharing or a lower ratio of common-to-unique sharing during group discussion. Indeed, consistent with Stasser's (1992) results with the DISCUSS simulation, this study found that the bias in favor of common information may be worsened by conditions that require one or more group members to "advocate" a position (i.e., "plan" versus "counterplan"). As a result, it is still unclear how DA and DI produce their beneficial effects on group performance. One possibility is that these methods cause more alternatives to be generated and considered by the group without inducing group members to share relevant supporting information. Future research might further explore the mechanism by which DA and DI impact group performance by measuring the number and quality of individual recommendations, the number of alternatives considered by groups, and the relative amounts of common and unique information shared during group discussion.

Future Directions

There are a number of issues raised by this study which warrant attention in future research. First, future research should consider using ill-structured, moderate-fidelity business simulations that can be conducted in a relatively short period of time. In the past, research on group decision making has tended to use tasks that are relatively simplistic and unengaging (e.g., the Moon Survival task or simple case studies) or tasks that are extremely complex and intended primarily as teaching tools (e.g., semester-long management simulations). As this study demonstrated, there is a middle ground between the two extremes that can present participants with a challenging, self-contained task environment that is probabilistic, complex and moderately realistic.

Given the low statistical power and the measurement issues present in this study, further research should continue to address the adequacy of the model of group decision making proposed in this study. In particular, with improved measurement reliability and greater statistical power, it should prove interesting to compare the original model from Figure 2 with the revised model presented in Figure 6. With a larger sample size and better measurement reliability, it would be possible to directly compare these two models using structural equation modeling techniques.

Future research might also better address several issues related to levels of analysis. Roberts, Hulin and Rousseau (1978), among others, have identified the need to consider and examine multiple levels of analysis in order to understand the behavior of individuals in organizations.
When organizations are viewed as being composed of hierarchical systems operating at multiple "levels" of social complexity, it becomes both possible and necessary to understand organizational performance as a function of processes occurring at and across different levels of the organization through the use of composition, cross-level and multi-level theories (Rousseau, 1985). Two levels of analysis particularly relevant to understanding the performance of decision making groups are the group and the individual. However, when data are measured at one level and analyzed at another, it becomes necessary to aggregate or disaggregate measurements -- a procedure that can artifactually create or alter functional relationships between focal constructs and other constructs (James, 1982). Because the focal level of interest in this study was the group, constructs were (for the most part) conceptualized, measured and analyzed at the group level in an effort to avoid the potential problems and ambiguities associated with aggregation. One exception to this statement concerns the composition variables used as control variables (i.e., cognitive ability, task knowledge and gender), where scores for each variable were assigned to groups by combining individual attribute values in a somewhat arbitrary fashion. Given the relatively strong relationships between composition, process and outcome variables observed in this study (as well as others), future research would benefit from the development of composition and cross-level theories and the use of more sophisticated analytical techniques in understanding how these individual difference variables operate in group settings.

With respect to improving the reliability of group process measures, there are at least two strategies. One strategy would involve abandoning observer ratings and the videotaping of group discussion in favor of an attempt to develop reliable paper-and-pencil measures that could be completed by the group. While this strategy would involve considerably less work in the long run than developing reliable observer ratings, it brings to the foreground complicated issues related to method bias, assessing intragroup agreement and the appropriateness of using aggregated individual perceptions to represent the group as a whole. A second strategy for improving measurement reliability would involve the continued use of observer ratings along with an extensive analysis of the experimental task in the hope of identifying all possible behaviors that constitute instances of each of the various measures. If all (or even most) potential behaviors could be identified for each process construct within the confines of a particular task, the "rating" process could be reduced to a checklist procedure, with an accompanying shift in focus away from examples and interpretation toward comprehensiveness and recognition. In implementing this strategy, it almost goes without saying that it is necessary to assemble (and retain) committed raters who are given standardized training involving practice, "true score" feedback, refresher sessions and incentives for accurate coding.

Although a direct comparison of the two alternative models of ill-structured group decision making would be helpful, future research should also consider including other group composition variables not examined in this study.
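To illustrate the composition issue raised above, the following sketch shows several common ways of combining individual attribute values into a group-level composition score (mean, minimum, maximum). The member scores are hypothetical placeholders, and this is not a reproduction of the operationalization actually used in this study.

```python
# Minimal sketch of alternative group-level composition operationalizations built
# from individual member attributes. Member scores are hypothetical.
import numpy as np

members_cognitive_ability = {
    "group_01": [27, 31, 24, 29],
    "group_02": [22, 35, 30, 26],
}

for group, scores in members_cognitive_ability.items():
    scores = np.array(scores)
    print(group,
          "mean =", scores.mean(),
          "min =", scores.min(),      # "weakest link" composition
          "max =", scores.max())      # "best member" composition
```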
As is evident from the r-square value generated by the regression analysis associated with Hypothesis 1, a great deal of the variance in group performance was not explained by the variables in the analysis. There is a fair amount of evidence that the gender composition of groups will affect the dynamics of group interaction (Moreland & Levine, 1992). It may also be the case that various combinations of one or more Big 5 personality constructs such as extroversion, agreeableness or neuroticism at the individual level give groups a distinctive personality "profile" which affects the manner in which information is processed in the group.

With regard to interventions in group process designed to improve group performance, the DI manipulations used in this study were not successful. One potential problem that might be addressed in future research would be to modify the experimental procedures used in this study so that "plan" and "counterplan" are created in a serial fashion, with one building on the other, so that both sides have an opportunity to study the other plan before the dialectical process begins. This would negate the need for criticisms and questions to be generated "on-line," and might substantially improve the quality of the dialectical process (as well as increase the chances of obtaining "diametrically opposed" plans). Also, with regard to future research on prescriptive interventions, it may be that DI and DA would be more effective when employed with multiple individuals advocating "plan" and "counterplan." In order to keep the number of participants at a manageable level in this study, it was necessary to limit the assignment of "plan" and "counterplan" to one group member each. It may be that the intellectual stimulation and moral support provided by a partner would allow for a more in-depth debate of plan and counterplan. Future research might address this issue by comparing DI with one-person roles and DI with multi-person roles.

Further, it should be noted that the interventions used in this study constitute only a small sample of the different ways in which controversy can be increased in decision making groups through the use of some intervention. To begin with, to the extent that implementation problems are suggested by the lack of effects for the DI methods used in this study, it may be possible to use a functionally equivalent controversy-inducing technique that is easier to implement on a procedural basis. For instance, Devil's Advocacy involves only one role assignment and does not necessitate the creation of plans or counterplans or a rigid sequence of events during group discussion. Future research might employ DA in situations where participant confusion is expected with the more structured and involved DI process.

Finally, there is a clear need to expand the parameters of the research setting used in identifying and testing models of group decision making in ill-structured environments. The current study was based on undergraduate students with little or no shared history or collective future, who tended to have no managerial experience and who were not given the opportunity to develop any of these characteristics because of the "one-shot" nature of the study. Future research attempting to identify a model of ill-structured group decision making would benefit most from using intact groups of managers who know and interact with one another on a regular basis, in conjunction with longitudinal designs that allow learning and development.
After such a model has been developed, it would be helpful to demonstrate the model's explanatory power using multiple tasks that would rule out the possibility of task-specific findings.

Conclusion

This study identified two types of process loss that may hamper group decision making efforts: a failure to share information among members and a failure to optimally use information that is shared. A model was proposed integrating the process findings in the psychological literature and the prescriptive interventions identified in the management literature. Despite the presence of several factors which made this study a conservative test of the hypotheses generated by the model, the pattern of findings was not inconsistent with the predictions of the model and provided some support for the incremental validity of process-related factors in explaining group performance. On the other hand, the expected advantages of the dialectical inquiry conditions were not found. Future research attempting to identify a model of group decision making in ill-structured contexts would greatly benefit from the longitudinal study of intact, ongoing decision making groups and the replication of findings across various types of ill-structured task environments.

APPENDICES

APPENDIX A

SOUTHEAST AIRLINES, INC.: A BUSINESS SIMULATION IN THE AIRLINE INDUSTRY

I. OVERVIEW

Welcome to SouthEast Airlines, Inc. In this simulation, you and three other individuals play the role of a top management team charged with creating a strategic plan for SouthEast Airlines for the upcoming fiscal year. Each member of your group will be assigned to one of the following positions: (1) Vice President, Flight Operations, (2) Vice President, Finance, (3) Vice President, Marketing, or (4) Vice President, Industry Analysis. In order to develop a strategic plan, your group will be provided with information on last year's operations and what can be expected in the future. You will each receive information that corresponds to your position. You will be asked to familiarize yourself with this information and apply it during group discussion. Like real-world business operations, this simulation is relatively complex and will be confusing at times. However, by the time you are finished, it will make sense. Do the best you can as a group, and remember -- it's just money!

YOUR OBJECT AS A TEAM IS TO IDENTIFY A PLAN THAT WILL EARN THE MOST PROFIT FOR "SOUTHEAST AIRLINES." This plan is the final product of your efforts.

II. SEQUENCE

There are two phases in the creation of a strategic plan:

1. Individual Preparation Phase (60 minutes)
2. Group Planning Phase (75 minutes)

In the Individual Preparation Phase, you will be given a packet of information relevant to your position in the company. Use the 60 minutes to become thoroughly familiar with it. During group discussion, you will not have time to go back and "learn" this material. It will be EXTREMELY HELPFUL to your group if you use the time allowed to prepare yourself well!!

In the Group Planning Phase, your group will be reassembled for the purpose of reaching agreement on a final plan. In a few minutes, you will be given special instructions for how to proceed as a group in this phase. AGAIN, YOUR OBJECT AS A GROUP IS TO COME UP WITH A PLAN THAT RESULTS IN MAXIMUM PROFIT FOR SOUTHEAST AIRLINES!

III. HOW THE SIMULATION WORKS

Your airline is based in Atlanta, GA. There are a number of other cities represented in this simulation.
Your firm generates revenue by providing airline transportation between Atlanta and these other cities, but doing so also incurs various costs. Your team will be trying to come up with a plan that maximizes total revenue and minimizes costs. Profit is determined as follows:

Profit = Total Revenue + Cash - Costs - Debts

Total Revenue = sum of Route Revenues
Route Revenue = (Market Share x Passenger Demand x Fare) per route
Cash = Invested Cash x Interest Rate
Costs = Aviation Fuel + Facilities/Equipment + Flight Staff + Ground Staff + Maintenance + Marketing + Purchases + Loan Repayment + Finance Charges
Debts = Balance of outstanding loans

Setting aside Cash and Debts for the moment, it can be seen from the profit formula that the more Total Revenue you generate, the greater your profit. At the same time, the more Costs you incur, the lower your profit. Thus, you should strive for a plan that generates as much Total Revenue as possible while minimizing Costs.

Maximizing Total Revenue and Minimizing Costs

Figure 1 provides a graphic display of the factors that affect profit on any given route. Arrows in the diagram depict causal relationships between variables. When there is an arrow between two variables, the variable to the left at least partially determines the level/amount of the variable to the right of the arrow. The variables and their relationships are explained in detail in the information provided to your group. In general, the five "little" variables at the left are the factors over which you and your group have the most control. In most cases, you can simply choose the values of these variables (for example, deciding to offer five Daily Flights on a route). Once you have made choices with regard to these five variables, they affect other variables further to the right. Note that Total Revenue is simply the sum of all the Route Revenues, and Route Revenues are greatest when Fare Price is high, Passenger Demand is high, and Market Share is high. Although Fare Price and Passenger Demand are relatively straightforward, Market Share is a complex variable affected by all five of the "little" variables. Further, the five "little" variables affect Costs by determining the number of planes used on a route, the amount of aviation fuel consumed and the number of staff required to operate/service the route. Generally, as you do things to increase Market Share, you also increase Costs to some extent. The goal of the simulation is finding routes where you can: (1) Charge a high Fare Price, (2) Expect high Passenger Demand, (3) Establish a good Market Share and (4) Avoid excessive Costs. In a nutshell, this is how you succeed in "SouthEast."

IV. COMPLETING THE STRATEGIC PLANNING DOCUMENT

A completed Strategic Planning Document is the final product of your group effort. All actions you desire to implement as a group MUST be specified on the Strategic Planning Document. To complete a Strategic Planning Document, you MUST do at least five things:

1). SELECT ROUTES FOR AIRLINE SERVICE, and for each selected route:
2). Decide the # and TYPE of AIRCRAFT to use
3). Decide the # of DAILY FLIGHTS to offer
4). Decide on the PRICE of the FARE/TICKET
5). Decide on the # of FLIGHT STAFF to have on each flight

In addition to these things which you must do, there are a number of other activities which you MAY choose to do if you wish. The information you will be given covers these actions in more detail.
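Before turning to the optional actions, the sketch below walks through the route-revenue and profit arithmetic described above. It is an illustration only; every figure in it is a made-up placeholder rather than a value from the simulation materials.

```python
# Minimal sketch of the profit arithmetic: sum route revenues, add interest on
# invested cash, subtract costs and outstanding debt. All numbers are hypothetical.
def route_revenue(market_share, passenger_demand, fare):
    return market_share * passenger_demand * fare

routes = [
    # (market share, passenger demand, roundtrip fare)
    (0.40, 150_000, 250),
    (0.20, 400_000, 300),
]

total_revenue = sum(route_revenue(*r) for r in routes)
cash_interest = 10_000_000 * 0.05          # invested cash x interest rate
costs = 30_000_000                         # fuel, facilities, staff, etc.
debts = 5_000_000                          # outstanding loan balance

profit = total_revenue + cash_interest - costs - debts
print(f"Total revenue: ${total_revenue:,.0f}; profit: ${profit:,.0f}")
```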
In general, you can indicate your desire to conduct these actions by noting your intentions to do so in the appropriate place on the third page of the Strategic Planning Document. For now, these actions are:

1. Invest Cash: You have $24.3 million in cash and short-term investments. Should you decide to invest some or all of this money, it draws interest at the Investment Interest rate.

2. Market/Advertise: You can spend money on marketing campaigns by advertising SouthEast's service in one or more cities using various media (e.g., TV, radio, newspaper, billboard) if you desire. This increases Market Share on routes that connect with the city where marketing/advertising is being conducted.

3. Purchase Aircraft: You can buy additional aircraft and use these aircraft as you would the ones you already own.

4. Sell Aircraft: You can sell aircraft that you do not intend to use. This provides you with more cash.

5. Pay off loans at an accelerated rate: SouthEast has two outstanding loans. Although there is a certain amount you are required to re-pay next year, you can accelerate repayment (i.e., pay more than the minimum) and reduce your finance charges. This reduces Costs.

STRATEGIC PLANNING DOCUMENT

RULES FOR FILLING OUT THE STRATEGIC PLANNING DOCUMENT:

a). YOU CAN ONLY HAVE ONE TYPE OF PLANE ON A ROUTE.
b). UNLESS OTHERWISE STATED, ONE AIRCRAFT CAN MAKE TWO ROUNDTRIP FLIGHTS PER DAY.
c). FARE PRICES MUST BE WITHIN $100 OF THE AVERAGE FARE FOR THE ROUTE AND MUST BE IN MULTIPLES OF $25 (e.g., $125, $250, $475).
d). ALL ROUTES MUST BE NONSTOP BETWEEN ATLANTA AND SOME OTHER CITY.
e). THE PLANNING DOCUMENT MUST BE SIGNED BY ALL TEAM MEMBERS.

NOTE: FAILURE TO ABIDE BY THESE CONSTRAINTS WILL NULLIFY YOUR PLAN!

All members MUST provide a signature in one of the spaces provided below for the plan to be valid:

(VP, Flight Operations)    (VP, Marketing)    (VP, Finance)    (VP, Industry Analysis)

Group #    Date

Strategic Planning Document

Group #:    Date:

Route: ATLANTA-    Aircraft Type**    Aircraft #**    Daily Flights**    Flight Staff    Fare**
** See restrictions

Strategic Planning Document

Marketing Efforts: Place an "X" where you wish to designate marketing efforts

City    Television    Radio    Newspaper    Billboard

Amount Cash Invested: $

Extra Loan Repayment:    LOAN A: $    LOAN B: $

Aircraft Purchased (Type / #): A-300, DC-9, B-757, L-1011, B-727, DC-8, B-747, DC-10
Aircraft Sold (Type / #): A-300, DC-9, B-757, L-1011, B-727, DC-8, B-747, DC-10

SOUTHEAST AIRLINES YEAR-END REPORT

TO: ALL VICE PRESIDENTS
FROM: BOARD OF DIRECTORS

Note from the BOARD: In the 1994-1995 fiscal year, SouthEast Airlines generated a profit of $11.78 million. This is considerably lower than the average profit over the last five years. We are somewhat concerned with this "slide," and hope that your efforts might reverse this trend. You should find the information contained in this report useful for your meeting. We want you to consider all possibilities. As you are aware, there really wasn't a formal strategic planning process last year, and that hurt us. THE PURPOSE OF YOUR MEETING SHOULD BE TO GENERATE A PLAN THAT WILL BRING IN MORE REVENUE THAN LAST YEAR'S OPERATIONS!!

SOUTHEAST YEAR-END REPORT

Overview

SouthEast Airlines, Inc. is headquartered in Atlanta with primary operations centered at the Atlanta International Airport.
Currently, SouthEast operates 39 aircraft and provides services to 11 other cities in the United States. At the end of the last fiscal year, we employed 569 personnel classified as flight staff and 720 personnel classified as ground staff. Histogy SouthEast Airlines was founded in 1952 as a small regional airline intended to provide air transportation to passengers traveling in the Deep South. Initially, SouthEast served four cities - Dallas, New Orleans, Miami, and Nashville. SouthEast grew during the 1960s and expanded service to four more cities in the southeastern United States (Raleigh-Dmham, Tampa, Louisville, Memphis). In the 1970s, service was extended to a number of cities outside the Deep South (Chicago, New York, Los Angeles). The company entered a period of financial difficulty in the early 19805 due to the national economic recession and federal deregulation of the airline industry. In 1982, SouthEast Airlines lost money for the first time in its history, and did so again in 1983 and 1984. The situation has tinned around since 1984, with annual profits ranging from a low of $28.3 million in 1985 to a high of $139.1 million in 1992. However, during the last fiscal year, profits fell to their lowest level since 1984—1985 (approximately $12 million). There are a variety of reasons for this decrease, but further decline is not acceptable. 149 APPENDIX A SOUTHEAST YEAR-END REPORT Table 1. Flight Operations Dag Route Aircraft # Aircraft Flight Daily Type Assigned Staff Flights Memphis B-757 3 6 5 Louisville DC-9 3 6 6 Nashville DC-9 3 6 6 Raleigh-Durham A-300 4 4 7 Dallas-Ft. Worth DC-8 4 10 7 New Orleans B-757 3 7 5 Tampa L-lOl l 3 8 6 Miami B-727 5 8 9 Chicago B-747 4 10 7 New York B-727 4 9 8 Los Angeles D101] 3 12 6 Totals/Ave. - 39 - 72 Table 1 Notes: Aircraft Type refers to the type of aircraft used on the route. Our policy is to use only one type of aircraft on a particular route so customers know what to expect. # Aircraft Assigned is the number of aircraft employed on the route. FAA regulations prevent planes from making more than two roundtrip flights per day, necessitating a 1:2 ratio of aircraft to flights on all routes. Flight Stafi' represents the number of flight attendants assigned to work EACH flight. Daily Flights represents the number of flights offered on a given route per day.. Table 2. Revenue Information 150 APPENDIX A SOUTHEAST YEAR-END REPORT Route Previous Ave. Our Pass. Market Revenue Competition Fare Fare Demand Share (millions) Memphis Low $250 $200 200,000 43% 17.20 Louisville Low $175 $250 125,000 35% 10.94 Nashville Moderate $200 $225 175,000 30% 11.81 Raleigh- Moderate $175 $125 200,000 35% 8.75 Durham Dallas- Heavy $275 $325 500,000 24% 39.00 Ft. Worth New Orleans Heavy $250 $300 200,000 8% 4.80 Tampa Heavy $225 $250 225,000 16% 9.00 Miami V. Heavy $250 $300 525,000 8% 12.60 Chicago V. Heavy $300 $350 600,000 16% 33.60 New York V. Heavy $300 $300 550,000 15% 24.75 Los Angeles V. Heavy $400 $425 $75,000 8% 19.55 Totals/Ave. - $254.55 $277.27 352,273 22% 192.00 1% Previous Competition is an indication of how many competing airlines also offer flight services on the route in question. Ave. Fare is the price of the “average” fare offered by our competitors for the route. Our Fare represents the price we charged for a roundtrip ticket last year. Passenger Demand represents the number of people traveling roundtrip between Atlanta and the various other cities over the course of the last fiscal year. 
Market Share represents the percentage of the Passenger Demand that used Southeast. Revenue is how much money we generated on the route last year (in millions). SOUTHEAST YEAR-END REPORT Balance Sheet: Liquid Assets: $24.3 million in CASH and short-term investments 151 APPENDIX A W: $23.5 million in outstanding loans Last Year's Passenger Revenue: $ 192.00 million Last Year's Cost: $ 180.22 million Last Year’s Operating Profit: $ 11.78 million Table 3. Cost Brea_kdown Type Cost °/o of Total Aviation Fuel 65.91 M 36.6 Facilities/Equipment 48.00 M 26.6 Ground Staff 28.80 M 16.0 Flight Staff 25.61 M 14.2 Maintenance 5.85 M 3.2 Loan Repayment 2.75 M 1.5 Finance Charges 2.30 M 1.3 Advertising/Marketing 1.00 M 0.6 Total 180.22 M 100.0 Note: Cost is in millions of $ 152 APPENDIX A SOUTHEAST YEAR-END REPORT Table 4. Profit Information by Route (in millions) Route Route Route Profit Profit Revenue Cost (millions) Ratio Memphis $17.20 $10.36 $6.84 .66 Louisville $10.94 $11.76 - $ .82 -.07 Nashville $11.81 $10.97 $ .84 .08 Raleigh-Durham $ 8.75 $10.86 - $2.11 -. 19 Dallas-Ft. Worth $39.00 $17.35 $21.65 1.25 New Orleans $ 4.80 $10.99 - $6.19 -.56 Tampa $ 9.00 $14.75 - $5.75 -.39 Miami $12.60 $20.10 - $7.50 -.37 Chicago $33.60 $19.77 $13.83 .70 New York $24.75 $20.55 $4.20 .20 Los Angeles $19.55 $32.69 - $13.14 -.40 Totals/Ave. $192.00 $180.22 $11.78 .08 Table 4 Notes: Route Revenue is the total $ income generated by the route last year (in millions). Route cost is the total $ cost to operate the route last year (in millions). Profit is (Route Revenue - Route Costs) Profit Ratio is simply Profit/Route Costs and represents a standardized measure of Rettun On Investment (ROI). Note that a negative profit ratio indicates the loss of money on a route, 0.00 is the “break even” point, and a ratio of 1.00 would be equivalent to earning twice as much revenue on a route as it cost to operate (i.e., 100% R01). 153 APPENDIX A MEMO OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO TO: Vice President, Flight Operations FROM: Flight Operation Staff This memo is in response to your inquiry about route efficiency. As you know, a number of our routes lost money last year, and a primary reason was because of poor decisions concerning the allocation of aircraft to routes, the number of daily flights to offer, and the number of flight staff to put on each flight. We have prepared an analysis of costs associated with flight operations on orn- various routes, and provided some conclusions and recommendations for you to consider. Since fuel costs are such a big percentage of our total costs, we thought you might want to know how we calculated fuel costs for the routes. We did it using this formula: Route Fuel Costs = (Cost/Flight * Daily Flights * 365 days) where Cost/Flight = (Cost/Mile * Roundtrip Distance) For example, the aircraft flying the Memphis route have a Cost/Mile of $1.50 and the roundtrip distance is 600 miles. We multiplied these two values together to get Cost/F light ($900), and then multiplied $900 Cost/F light by 5 Flights/Day and 365 Days to get the Annual Fuel Cost for the Route. This information is presented for each route in the following table. 154 APPENDIX A Summary: Annual Fuel Costs by Route Route Cost/ Rd. Trp. 
Cost/ Daily Annual Mile Distance Flight Flights Fuel Cost Memphis $1.50 600 $900 5 $1.643 M Louisville $1.80 600 $1080 6 $2.365 M Nashville $1.80 400 $720 6 $1.577 M Raleigh-Durham $1.00 500 $500 7 $1.278 M Dallas-Fort Worth $2.30 1000 $2300 7 $5.877 M New Orleans $1.50 800 $1200 5 $2.190 M Tampa $2.75 800 $2200 6 $4.818 M Miami $2.10 1100 $2310 9 $7.588 M Chicago $2.50 1300 $3250 7 $8.304 M New York $2.10 1400 $2940 8 $8.585 M Los Angeles $2.75 3600 $9900 6 $21.681 M Totals/Ave. $1.97 1100 $2445 72 $65.91 M Nagaag Cost/Mile measures the cost in dollars for an aircraft to fly one mile. Each aircraft has a Cost/Mile rating from $1.00 - $2.75. This value is multiplied by the distance in miles to determine the cost of one aircraft making a single, one-way flight on a given route. Rd. T rp. Distance is the roundtrip distance in miles from Atlanta to the various cities. Cost/Flight is simply (Cost/Mile * Rd. Trp. Distance). Daily Flights is the number of daily roundtrip flights on the route. Annual Fuel Costs is the total amount in millions that it cost to ptu'chase aviation fuel for planes operating on the route over the course of the last fiscal year. 155 APPENDIX A Summary: Aircraft Characteristics & Cost Information Type Accom. Flight Staff Cost/Mile Purchase Class Lim. Cost (mil). A-300 E 4 $1.00 $1.0 M DC-9 D 8 $1.80 $2.0 M B-757 D 7 $1.50 $2.5 M L-lOll C 12 $2.75 $3.0 M B-727 C 9 $2.10 $4.5 M DC-8 B 11 $2.30 $6.0 M B-747 B 15 $2.50 $6.0 M DC-10 A 12 $2.40 $8.0 M 15913 Aircraft Type is the formal FAA designation for the aircraft used on the route. Accommodations Class refers to the ergonomic, user-friendly aspects of an aircraft's design. There are 5 classes of Aircraft Accommodations, ranging from "E" (poor) to "A" (excellent). Flight Stafl' Limit refers to the maximum number of Flight Staff that can effectively serve aboard the aircraft. Each flight requires a separate crew. Cost/Mile represents the cost in dollars for flying the plane one air-mile. Pro'chase Cost refers to the price associated with the pch of one new aircraft of the type indicated (IN MILLIONS OF DOLLARS). 156 APPENDIX A Market Share Modifiers We have prepared a few tables to show how changing these values can influence our Market Share on a given route. To use these tables, figure out which column to use by estimating the Competition and the level of the modifying variable (e.g.., Daily Flights). Then, look down the various rows and compare the various options in terms of their effect on Market Share. In general, lots of daily flights, excellent accommodations and lots of flight staff translate into big market shares. Modifier #1: Convenience (Daily Fligata) COMPETITION Daily Low- Heavy- Flights Moderate Very Heavy 1 -12% -25% 2 -10% -l6% 3 -8% -l 1% 4 -5% -7% 5 -2% -4% 6 0% -2% 7 +2% 0% 8 +4% 0% 9 +6% +1% 10 +8% +3% 1 1 +9% +5% 12 +10% +7% 13 +1 1% +8 14 +1 1% +9% Note: Values in table are ADJUSTMENTS to Market Share 157 APPENDIX A Modifier #2: Comfort (Accommoaaaions) Roundtrip Distance Factors <1500 miles 1501+ miles Competition Low or Heavy or Low or Heavy or Accommodations Mod. V. Heavy Mod. V. 
Heavy "E" Class -2% -10% -20% -3 5% "D" Class 0% -5% -10% -20% "C" Class 0% 0% -1% -7% "B" Class +2% +5% +7% +5% "A" Class +3% +7% +12% +9% Note: Values in table are ADJUSTMENTS to Market Share 158 APPENDIX A Modifier #3: In-Flight Service (Flight Staff) To estimate the modification to Market Share for the number of Flight Staff, we use a different rule of thumb: Take the number of Flight Staff on the route and subtract the value for the Average Flight Staff for competitors. Then, multiply this number by one or two, depending on the competition (1= Low or Moderate Competition, 2 = Heavy or Very Heavy Competition). The resulting number is the adjustment to existing market share. Here is the formula: (Flight Staff- Average Flight Staff) * 1 or 2“ ”Low OR Moderate Competition = 1 ”Heavy OR Very Heavy Competition = 2 "Flight Staff" refers to the number of Flight Staff you decide to put offer on a given route. "Average Flight Staff" refers to what your competition on the route is offering. Example: You want to know what the effect on Market Share would be if you use put 13 Flight Attendants on each flight on a route where the Average Flight Staff value = 12 and where there is Very Heavy Competition (Multiplier = 2). The modification would be: (13-12) * 2 = 2. Thus, if you were to use 13 Flight Attendants for flights on this route, there would be a “+2” modification to our Market Share. 159 APPENDIX A Analysis and Conclusions. 1. There is a trade-off between fuel-efficiency and passenger comfort. We need fuel- efficient planes on the longer routes, but these aircraft need to have good accommodations. Some of our longer routes definitely need new planes! 2. We can drastically affect Market Share by altering the number of Daily Flights, type of aircraft (i.e., accommodations) and the number of Flight Staff assigned to a route - possibly increasing Market Share by up to 40%! 3. Many of the routes where we are losing money could be made more profitable by simply adding a few more flights, getting better aircraft (including more fuel efficient ones on the long routes) and/or adding a few Flight Staff to each flight. Recommendations. 1. Juggle existing aircraft assignments so as to minimize fuel costs and maximize market share bonuses for good Accommodations. 2. Sell some of the old fuel-inefficient aircraft and pruchase new ones that have (1) decent fuel-efl‘iciency and (2) good Accommodations. 3. Adjust Daily Flights and Flight Staffs to maximize market share bonuses for Convenience and In-F light Service. 160 APPENDIX A MEMO TO: Vice President, Finance FROM: Finance Department Staff In keeping with your request to put together some information for your upcoming planning meeting with the other Vice Presidents, we have compiled the following material which we hope you will find useful. We begin with a description of the various kinds of costs we incur in our operations, then provide a table displaying our unit costs in each area last year and projected costs for next year. We conclude with some options for reducing costs in next year’s plan. Description of Costs Aviation Fuel. Costs associated with the purchase of aviation fuel for our aircraft. Facilities Costs associated with renting hangars, offices, storage space, equipment, etc., on every route we service. We currently have 12 domestic facilities (counting Atlanta). Flight Staff. Costs associated with the employment of flying crews and flight attendants. 
fle total number of Flight Staff on agiven route is equal to Flight Staff "' Daily Flights (each flight requires a separate crew). Last year, we employed 579 F light Staff. Ground Staff. Costs associated with the employment of all non-flight personnel. It works out that we need 10 gzound staff per fligm, so the total number of Ground Staff can be determined by multiplying Total Daily Flights by 10. Last year, we employed 720 Ground Staff. Maintenance. Costs associated with routine maintenance, inspection, and repair of our aircraft fleet. Marketing. Costs associated with advertising campaigns in one or more cities using one or more different media. Purchase. Costs associated with the purchase of new aircraft. Loan Repayment. Payments made on the balance of the principal for SouthEast’s two long- term loans. Finance Charges. Interest paid on the outstanding balance of SouthEast’s two long-term loans. 161 APPENDIX A Previous and Projected Costs Item Last Year Projected Next Yr. Aviation Fuel —- -— Ground Staff personnel @ $40,000 $42,000 Flight Staff personnel @ $45,000 $49,000 Maintain Domestic Facility @ $4,000,000 $4,250,000 Start-up Domestic Facility @ -- $5,000,000 Start-up Foreign Facility @ — $7,000,000 Aircraft Maintenance @ $150,000 $160,000 Marketing —— ..... Loan Repayment (A) $1,500,000 $1,500,000 Loan Repayment (B) $1,250,000 $1,250,000 Finance Charges (A) $633,750 $536,250 Finance Charges (B) $1,663,750 $1,526,250 Loan LD. Outstanding Interest Rate Principal 001-91 (A) 9,000,000 6.5% 002-94 (B) 14,500,000 11.0% We are required by our loan agreements to pay the minimum listed in the table on the previous page (minimum = “projected”). However, we can save money by paying off these loans faster. To do this, all we need to do is indicate how much extra we want to pay on the second page of the Strategic Planning Document. Finance Charges are based on the average monthly balance, much like a credit card. The more you pay on the outstanding principal, the less the Finance Charges will be. 162 APPENDIX A Conclusions 1. The bulk of our costs come from aviation fuel, leasing facilities/equipment and paying for staff (both ground and flight). These are primary areas for cutting costs. 2. It takes 10 Ground Staff for each and every flight we offer. Cutting back on Daily Flights where possible could result in big savings. 3. Each flight utilizes its own separate flight staff. Therefore, adding one flight staff to each flight on a route can result in adding up to 14 Flight Staff personnel - depending on the number of Daily Flights. Thus, for routes with many flights, adding Flight Staff can get very expensive! Recommendations. 1. Eliminate inefficient routes — cutting routes which lost money will automatically result in additional profit. 2. Buy planes that are more fuel efficient. 3. Pay off existing loans at an accelerated rate and reduce finance charges. 163 APPENDIX A MEMO TO: Vice President, Marketing FROM: Marketing Staff re: Fare prices and advertising information We just got the analysis back from the big customer satisfaction survey we did last year. As you know, the basic formula we use to calculate revenue on each route is as follows: Passenger Revenue = Passenger Demand * Market Share * Fare Simply put, the money we make on each of our routes is equal to the number of people who fly SouthEast multiplied by the price of their fare. 
Conclusions
1. The bulk of our costs come from aviation fuel, leasing facilities/equipment and paying for staff (both ground and flight). These are primary areas for cutting costs.
2. It takes 10 Ground Staff for each and every flight we offer. Cutting back on Daily Flights where possible could result in big savings.
3. Each flight utilizes its own separate flight staff. Therefore, adding one flight staff to each flight on a route can result in adding up to 14 Flight Staff personnel - depending on the number of Daily Flights. Thus, for routes with many flights, adding Flight Staff can get very expensive!

Recommendations.
1. Eliminate inefficient routes - cutting routes which lost money will automatically result in additional profit.
2. Buy planes that are more fuel efficient.
3. Pay off existing loans at an accelerated rate and reduce finance charges.

MEMO
TO: Vice President, Marketing
FROM: Marketing Staff
RE: Fare prices and advertising information

We just got the analysis back from the big customer satisfaction survey we did last year. As you know, the basic formula we use to calculate revenue on each route is as follows:

Passenger Revenue = Passenger Demand * Market Share * Fare

Simply put, the money we make on each of our routes is equal to the number of people who fly SouthEast multiplied by the price of their fare. For a given route, the number of people who fly SouthEast is equal to the total number of people traveling (Passenger Demand) multiplied by our Market Share for the route. We make the most money when we offer services between big cities, when we have high Market Share and when our fares are high. Unfortunately, the inverse relationship between Price and Demand results in low Market Share when prices are high, and vice versa. Therefore, picking the optimal price is a bit tricky.

The data from our survey indicate that the two primary factors affecting how Fare Price affects Market Share are: (1) Deviation from Average Fare and (2) the Level of Competition on the route. Below, we provide a very important table for determining Market Share. With the table, we can figure out what the "optimum" price of our fares is for each route.

Relationship between Fare Price and Market Share

To use the table:
1. Choose a possible price
2. Determine this price's deviation from the route's Average Fare
3. Determine the level of Competition on the route
4. Cross-index the row and column to see what Market Share would be

                               Competition
Deviation from Average Fare    Low    Moderate    Heavy    Very Heavy
$100 under ave.                53%    44%         35%      26%
$75 under ave.                 48%    39%         31%      25%
$50 under ave.                 44%    35%         28%      24%
$25 under ave.                 41%    32%         25%      21%
Same as Ave.                   40%    30%         22%      17%
$25 over ave.                  39%    28%         19%      13%
$50 over ave.                  37%    24%         16%      10%
$75 over ave.                  34%    20%         12%       9%
$100 over ave.                 30%    15%          9%       8%

Comments: The "Same as Ave." row in the table reflects our Market Share if we set the price of our Fare equal to the average price of our competitors. The optimum price for any particular route depends on the level of competition and the value of the average fare. This can be determined by calculating "expected value." To calculate an "expected value," multiply the price of a potential fare (e.g., $150 or $400) by the market share it would have (e.g., .22 or .45). The higher the expected value, the more money we will make if we use that price. To compare several price options, do the same for each potential price and compare their expected values. The fare with the highest expected value is the optimum choice for pricing.
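A short Python sketch of the expected-value comparison just described. The route (Average Fare of $250, Moderate Competition) and the candidate fares are my own example, not the memo's; the market shares come from the table above:

candidate_shares = {
    225: 0.32,   # $25 under the $250 average -> 32% (Moderate Competition column)
    250: 0.30,   # same as average            -> 30%
    275: 0.28,   # $25 over average           -> 28%
    300: 0.24,   # $50 over average           -> 24%
}
expected_values = {fare: round(fare * share, 2) for fare, share in candidate_shares.items()}
best_fare = max(expected_values, key=expected_values.get)
print(expected_values)   # {225: 72.0, 250: 75.0, 275: 77.0, 300: 72.0}
print(best_fare)         # 275, the highest expected value among these candidates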
Marketing Modifiers

Market share on the various routes can be increased by advertising in the media. Basically, by advertising SouthEast's services in the various cities we serve, we can increase our market share over and above what it would be based simply on the price of the fare. Last year, we only marketed in Atlanta, and we confined our advertising to outdoor billboards. The following table shows the modification to Market Share when advertising using various media in cities on our routes:

Media        Cost/City     Effect/Route (Low/Moderate Competition)    Effect/Route (Heavy/V. Heavy Competition)
Television   $5,000,000    +3-4%                                      +2-5%
Radio        $2,000,000    +2-3%                                      +2-3%
Newspaper    $1,000,000    +2%                                        +1-3%
Billboard    $500,000      +1%                                        +1-2%
NONE         0             0%                                         -3%

Notes: The cost paid to advertise in a city affects all routes into/out of the city. For instance, if we advertise in Atlanta, we pay ONE cost but get the advertising bonus for ALL routes. If we advertise in cities other than Atlanta, the bonus will apply only to the route connecting Atlanta and the city in question. However, we can market in Atlanta as well as other cities and the effect is cumulative. The result of using more than one medium is NOT the sum of the individual bonuses. The combined adjustment will be less than the sum of the individual bonuses (approx. 75%) because of redundancy in media coverage of the population.

Analysis and Conclusions:
1. We should raise our fares when facing Low or Moderate Competition and lower our fares when Competition is Heavy or Very Heavy.
2. The value of a market share percentage point depends on the Passenger Demand for the route - an extra point on busy routes means a lot more than an extra point on a secondary route.
3. Some of our prices are really out of whack! Simply by finding the optimum price, we could increase Market Share on some routes by 15%!
4. Relative to their cost, marketing efforts are probably worthwhile - especially in Atlanta. By using all four media, we can increase our Market Share on each route by 5-6%!
5. If we don't market, we will get hit hard on routes with "Heavy" or "Very Heavy" competition.

Recommendations:
1. Revise existing price structure to maximize expected value.
2. Heavily advertise in Atlanta using multiple media.
3. Consider advertising in a few key metropolitan areas like Chicago or LA.

MEMO
TO: Vice President, Industry Analysis
FROM: Industry Analysis Staff

After scouring the information in the federal publications, we've finally got the information you asked for: Estimates for Passenger Demand and Competition for SouthEast's existing routes and a number of prospective routes. We lay out this information below, then provide an analysis and some recommendations for you to consider. As you know, the basic formula we use to calculate revenue for each route is as follows:

Passenger Revenue = Market Share * Passenger Demand * Fare

We don't have the complete analysis on Price-Market Share relationships, but we do know what Market Share we can expect if we adopt the Average Route Fare as our own:

Low Competition: 40%
Moderate Competition: 30%
Heavy Competition: 22%
Very Heavy Competition: 17%

Using the formula above and plugging in these average figures, it should be possible to determine some new routes that bring in more money than some of the existing routes.

A NOTE ON COMPETITION: It is extremely difficult to estimate what sort of competition we will face on routes next year. The number of competitors could change, fare wars could break out, etc. As a result, we have estimated the probability of facing each level of competition. For some routes, we can be pretty certain what the competition will be like. For others, it's anyone's guess.
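One way to combine these estimates, shown here as a hedged Python sketch rather than anything the staff prescribes: weight the Market Share expected at each level of competition by the probability of facing that level, then plug the result into the revenue formula. The figures come from the Seattle row of the Potential Routes table below, assume we charge the route's average fare, use the midpoint of the demand range (my choice), and ignore the convenience, comfort, staffing and marketing modifiers:

share_at_average_fare = {"Low": 0.40, "Moderate": 0.30, "Heavy": 0.22, "Very Heavy": 0.17}
seattle_competition   = {"Low": 0.50, "Moderate": 0.30, "Heavy": 0.15, "Very Heavy": 0.05}

expected_share = sum(p * share_at_average_fare[level] for level, p in seattle_competition.items())
expected_revenue = 325_000 * expected_share * 450    # demand midpoint * expected share * average fare

print(round(expected_share, 4))    # 0.3315
print(round(expected_revenue))     # 48481875, i.e. roughly $48.5 M at the average fare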
Existing Routes: ATLANTA-

Route              Demand         Rd. Trp. Distance    Competition Likelihood    Ave. Fare    Ave. Flight Staff
Memphis            195-205,000    600                  70-15-15-0                $250         6
Louisville         100-150,000    600                  75-15-10-0                $175         6
Nashville          170-210,000    400                  60-30-10-0                $200         5
Raleigh-Durham     200-225,000    500                  50-35-15-0                $175         5
Dallas-Ft. Worth   425-475,000    1,000                5-20-60-15                $275         9
New Orleans        190-220,000    800                  15-35-35-15               $250         7
Tampa              215-245,000    800                  5-30-50-15                $225         9
Miami              500-530,000    1,100                0-15-65-30                $250         10
Chicago            605-625,000    1,300                0-5-15-80                 $300         10
New York           560-580,000    1,400                0-10-15-75                $300         11
Los Angeles        585-610,000    3,600                0-10-20-70                $400         12

Note: "Demand" = Expected Passenger Demand for upcoming year IN THOUSANDS
Note: "Competition Likelihood" = % chance that competition on the route in the upcoming year will be low-moderate-heavy-very heavy, respectively.

Potential Routes: ATLANTA-

Route             Demand         Rd. Trp. Distance    Competition Likelihood    Ave. Fare    Ave. Flight Staff
Madrid            85-90,000      8,000                70-15-15-0                $875         12
Paris             75-125,000     9,000                55-20-20-5                $900         11
London            150-200,000    9,000                50-30-10-10               $850         12
Sao Paulo-Rio     80-100,000     12,000               85-15-0-0                 $1200        10
Mexico City       110-175,000    2,800                40-30-20-10               $500         11
Cancun            90-120,000     3,000                55-30-10-5                $500         13
Virgin Islands    85-145,000     4,000                45-35-15-5                $550         11

Note: "Demand" = Expected Passenger Demand for upcoming year IN THOUSANDS
Note: "Competition Likelihood" = % chance that competition on the route in the upcoming year will be low-moderate-heavy-very heavy, respectively.

Potential Routes: ATLANTA-

Route                   Demand         Rd. Trp. Distance    Competition Likelihood    Ave. Fare    Ave. Flight Staff
Seattle                 305-350,000    4,500                50-30-15-5                450          12
Minneapolis-St. Paul    255-295,000    1,800                65-30-5-0                 325          11
Cincinnati              265-295,000    700                  60-20-15-5                250          7
Indianapolis            245-260,000    800                  75-20-5-0                 225          9
San Francisco           395-470,000    4,200                40-20-20-20               425          11
Denver                  420-500,000    2,300                20-40-30-10               375          12
St. Louis               260-280,000    900                  25-60-10-5                250          6
Buffalo                 270-295,000    1,400                35-30-30-5                300          8
Kansas City             245-255,000    1,300                30-55-15-0                300          9
Pittsburgh              385-450,000    1,000                10-25-55-10               275          8
Philadelphia            335-365,000    1,200                15-25-50-10               300          7
Phoenix                 395-490,000    3,100                10-65-20-5                375          13
Boston                  475-520,000    1,800                10-15-30-45               325          9
Detroit                 500-615,000    1,200                0-10-25-65                325          8
Washington, D.C.        725-800,000    1,000                0-10-20-70                250          9

Note: "Demand" = Expected Passenger Demand for upcoming year IN THOUSANDS
Note: "Competition Likelihood" = % chance that competition on the route in the upcoming year will be low-moderate-heavy-very heavy, respectively.

Analysis & Conclusions
1. We could earn a great deal more revenue by picking up some longer routes to large markets.
2. Big markets with low competition are prime prospects - even if we only offer one flight per day!
3. Looking at the probability of competition for next year, some cities are definitely "safer bets" than others.

Recommendations
1. DROP SOME OF THE EXISTING ROUTES AND PICK UP SOME PRIME ROUTES TO BIG CITIES WHERE LOW COMPETITION IS EXPECTED.

APPENDIX B

Instructions to Groups in the Consensus-Seeking Condition

Your group will use the Consensus approach in creating a strategic plan. This approach is based on a thorough group discussion involving all group members during the Group Discussion phase. Through questioning, discussion and an open exchange of views, the Consensus approach should result in a better plan than any single group member could produce. During the Group Discussion phase, everyone should feel free to offer any and all thoughts, ideas, and recommendations they have. When you have generated a set of ideas that you can all collectively agree on, your group has reached "consensus." It is not necessary that you each be completely satisfied with the final plan - you only have to consider it workable.

Here are some guidelines for achieving consensus:
1. Present your ideas clearly and logically, specifically noting any recommendations you have concerning changes that need to be made.
2. Avoid thinking that someone must win and someone must lose when there is disagreement. When a deadlock occurs, look for a compromise solution.
3. Don't change your position simply to avoid conflict and/or speed things up. Similarly, avoid things like majority voting, tossing a coin, etc., as a means of solving disagreements. Differences should be reconciled through discussion.
4. Be cautious if everyone agrees on something without discussion or examination. All ideas should be thoroughly scrutinized and alternatives should be considered.

When you all agree on a final plan, record the features of the plan on the Strategic Planning Report form provided, sign it as a group, and notify the experimenter. Your plan isn't valid and will not be implemented if it isn't signed by all members. In the event of an invalid plan, last year's plan will be implemented again by default without any changes.

REMEMBER: YOU ONLY HAVE 75 MINUTES TO FINISH! When the timer goes off, you must hand in what you have completed or your plan will not be valid.

APPENDIX C

Instructions to Groups in the Traditional Dialectical Inquiry Condition

Your group will use the Competing Plans approach to help you create your strategic plan. This simple approach is based on having two group members create their own individual plans and present them to the group. The Vice Presidents of Flight Operations and Industry Analysis have been randomly selected to create these plans. Note that your respective staffs have already generated a number of good ideas. At the beginning of the Group Discussion Phase, the two plans are presented and then each plan is critiqued by the other presenter. The presentations should get a number of ideas out on the table for further discussion, then the critiques will help to "weed out" flawed ideas. Ideas and recommendations that survive the critique are more likely to be good ideas than those that don't.

Here is the sequence you should follow to implement the Competing Plans approach:

At the start of the Group Discussion Phase:
1. The VP of Flight Operations presents Plan A (5 minutes or so)
2. The VP of Industry Analysis presents Plan B (5 minutes or so)
3. The VP of Flight Operations critiques Plan B (5 minutes or so)
4. The VP of Industry Analysis critiques Plan A (5 minutes or so)
5. Open Discussion (Remaining 55 minutes or so)

Here are some guidelines to follow in implementing the Competing Plans approach:

1). Again, the VPs of Flight Operations and Industry Analysis get to present and critique plans. The VPs of Finance and Marketing should hold any questions and comments until after the critiques. After the critiques have been conducted, everyone is free to say anything.

2). The VPs of Flight Ops. and Industry Analysis should keep in mind that a "plan" is simply a collection of workable ideas and a "critique" is just a systematic process of asking "Why?" Your materials provide guidelines for finalizing your plan, summarizing it, and critiquing the opposing plan.

3). When everyone agrees on a final plan, record the features of the plan in the Strategic Planning Document form provided, sign it as a group, and notify the experimenter. Your plan isn't valid and will not be implemented if it isn't signed by all members. If this happens, last year's plan will be implemented again by default without any changes.

4). REMEMBER: YOU ONLY HAVE 75 MINUTES TO FINISH! When the timer goes off, you must hand in what you have completed or your plan will not be valid.

APPENDIX D

Role Instructions Provided to the Vice-Presidents in the Dialectical Inquiry Process (TDI & SDI conditions)

Overview
Your role in the group discussion is similar to that of an attorney representing your Department (i.e., Flight Operations or Industry Analysis).
Your staff has put together some information that will help improve next year's operations - your mission is to see that this plan is presented, its information is considered during group discussion, and its best features make it into the final plan. Your role is to summarize the essentials of the plan your staff has put together, make sure the group understands it, and "cross-examine" the other presenter to insure that he or she knows what they're talking about.

Generating your plan:
Goal: Create a sound plan based on the information that you have at your disposal.
1). REVIEW ALL YOUR INFORMATION and TAKE NOTES
2). LIST SPECIFIC CHANGES that will result in more profit
3). RECORD these changes on the last page of your yellow packet.

Presenting your plan: (5 minutes or so)
Goal: Clearly and logically explain your plan so that others in the group know what it is you want to do and why.
4). IDENTIFY problems with last year's operations
5). LIST YOUR SPECIFIC RECOMMENDATIONS for change
6). EXPLAIN your reasons
7). SUMMARIZE the advantages of your plan

Critiquing the other plan: (5 minutes or so)
Goal: Cross-examine the other presenter to discover their reasons, understand their plan and ultimately identify weaknesses.
8). FOR EACH MAJOR POINT, ASK "WHY?"
9). IDENTIFY PROBLEMS OR CONCERNS YOU HAVE WITH THEIR PLAN
10). EXPLAIN HOW YOUR PLAN AVOIDS THESE PROBLEMS

APPENDIX E

Instructions Provided to Groups in the Synthesis Dialectical Inquiry Condition

Your group will use the Competing Plans approach to help you create your strategic plan. This simple approach is based on having two group members create their own individual plans and present them to the group. The Vice Presidents of Flight Operations and Industry Analysis have been randomly selected to create these plans. Note that your respective staffs have already generated a number of good ideas. At the beginning of the Group Discussion Phase, the two plans are presented and then each plan is critiqued by the other presenter. The presentations should get a number of ideas out on the table for further discussion, then the critiques will help to "weed out" flawed ideas. Ideas and recommendations that survive the critique are more likely to be good ideas than those that don't.

Here is the sequence you should follow to implement the Competing Plans approach:

At the start of the Group Discussion Phase:
1. The VP of Flight Operations presents Plan A (5 minutes or so)
2. The VP of Industry Analysis presents Plan B (5 minutes or so)
3. The VP of Flight Operations critiques Plan B (5 minutes or so)
4. The VP of Industry Analysis critiques Plan A (5 minutes or so)
5. Open Discussion (Remaining 55 minutes or so)

Here are some guidelines to follow in implementing the Competing Plans approach:

1). Again, the VPs of Flight Operations and Industry Analysis get to present and critique plans. The VPs of Finance and Marketing should hold any questions and comments until after the critiques. After the critiques have been conducted, everyone is free to say anything.

2). The VPs of Flight Ops. and Industry Analysis should keep in mind that a "plan" is simply a collection of workable ideas and a "critique" is just a systematic process of asking "Why?" Your materials provide guidelines for finalizing your plan, summarizing it, and critiquing the opposing plan.

3). The VPs of Marketing and Finance should listen carefully during the presentations and critiques, take notes, and attempt to extract the best features of both plans.
Afterwards, these two Vice Presidents should summarize what has transpired, identify important issues, and provide some structure for the remainder of the planning session. Your materials provide guidelines for how to implement this facilitating role.

4). When everyone agrees on a final plan, record the features of the plan in the Strategic Planning Document form provided, sign it as a group, and notify the experimenter. Your plan isn't valid and will not be implemented if it isn't signed by all members. If this happens, last year's plan will be implemented again by default without any changes.

5). REMEMBER: YOU ONLY HAVE 75 MINUTES TO FINISH! When the timer goes off, you must hand in what you have completed or your plan will not be valid.

APPENDIX F

Role Instructions Provided to Vice-Presidents in the Synthesis Role (SDI condition)

Overview
Your role in the group is similar to that of a facilitator and discussion leader. During the presentations and critiques, you should listen and try to understand what is being discussed. When the critiques are finished, it will be up to you and the other facilitator to summarize, clarify and focus the group so that you can come to some agreement.

During the debate
Goal: Try to understand the essentials of the two plans that are presented by:
1). LISTENING carefully to both plans, noting any questions that you have.
2). TAKING NOTES
3). IDENTIFYING PROS AND CONS of each plan.

After the debate
Goal: Facilitate creation of a group plan by:
4). SUMMARIZING the advantages of each plan.
5). ASKING QUESTIONS and CLARIFYING points that are still confusing
6). SUGGESTING general approaches or specific actions
7). ORGANIZING topics for further discussion

APPENDIX G

Information Sharing Check-List Measure

Group #:          Coder:          Date Coded:

GROUP INFORMATION SHARING SHEET

Instructions to the Coder: Please indicate any piece of information that is spoken out loud during the course of group discussion by circling the item on the following sheets.

Miscellaneous (All):
1. Profit Formula
2. Total Revenue Formula
3. Route Revenue Formula
4. Cash Formula
5. Costs Formula
6. Debts Formula
7. Current Assets

Miscellaneous (One Member Only):
8. Route Fuel Costs Formula
9. Flight Staff Modifier Formula
10. Last Year's Marketing Efforts
11. Marketing Coverage Rule
12. Media Redundancy Formula
13. Expected Value Formula
14. Expected Interest Rate
15. Ground Staff Formula
16. Flight Staff Formula
17. Retained % of Aircraft Sales
18. Probabilistic Nature of Competition

Table 1. Flight Operations Data

Route               Aircraft Type    # Aircraft Assigned    Flight Staff    Daily Flights
Memphis             B-757            3                      6               5
Louisville          DC-9             3                      6               6
Nashville           DC-9             3                      6               6
Raleigh-Durham      A-300            4                      4               7
Dallas-Ft. Worth    DC-8             4                      10              7
New Orleans         B-757            3                      7               5
Tampa               L-1011           3                      8               6
Miami               B-727            5                      8               9
Chicago             B-747            4                      10              7
New York            B-727            4                      9               8
Los Angeles         L-1011           3                      12              6
Totals/Ave.         -                39                     -               72

Table 2. Revenue Information

Route               Previous Competition    Ave. Fare    Our Fare    Pass. Demand    Market Share
Memphis             Low                     $250         $200        200,000         43%
Louisville          Low                     $175         $250        125,000         35%
Nashville           Moderate                $200         $225        175,000         30%
Raleigh-Durham      Moderate                $175         $125        200,000         35%
Dallas-Ft. Worth    Heavy                   $275         $325        500,000         24%
New Orleans         Heavy                   $250         $300        200,000         8%
Tampa               Heavy                   $225         $250        225,000         16%
Miami               V. Heavy                $250         $300        525,000         8%
Chicago             V. Heavy                $300         $350        600,000         16%
New York            V. Heavy                $300         $300        550,000         15%
Los Angeles         V. Heavy                $400         $425        575,000         8%
Totals/Ave.         -                       $254.55      $277.27     352,273         22%
Table 3. Cost Breakdown

Type                     Cost        % of Total
Aviation Fuel            65.91 M     36.6
Facilities/Equipment     48.00 M     26.6
Ground Staff             28.80 M     16.0
Flight Staff             25.61 M     14.2
Maintenance              5.85 M      3.2
Loan Repayment           2.75 M      1.5
Finance Charges          2.30 M      1.3
Advertising/Marketing    1.00 M      0.6
Total                    180.22 M    100.0

Table 4. Profit Information by Route (in millions)

Route               Route Revenue    Route Cost    Profit (millions)    Profit Ratio
Memphis             $17.20           $10.36        $6.84                .66
Louisville          $10.94           $11.75        -$0.81               -.07
Nashville           $11.81           $10.96        $0.85                .08
Raleigh-Durham      $8.75            $10.85        -$2.10               -.19
Dallas-Ft. Worth    $39.00           $17.34        $21.66               1.25
New Orleans         $4.80            $11.13        -$6.33               -.57
Tampa               $9.00            $14.74        -$5.74               -.39
Miami               $12.60           $20.09        -$7.49               -.37
Chicago             $33.60           $19.77        $13.83               .70
New York            $24.75           $20.54        $4.21                .20
Los Angeles         $19.55           $32.69        -$13.14              -.40
Totals/Ave.         $192.00          $180.22       $11.78               .08

Summary: Annual Fuel Costs by Route

Route                Rd. Trp. Distance    Cost/Flight    Annual Fuel Cost
Memphis              600                  $900           $1.643 M
Louisville           600                  $1080          $2.365 M
Nashville            400                  $720           $1.577 M
Raleigh-Durham       500                  $500           $1.278 M
Dallas-Fort Worth    1000                 $2300          $5.877 M
New Orleans          800                  $1200          $2.190 M
Tampa                800                  $2200          $4.818 M
Miami                1100                 $2310          $7.588 M
Chicago              1300                 $3250          $8.304 M
New York             1400                 $2940          $8.585 M
Los Angeles          3600                 $9900          $21.681 M
Totals/Ave.          1100                 $2445          $65.91 M

Summary: Aircraft Characteristics & Cost Information

Type      Accom. Class    Flight Staff Lim.    Cost/Mile    Purchase Cost (mil.)
A-300     E               4                    $1.00        $1.0 M
DC-9                      8                    $1.80        $2.0 M
B-757     D               7                    $1.50        $2.5 M
L-1011    C               12                   $2.75        $3.0 M
B-727     C               9                    $2.10        $4.5 M
DC-8      B               11                   $2.30        $6.0 M
B-747     B               15                   $2.50        $6.0 M
DC-10     A               12                   $2.40        $8.0 M
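Two small Python helpers, reconstructed from the tables above rather than stated anywhere in the materials, that reproduce the printed Profit Ratio and Annual Fuel Cost figures (aircraft assignments and cost-per-mile values come from Table 1 and the aircraft summary):

def profit_ratio(route_revenue, route_cost):
    # Table 4's "Profit Ratio" is profit divided by route cost.
    return (route_revenue - route_cost) / route_cost

def annual_fuel_cost(cost_per_mile, roundtrip_distance, daily_flights):
    # Cost/Flight = Cost/Mile * Roundtrip Distance, and each route flies every day of the year.
    return cost_per_mile * roundtrip_distance * daily_flights * 365

print(round(profit_ratio(17.20, 10.36), 2))    # 0.66, the Memphis figure in Table 4
print(round(profit_ratio(19.55, 32.69), 2))    # -0.4, the Los Angeles figure
print(round(annual_fuel_cost(1.50, 600, 5)))   # 1642500, the $1.643 M Memphis figure above
print(round(annual_fuel_cost(2.75, 3600, 6)))  # 21681000, the $21.681 M Los Angeles figure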
Modifier #1: Convenience (Daily Flights)

                  COMPETITION
Daily Flights     Low-Moderate    Heavy-Very Heavy
1                 -12%            -25%
2                 -10%            -16%
3                 -8%             -11%
4                 -5%             -7%
5                 -2%             -4%
6                 0%              -2%
7                 +2%             0%
8                 +4%             0%
9                 +6%             +1%
10                +8%             +3%
11                +9%             +5%
12                +10%            +7%
13                +11%            +8%
14                +11%            +9%

Modifier #2: Comfort (Accommodations)

                  Roundtrip Distance: <1500 miles          Roundtrip Distance: 1501+ miles
Accommodations    Low or Mod.    Heavy or V. Heavy         Low or Mod.    Heavy or V. Heavy
"E" Class         -2%            -10%                      -20%           -35%
"D" Class         0%             -5%                       -10%           -20%
"C" Class         0%             0%                        -1%            -7%
"B" Class         +2%            +5%                       +7%            +5%
"A" Class         +3%            +7%                       +12%           +9%

Previous and Projected Costs

Item                            Last Year      Projected Next Yr.
Aviation Fuel (tot.)            $65,910,000    --
Ground Staff personnel @        $40,000        $42,000
Flight Staff personnel @        $45,000        $49,000
Maintain Domestic Facility @    $4,000,000     $4,250,000
Start-up Domestic Facility @    --             $5,000,000
Start-up Foreign Facility @     --             $7,000,000
Aircraft Maintenance @          $150,000       $160,000
Marketing (tot.)                $1,000,000     --
Loan Repayment (A)              $1,500,000     $1,500,000
Loan Repayment (B)              $1,250,000     $1,250,000
Finance Charges (A)             $633,750       $536,250
Finance Charges (B)             $1,663,750     $1,526,250

Loan I.D.     Outstanding Principal    Interest Rate
001-91 (A)    9,000,000                6.5%
002-94 (B)    14,500,000               11.0%

Relationship between Fare Price and Market Share

                               Competition
Deviation from Average Fare    Low    Moderate    Heavy    Very Heavy
$100 under ave.                53%    44%         35%      26%
$75 under ave.                 48%    39%         31%      25%
$50 under ave.                 44%    35%         28%      24%
$25 under ave.                 41%    32%         25%      21%
Same as Ave.                   40%    30%         22%      17%
$25 over ave.                  39%    28%         19%      13%
$50 over ave.                  37%    24%         16%      10%
$75 over ave.                  34%    20%         12%      9%
$100 over ave.                 30%    15%         9%       8%

Marketing Modifiers

Media         Cost/City     Effect/Route (Low/Moderate Competition)    Effect/Route (Heavy/V. Heavy Competition)
Television    $5,000,000    +3-4%                                      +2-5%
Radio         $3,000,000    +2-3%                                      +2-3%
Newspaper     $2,000,000    +2%                                        +1-3%
Billboard     $1,000,000    +1%                                        +1-2%
NONE          0             0%                                         -3%

Existing Routes: ATLANTA-

Route               Demand         Rd. Trp. Distance    Competition Likelihood    Ave. Flight Staff
Memphis             195-205,000    600                  70-15-15-0                6
Louisville          100-150,000    600                  75-15-10-0                6
Nashville           170-210,000    400                  60-30-10-0                5
Raleigh-Durham      200-225,000    500                  50-35-15-0                5
Dallas-Ft. Worth    425-475,000    1,000                5-20-60-15                9
New Orleans         190-220,000    800                  15-35-35-15               7
Tampa               215-245,000    800                  5-30-50-15                9
Miami               500-530,000    1,100                0-15-65-30                10
Chicago             605-625,000    1,300                0-5-15-80                 10
New York            560-580,000    1,400                0-10-15-75                11
Los Angeles         585-610,000    3,600                0-10-20-70                12

Potential Routes: ATLANTA-

Route             Demand         Rd. Trp. Distance    Competition Likelihood    Ave. Fare    Ave. Flight Staff
Madrid            85-90,000      8,000                70-15-15-0                $875         12
Paris             75-125,000     9,000                55-20-20-5                $900         11
London            150-200,000    9,000                50-30-10-10               $850         12
Sao Paulo-Rio     80-100,000     12,000               85-15-0-0                 $1200        10
Mexico City       110-175,000    2,800                40-30-20-10               $500         11
Cancun            90-120,000     3,000                55-30-10-5                $500         13
Virgin Islands    85-145,000     4,000                45-35-15-5                $550         11

Potential Routes: ATLANTA-

Route                   Demand         Rd. Trp. Distance    Competition Likelihood    Ave. Fare    Ave. Flight Staff
Seattle                 305-350,000    4,500                50-30-15-5                450          12
Minneapolis-St. Paul    255-295,000    1,800                65-30-5-0                 325          11
Cincinnati              265-295,000    700                  60-20-15-5                250          7
Indianapolis            245-260,000    800                  75-20-5-0                 225          9
San Francisco           395-470,000    4,200                40-20-20-20               425          11
Denver                  420-500,000    2,300                20-40-30-10               375          12
St. Louis               260-280,000    900                  25-60-10-5                250          6
Buffalo                 270-295,000    1,400                35-30-30-5                300          8
Kansas City             245-255,000    1,300                30-55-15-0                300          9
Pittsburgh              385-450,000    1,000                10-25-55-10               275          8
Philadelphia            335-365,000    1,200                15-25-50-10               300          7
Phoenix                 395-490,000    3,100                10-65-20-5                375          13
Boston                  475-520,000    1,800                10-15-30-45               325          9
Detroit                 500-615,000    1,200                0-10-25-65                325          8
Washington, D.C.        725-800,000    1,000                0-10-20-70                250          9

Miscellaneous Other:

APPENDIX H

Observer Rating Form: Intragroup Conflict

0 = Can't recall any instances
1 = A few minor, isolated instances
2 = Several major, isolated instances
3 = Ongoing occurrences throughout discussion

Please indicate the extent to which you observed instances of the following during group interaction/discussion:

1. Negative Affect
- Long, awkward silence
- Frustration
- Physical agitation (fidgeting, squirming, tapping, etc.)
- Annoyed tone of voice
- Evident tension

2. Hostile/Degrading Interpersonal Behavior
- Mocking/sarcastic tone of voice
- Condescension/"lecturing" other members
- Insults/snide remarks
- Efforts to force others to agree or not agree

3. Irrational Thought Processes
- Stubborn refusal to acknowledge appropriateness of reasonable explanations
- Meandering conversations unrelated to real issues the group has identified
- Settling disagreements by arbitrary means, or compromise plans that are clearly political rather than based on the merits of the ideas.

4. TOTAL (SUM of #s 1-3)

APPENDIX I

Questionnaire Items: Intragroup Conflict

Response Scale:
1 = Strongly Disagree
2 = Disagree
3 = Sort of Disagree, Sort of Agree
4 = Agree
5 = Strongly Agree

1. There was a good deal of tension in the group during the simulation.
2. There was no hostility in the group during this task.
3.** At times, voices were raised during discussion.
4. At times the group didn't seem to care if we came up with a good plan.
5. Sometimes I felt the group couldn't comprehend simple logic.
6. There was a lot of frustration in our group during this study.
7. People got angry during planning.
8.** We got along well as a group.
9. There were times when the group considered ideas that were clearly unreasonable.
10. There was a lot of arguing during the group planning phase.

** Indicates items removed from original scale as a result of poor item-total correlations
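The footnote refers to item-total correlations. For readers unfamiliar with that screening step, here is a generic Python sketch of a corrected item-total correlation check; it is not the author's analysis code, and the response matrix is fabricated purely for illustration:

import numpy as np

# `responses` would be a participants-by-items matrix of 1-5 ratings.
responses = np.array([
    [4, 2, 5, 3, 4, 4, 3, 2, 4, 5],
    [2, 4, 2, 2, 1, 2, 2, 4, 2, 1],
    [5, 1, 4, 4, 5, 5, 4, 1, 5, 4],
    [3, 3, 3, 2, 3, 3, 3, 3, 3, 3],
    [1, 5, 1, 1, 2, 1, 2, 5, 1, 2],
])

for item in range(responses.shape[1]):
    rest = np.delete(responses, item, axis=1).sum(axis=1)   # total of the remaining items
    r = np.corrcoef(responses[:, item], rest)[0, 1]
    print(f"Item {item + 1}: corrected item-total r = {r:.2f}")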
APPENDIX J

Observer Rating Form: Process Facilitation

0 = No instances recalled
1 = A few isolated instances recalled
2 = A moderate number of instances recalled
3 = Ongoing throughout discussion

Please indicate the extent to which you observed the following during group interaction/discussion:

1. Reflecting/Summarizing: Statements that paraphrase or restate what another group member has just said in one form or another for the purpose of ensuring that communication was clear and that the listener(s) understood the speaker correctly. Consider things such as paraphrasing or highlighting what OTHER members have said, or repeating back ideas, suggestions or recommendations to insure the listener or group understood them correctly.

2. Clarifying: Statements addressed to a group member or members that seek to clarify information, problems or issues faced by the entire group so that the entire group can work from a common perspective. Consider things like attempts to sort out confusing ideas expressed by other group members, asking for information that is needed to make sense of an issue, "framing" an issue or defining a problem that has the group hung up.

3. Integrating: Assertive statements that attempt to draw together two or more distinct lines of thought, and/or attempts to consolidate ideas and move the discussion forward. Consider things like attempts to combine the best ideas presented by two or more group members into an integrated plan.

4. Focusing/Structuring: Statements that move the group forward by identifying areas where the group has reached agreement, and statements that identify areas which the group still needs to consider. Consider things like announcements similar to "I think we all agree that...," suggesting the next topic for discussion, proposing a timetable, etc.

TOTAL (SUM of #s 1-4)

Note: Only consider things that are noticeable enough that all or most other group members would have perceived them.

APPENDIX K

Observer Rating Form: Controversy

0 = Can't recall any instances
1 = A few minor, isolated instances
2 = Several major instances
3 = Ongoing occurrences throughout discussion

Please indicate the extent to which you observed instances of the following during group interaction/discussion:

1. Indirect Challenge
Questions directed to a group member that imply that the speaker doesn't agree with a suggestion or recommendation, or doesn't understand reasons that have been offered. Includes things like presenting contradictory information in response to a suggestion or recommendation, asking "Why?," or agreeing with only part of what somebody else says.

2. Explicit Disagreement
DIRECT statements that show the speaker does not agree with a comment, idea or recommendation, or the information that supports it. Includes things like saying "I don't agree," "I think you're wrong," or "Not according to my information." [Do not include hints or subtle suggestions]

3. Presentation of Opposing Viewpoints
Presentation of a set of specific recommendations, a general approach or a central idea that is incompatible with what has been suggested by one or more other group members.
[Do not consider minor comments made in passing - must be a deliberate effort attended to by the rest of the group.]

4. TOTAL (SUM of #s 1-3)

Only consider comments or behaviors that are noticeable enough that all or most other group members would have perceived them.

APPENDIX L

Observer Rating Form: Controversy (Recoded)

0 = None
1 = Superficial (Mention B)
2 = Moderate (Support for A [or B] from someone)
3 = Severe (Support for B [or A] from someone)
4 = Beyond Level 3

Controversy is defined as real or apparent disagreement within the group concerning task-specific strategies the group should use and/or how the group should structure its activities as it completes the task. Controversy is impersonal; it refers to disagreement over ideas. Do not confuse this with interpersonal conflict!

Please indicate the extent to which you observed DISAGREEMENT over the following things during group interaction/discussion.

Task-Related Controversy: Issues and Strategy
1. Expansion v. Consolidation
2. High Fare v. High Market Share
3. Fuel Efficiency v. Accommodations
4. Increasing Revenue v. Cutting Costs
5. Where/How Should Cash Be Spent (Marketing, New Aircraft, Loans, Invest)
6. Free-Floating (Cannot be categorized)

Process-Related Controversies: Activities and Decision Making
7. How Group Should Identify Ideas
8. How Group Should Approach Task (Serial v. Whole)
9. Who Should Make Decisions (Majority Rule v. Expert)

10. TOTAL (SUM of #s 1-9)

APPENDIX M

Task Knowledge Measures

GENERAL

The purpose of this check is to help you assess the degree to which you understand the information you have been provided. Please find the answers to the following questions using any and all materials provided to you. Feel free to change your answers until you are satisfied you have them right. If you can answer all or most of the questions, you will be a good representative for your department. PLEASE COMPLETE THIS FORM BY THE END OF GROUP DISCUSSION.

General Concepts: Please put your answer below the question.
1. All flights must be non-stop between some city and where?
2. How many roundtrip flights can ONE aircraft make in a single day?
3. How much money does your firm currently have in CASH?
4. How many B-727's does your firm currently own?
5. Route Revenue increases as three other variables increase. What are these three variables?
a.
b.
c.
6. Which route offered the most Daily Flights last year?
7. Which route earned the highest Market Share last year?
8. Which route had the highest Passenger Demand last year?
9. Which route lost the most money last year?
10. Which route had the worst profit ratio last year?

FLIGHT OPERATIONS

1. What is the DIFFERENCE in Market Share between having 3 Daily Flights on a route and 9 Daily Flights when there is Heavy Competition?
2. What is the modifier to Market Share for having aircraft with "D" class Accommodations on a route with Roundtrip Distance = 2000 miles and Moderate Competition?
3. What is the modifier to Market Share on a route with Very Heavy Competition and 3 more Flight Staff on each route than the industry average?
4. How much would it cost per year in aviation fuel for a route where:
a. Cost/Mile of the aircraft used = $1.90
b. Roundtrip distance = 1,000 miles
c. Daily Flights = 10
5. What would be the cost SAVINGS for the above route if the Cost/Mile was 1.25 and there were only 5 Daily Flights? (Remember to find the difference.)

FINANCE
1. How much money can you expect to make on the sale of an aircraft listed at $6,000,000?
2. If your firm offered 100 Total Daily Flights on all routes together, how many Ground Staff would be required?
3. According to your staff's estimates, what would be the cost of paying 750 Ground Staff next year?
4. How many TOTAL Flight Staff personnel are needed for a route with 7 Flight Staff per flight and 6 Daily Flights?
5. How much EXTRA does your staff estimate it would cost to Start-Up a Foreign Facility next year as compared to Maintaining an Existing Domestic Facility? (Remember to find the difference.)

MARKETING

1. What is the Market Share you would earn if you were charging $75 LESS THAN the average on a route with Moderate Competition?
2. What is the estimated bonus effect on Market Share for a route with Low Competition where TV, radio, newspaper and billboard advertising are ALL used (assuming maximum possible benefit for each medium)?
3. What is the modifier to Market Share for routes with Heavy or Very Heavy Competition if you don't use any form of Advertising?
4. What is the expected value for a route where:
Price of Fare = $300
Average Fare = $250
Competition = Very Heavy
5. Using the Passenger Revenue formula, how much EXTRA PROFIT would you make by charging $250 as opposed to $300 in the previous question, assuming Passenger Demand = 500,000 for the year? (Remember to find the difference.)

INDUSTRY ANALYSIS

1. Of the cities NOT currently serviced by SouthEast (i.e., potential cities), which cities will definitely have a Passenger Demand greater than 400,000 next year?
2. Which POTENTIAL routes have a 50% (or greater) chance of having Low Competition next year?
3-5. Assuming the maximum possible Passenger Demand and the most likely Competition Level (based on probabilities given), which three potential cities would bring in the most revenue next year?

APPENDIX N

Questionnaire Items: Implementation Quality

1. Please identify the inquiry method your group used during this study.
2. Please identify your role within the group.

Please use the following scale to respond to the next four items:
1 = Strongly Disagree
2 = Disagree
3 = Undecided
4 = Agree
5 = Strongly Agree

1. I feel that I did a good job performing my role in the group.
2. Most of the time, I wasn't conscious of the role I was supposed to be playing.
3. Most of the other members of my group performed their role adequately.