This is to certify that the dissertation entitled "Antecedents and Consequences of Leader Utilization of Staff Information in Decision Making Teams: Addressing a Leadership Dilemma," presented by Jean M. Phillips, has been accepted towards fulfillment of the requirements for the Ph.D. degree in Business Administration.

Major professor
Date: January 3, 1997

ANTECEDENTS AND CONSEQUENCES OF LEADER UTILIZATION OF STAFF INFORMATION IN DECISION MAKING TEAMS: ADDRESSING A LEADERSHIP DILEMMA

By

Jean M. Phillips

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Management

1997

ABSTRACT

ANTECEDENTS AND CONSEQUENCES OF LEADER UTILIZATION OF STAFF INFORMATION IN DECISION MAKING TEAMS: ADDRESSING A LEADERSHIP DILEMMA

By Jean M. Phillips

This dissertation focused on the effectiveness of hierarchical decision making teams with distributed expertise. Leaders of this type of group face a dilemma: making accurate decisions often requires differentially utilizing staff members based on their accuracy, but this differential utilization may lead to negative staff reactions. This dissertation consisted of two experiments, which together allowed for the examination of the antecedents and consequences of leader utilization of the information provided by staff members in this type of team. Participants in both studies learned a computerized decision making task requiring the classification of aircraft. In Experiment I, the presence or absence of staff judgment confidence and cumulative past performance feedback were crossed in a context in which differential utilization of staff members was optimal. Hypotheses related to the effect of providing leaders with staff past performance and/or staff judgment confidence information on the differential and accurate weighting of their staff were examined. Experiment II tested hypotheses relevant to the consequences of leaders' utilization of staff members for staff member reactions. Leader weighting strategy (equal or differential) was crossed with team performance (low or high). The effect of team performance and different types of leader utilization of staff member judgments on staff member reactions was tested. Results suggest that providing leaders with staff past accuracy information is related to greater variability in staff utilization and greater staff weighting accuracy. Leader knowledge of staff members' judgment confidence did not lead to greater leader weighting variability or improved weighting accuracy, although staff judgment confidence level was positively related to leaders' weighting of staff judgments.
No interaction effects between the availability of staff past judgment accuracy and staff judgment confidence were found. Experiment II found that team performance was the primary determinant of staff members' reactions. The findings of previous decision influence research were found to generalize to higher-performing, but not to lower-performing, teams. Team performance interacted with utilization level, utilization relative to the other staff members, and utilization accuracy in predicting some staff reactions. Implications and future research directions are discussed.

Copyright by
Jean M. Phillips
1997

ACKNOWLEDGEMENTS

I would like to thank my family and friends who supported and encouraged me throughout the completion of this project. I would also like to thank the members of my committee, Dan Ilgen, John Hollenbeck, Alison Barber and Rick DeShon. I appreciate the time and thought each of them put into this research and their willingness to develop my skills as a researcher. I would also like to acknowledge the support of the Office of Naval Research, Grant #N00014-93-O983, John R. Hollenbeck and Daniel R. Ilgen, Principal Investigators. While I am grateful for their support, the ideas expressed in this dissertation are not necessarily endorsed by the Navy.

I would particularly like to thank Dan Ilgen, whose quick return of drafts enabled this project to maintain its momentum throughout the process. The quality and depth of the comments on each draft continually challenged my thinking, and greatly improved the quality of the research. Each draft was a learning experience, and I want to thank him for his many contributions. John Hollenbeck also deserves special mention as someone who has contributed greatly over the last few years to my skills as a researcher and was a lot of fun to work with.

Many other people have influenced my thinking in this area over the last five and a half years, and I would be remiss not to mention them. Doug Sego and Debbie Major were fantastic during the "early days" of the Team Effectiveness Research Lab, and I learned a great deal from them about the research process. Jack Hunter also challenged and expanded my thinking on the issues raised in this dissertation. I also appreciate the time many fellow graduate students spent listening to my ideas and the comments they provided that stretched my thinking and improved the quality of this research.

I would also like to acknowledge the work of Anders Johansen and Tom Peters in their development and maintenance of the task and network used to collect the data. They were quick with solutions whenever problems arose, and are terrific people. I also appreciate the efforts of Spence Tower and Lori Sheppard in helping me to keep my sanity during the many hours of data collection. Kevin and Melanie Ford were also terrific at helping me keep things in perspective throughout the process. I am also indebted to Matt Taylor, who served as the confederate in the second study. His efforts allowed data collection to run as smoothly as possible, and his positive attitude was contagious throughout the study.

I would also like to thank Stan Gully for listening to, empathizing with, and supporting me throughout the dissertation and throughout graduate school. Your enthusiasm and willingness to help me track down answers to questions helped keep me excited about this research. I will always welcome and appreciate your insights and opinions.
In closing, I would like to thank everyone who contributed in any way to the completion of this dissertation whom I did not mention by name. This project was the result of the efforts of many people, and could never have been accomplished alone.

TABLE OF CONTENTS

LIST OF TABLES .......... x
LIST OF FIGURES .......... xiii

CHAPTER 1
INTRODUCTION AND BACKGROUND .......... 1
Lens Model .......... 7
Social Judgment Theory .......... 10
Types of Leader Utilization of Staff Member Judgments .......... 12
Dyadic LUSJ .......... 12
Dyadic LUSJ Accuracy .......... 12
Relative Dyadic LUSJ .......... 12
Dyadic LUSJ Variability .......... 13
Team Effectiveness Consequences of Leader Utilization of Staff Judgments .......... 14
Decision Accuracy .......... 14
Staff Development .......... 17
Staff Members' Reactions to the Leader's Utilization of Their Judgments .......... 18
Team Viability .......... 21
Summary of Consequences of LUSJ .......... 23
Antecedents of Leader Utilization of Staff Judgments .......... 23
Leader Decision-Making Theories .......... 24
Conflicting Findings Regarding the Antecedents of LUSJ .......... 26
Resolving the Conflict: The Effects of Staff Past Accuracy and Confidence in Judgment .......... 27
Limitations .......... 31
Summary .......... 32

CHAPTER 2
RESEARCH PROBLEM AND HYPOTHESIS .......... 34
Antecedents of LUSJ .......... 35
Staff Member Past Judgment Accuracy and Judgment Confidence .......... 35
Consequences of LUSJ .......... 41
Leader Utilization of Staff Member Judgments .......... 41

CHAPTER 3
EXPERIMENT I: ANTECEDENTS OF LUSJ .......... 49
Method
Experimental Design .......... 49
Participants .......... 49
Task Description .......... 50
Task Training .......... 53
Measures .......... 53
Time .......... 53
Cognitive Ability .......... 53
Past Accuracy Availability .......... 54
Judgment Confidence Availability .......... 54
Staff Member Past Judgment Accuracy Level .......... 54
Self-Report Staff Member Past Accuracy .......... 55
Staff Confidence Level .......... 55
Dyadic LUSJ .......... 55
Self-Report Weighting of a Staff Member .......... 57
LUSJ Accuracy .......... 57
Dyadic LUSJ Variability .......... 58
Procedure .......... 59
Results .......... 61
Discussion .......... 81

CHAPTER 4
EXPERIMENT II: CONSEQUENCES OF LUSJ .......... 85
Method .......... 85
Experimental Design .......... 85
Participants .......... 85
Task Description .......... 86
Task Training .......... 88
Measures .......... 88
Team Performance .......... 88
Dyadic LUSJ .......... 89
Self-Report LUSJ .......... 89
Dyadic LUSJ Accuracy .......... 89
Relative Dyadic LUSJ .......... 90
Staff Member Willingness to Return .......... 90
Staff Member Desire for Change on the Next Task .......... 91
Staff Member Satisfaction with Leader .......... 91
Task Withdrawal .......... 91
Staff Member Self-Efficacy .......... 91
Procedure .......... 92
Results .......... 93
Data Analysis .......... 93
Discussion .......... 116

CHAPTER 5
IMPLICATIONS, LIMITATIONS, AND AREAS FOR FUTURE RESEARCH .......... 122
Antecedents of LUSJ .......... 122
Consequences of LUSJ .......... 126
Limitations .......... 129

FOOTNOTES .......... 131

LIST OF REFERENCES .......... 132

APPENDIX A
TESTS OF LUSJ INDICES .......... 142

APPENDIX B
MATERIALS USED IN BOTH EXPERIMENTS I AND II: CONSENT FORM, DEMOGRAPHIC QUESTIONNAIRE, GENERAL TRAINING MANUAL AND TRAINING SCRIPT .......... 145

APPENDIX C
EXPERIMENT I MATERIALS: LEADER'S POSITION-SPECIFIC TRAINING MANUAL, POST-SESSION QUESTIONNAIRE AND DEBRIEFING FORM .......... 153

APPENDIX D
EXPERIMENT II MATERIALS: POSITION-SPECIFIC TRAINING MANUALS FOR STAFF, POST-SESSION QUESTIONNAIRES, AND DEBRIEFING FORM .......... 157

APPENDIX E
REPEATED MEASURES REGRESSION ANALYSES FOR EXPERIMENT II USING EFFECTS CODING .......... 168

LIST OF TABLES

Table 1a - Means, Standard Deviations and Intercorrelations of Variables in Table 1b .......... 62
Table 1b - Repeated Measures Regression Analysis of Time and the Presence of Cumulative Past Accuracy Information on LUSJ Variability .......... 62
Table 2a - Means, Standard Deviations and Intercorrelations of Variables in Table 2b .......... 63
Table 2b - Repeated Measures Regression Analysis of Time, the Level of Staff Member Past Judgment Accuracy and the Presence of Cumulative Past Accuracy Information on Dyadic LUSJ .......... 65
Table 3a - Means, Standard Deviations and Intercorrelations of Variables in Table 3b .......... 69
Table 3b - Repeated Measures Regression Analysis of Time and the Presence of Staff Cumulative Past Accuracy Information on Leaders' LUSJ Accuracy Across Staff Members .......... 70
Table 4a - Means, Standard Deviations and Intercorrelations of Variables in Table 4b .......... 71
Table 4b - Repeated Measures Regression Analysis of Time and the Presence of Staff Judgment Confidence on LUSJ Variability .......... 71
Table 5a - Means, Standard Deviations and Intercorrelations of Variables in Table 5b .......... 72
Table 5b - Hierarchical Regression Analysis of Time, Staff Member Past Accuracy Level and the Level of Staff Member Judgment Confidence on Dyadic LUSJ .......... 72
Table 6a - Means, Standard Deviations and Intercorrelations of Variables in Table 6b .......... 74
Table 6b - Repeated Measures Regression Analysis of Time and the Presence of Staff Judgment Confidence on Leaders' LUSJ Accuracy Across Staff Members .......... 74
Table 7a - Means, Standard Deviations and Intercorrelations of Variables in Table 7b .......... 75
Table 7b - Repeated Measures Regression Analysis of Time, the Availability of Cumulative Staff Past Accuracy Information and the Presence of Staff Judgment Confidence on Leaders' LUSJ Variability .......... 75
Table 8a - Means, Standard Deviations and Intercorrelations of Variables in Table 8b .......... 79
Table 8b - Repeated Measures Regression Analysis of Time, the Availability of Cumulative Staff Past Accuracy Information and the Presence of Staff Judgment Confidence on Leaders' LUSJ Accuracy Across Staff Members .......... 80
Table 9 - Means, Standard Deviations, and Intercorrelations of Variables Used in Experiment II .......... 94
Table 10 - Repeated Measures Regression Analysis of LUSJ and Team Performance on Willingness to Return .......... 95
Table 11 - Repeated Measures Regression Analysis of LUSJ and Team Performance on Desire to Change for Next Task .......... 98
Table 12 - Repeated Measures Regression Analysis of LUSJ and Team Performance on Satisfaction With Leader .......... 99
Table 13 - Repeated Measures Regression Analysis of LUSJ and Team Performance on Task Withdrawal .......... 102
Table 14 - Repeated Measures Regression Analysis of LUSJ and Team Performance on Self-Efficacy .......... 103
Table 15 - Repeated Measures Regression Analysis of Relative LUSJ and Team Performance on Willingness to Return .......... 106
Table 16 - Repeated Measures Regression Analysis of Relative LUSJ and Team Performance on Desire to Change for Next Task .......... 107
Table 17 - Repeated Measures Regression Analysis of Relative LUSJ and Team Performance on Satisfaction With Leader .......... 108
Table 18 - Repeated Measures Regression Analysis of Relative LUSJ and Team Performance on Task Withdrawal .......... 108
Table 19 - Repeated Measures Regression Analysis of Relative LUSJ and Team Performance on Self-Efficacy .......... 109
Table 20 - Repeated Measures Regression Analysis of Dyadic LUSJ Accuracy and Team Performance on Willingness to Return .......... 110
Table 21 - Repeated Measures Regression Analysis of Dyadic LUSJ Accuracy and Team Performance on Desire to Change for Next Task .......... 112
Table 22 - Repeated Measures Regression Analysis of Dyadic LUSJ Accuracy and Team Performance on Satisfaction With Leader .......... 113
Table 23 - Repeated Measures Regression Analysis of Dyadic LUSJ Accuracy and Team Performance on Task Withdrawal .......... 114
Table 24 - Repeated Measures Regression Analysis of Dyadic LUSJ Accuracy and Team Performance on Self-Efficacy .......... 116

LIST OF FIGURES

Figure 1 - Brunswick's (1956) Lens Model of Individual Decision Making .......... 8
Figure 2 - Brehmer and Hagafors' (1986) Team Lens Model .......... 11
Figure 3 - Hypothesized Staff Member Past Judgment Accuracy Level by Availability of Staff Cumulative Past Judgment Accuracy Information to Leader Interaction on Dyadic LUSJ .......... 37
Figure 4 - Hypothesized Staff Member Dyadic LUSJ Level by Team Performance Interaction on Staff Member Reactions .......... 43
Figure 5 - Hypothesized Staff Member Relative Dyadic LUSJ by Team Performance Interaction on Staff Member Reactions .......... 45
Figure 6 - Hypothesized Staff Member Dyadic LUSJ Accuracy by Team Performance Interaction on Staff Member Reactions .......... 47
Figure 7 - Hypothesis 1a: Interaction Effect of Time and Cumulative Past Accuracy on LUSJ Variability .......... 64
Figure 8 - Hypothesis 1b: Interaction Effect of Time and Staff Past Accuracy Level on LUSJ .......... 67
Figure 9 - Hypothesis 1b: Interaction Effect of Cumulative Past Accuracy Availability and Staff Past Accuracy Level on LUSJ .......... 68
Figure 10 - Hypothesis 3: Interaction Effect of Confidence Availability and Cumulative Past Accuracy Availability on LUSJ Variability .......... 77
Figure 11 - Hypothesis 3a: Interaction Effect of Time and Cumulative Past Accuracy Availability on LUSJ Variability .......... 78
Figure 12 - Hypothesis 4a: Interaction Effect of Team Performance and LUSJ on Willingness to Return .......... 96
Figure 13 - Hypothesis 4d: Interaction Effect of Team Performance and LUSJ on Task Withdrawal .......... 101
Figure 14 - Hypothesis 5a: Interaction Effect of Team Performance and Relative LUSJ on Willingness to Return .......... 105
Figure 15 - Hypothesis 6a: Interaction Effect of Team Performance and Dyadic LUSJ Accuracy on Willingness to Return .......... 111
Figure 16 - Hypothesis 6e: Interaction Effect of Team Performance and Dyadic LUSJ Accuracy on Self-Efficacy .......... 115

Chapter 1

INTRODUCTION AND BACKGROUND

A pervasive finding in the decision making literature is that individuals have a limited capacity for processing information, resulting in difficulties in coping with complex decision problems (Kahneman, Slovic, & Tversky, 1982; Slovic & Lichtenstein, 1971). Organizations often seek to improve their handling of complex decision situations by assigning problems to groups (Brehmer & Hagafors, 1986; Salas, Dickinson, Converse, & Tannenbaum, 1992). By increasing the number of information processors, assigning a complex decision task to multiple people can improve decision making by improving the acquisition, encoding, storage, and retrieval of information (Michaelsen, Watson, & Black, 1989; Duffy, 1993). Whether groups can process information better than individuals, however, is not clear. Steiner (1972) recognized that groups have the potential to enhance as well as to degrade individual decision making processes. Research has supported both positions (see, e.g., Dyer, 1984; Tindale, 1993). The factors that make a group more likely to perform effectively, or even to outperform individual decision makers, are far from well understood.

There are many types of groups (e.g., hierarchical vs. non-hierarchical) and many types of group tasks (e.g., decision making or production). Progress in understanding groups and group processes has been most pronounced when research has focused on a specific type of group performing a specific type of task. For example, juries have been extensively studied (Davis, 1992; Thompson, 1993), resulting in a relatively solid knowledge base on this type of small decision making group. This knowledge can be used to generalize to non-jury groups that share a similar structure and face a similar decision making task, where every member has the same information about the decision problem and an equal vote when it comes to reaching a final decision.

This dissertation focuses on another type of small group which has been less well studied: hierarchical decision making teams with distributed expertise (HTDE). This type of group has several distinguishing characteristics. Status differences exist among the members of such teams, with the responsibility for decisions distributed unequally among team members. Specifically, responsibility for the final decision lies at the top of the hierarchy. This characteristic differentiates these teams from teams relying on consensus in decision making (e.g., juries). Distributed expertise refers to the characteristic that team members differ in the amount of knowledge and information each brings to the decision problem (Hollenbeck, Ilgen, Sego, Hedlund, Major, & Phillips, 1995). In such hierarchical teams, the decision problem becomes divided into a number of subproblems, and each subproblem is the responsibility of an expert (Brehmer & Hagafors, 1986).
Staff members forward their interpretation of their particular subproblem to the leader, and the leader's decision is usually based, at least in part, on the information provided by the leader's staff of experts. The results of the decisions made by the leader have consequences for both the leader and the staff.

This type of group is best characterized as a team rather than a set of independent decision makers because of members' interdependence, common goals, and shared fate. The fact that members of the team can influence each other in the course of making a decision also makes this type of group a team rather than a set of independent decision makers (Hollenbeck et al., 1995). An HTDE can also be considered a special type of team due to its specific features. The type of task confronting the team is strictly decision making (as opposed to production). Although the leader often depends on the judgments of his or her staff, the ultimate responsibility for the decision rests with the leader. The status hierarchy and differential nature of each staff member's expertise also differentiate this type of team from teams in general (Hollenbeck et al., 1995). These teams are ubiquitous in military, medical, industrial and government contexts because of their ability to process larger amounts of information and therefore combat information overload. Most top executives of corporations and military commanders would not be able to operate without appropriate staff support (Potter & Fiedler, 1981). Although these teams are prevalent in a variety of organizations, little is known about the leader- and team-related processes that contribute to their effectiveness.

Leaders of HTDE face an important dilemma. On one hand, good decisions usually require that the leader pay closest attention to those staff members who make the best judgments. In cases in which staff members vary in ability or in which the information available to staff members is of different quality, this would lead to differential utilization of staff members based on their past accuracy or ability. On the other hand, staff members' perceptions that they are being listened to and utilized are likely to affect their satisfaction with and commitment to the team. Research on decision influence has consistently found that greater subordinate influence leads to greater subordinate satisfaction (Graen & Cashman, 1975), greater satisfaction with the leader (Deluga & Perry, 1991), greater organizational commitment (Wakabayashi, Minami, Sano, Graen & Novak, 1980) and lower turnover (Graen & Ginsburgh, 1977; Graen, Liden & Hoel, 1982). Because staff members often are aware of the influence they and others in the team have on the leader, staff members' individual utilization as well as their relative level of utilization by the leader as compared to their fellow staff members may similarly influence their affect and commitment to the team. It may also affect other outcomes for staff members such as satisfaction with the leader, self-efficacy and task withdrawal.

Despite the prevalence of this dilemma in HTDE, there has been little theory or research aimed specifically at decision making in HTDE in general, or at the dilemma described above, in particular. In 1986, Brehmer and Hagafors wrote:

One possible reason why there has been so little psychological research on staff decision making may be that there has been no theory to guide research in this area, nor even a pretheoretical framework.
Indeed, there has not even been an experimental paradigm for the study of staff decision making. (p. 182)

Unfortunately, advancements in our understanding of the conflict have been limited since this statement was written in 1986. Brehmer and Hagafors (1986) were among the first to focus on HTDE. In an exploratory study they found that, in this context, leaders struggled to make accurate decisions when staff members provided judgments based on their subset of the available cues, particularly when the accuracy of the staff members differed.

Sniezek and Buckley have presented a paradigm for Judge-Advisor decision making focusing on the effect of a staff member's confidence in his/her own judgment on the leader's utilization of the staff member's judgment (Buckley & Sniezek, 1990, as cited in Sniezek & Buckley, 1995; Sniezek & Buckley, 1995). Hollenbeck et al. (1995) also focused on this type of hierarchical group, and proposed that the leader's appropriate utilization of staff members' judgments (called hierarchical sensitivity) is one of three core team-level constructs central to decision making accuracy. Although Hollenbeck et al.'s (1995) multilevel theory addresses the importance of the leader's staff utilization policy on decision performance, it is a theory of team decision making rather than of leader utilization of subordinate judgments (LUSJ). LUSJ is the degree to which a subordinate's judgments are utilized by the leader in the team decision making context, and is the focus of this dissertation.

While valuable for calling attention to LUSJ, the existing knowledge base is limited by two factors. First, most existing approaches to the understanding of LUSJ emphasize only the effects of LUSJ on team decision making accuracy (e.g., Brehmer & Hagafors, 1986; Hollenbeck et al., 1995; Sniezek & Buckley, 1995). However, as Hackman (1987) discusses, effectiveness has been shown to be a multidimensional construct, particularly in groups. In this dissertation, different types of LUSJ are addressed, as well as their implications for different aspects of team effectiveness. Second, conflicting findings of studies investigating LUSJ regarding leaders' propensity to differentially utilize staff member recommendations lend confusion rather than clarity to our understanding of the antecedents and processes of LUSJ. This confusion is important in that there are unique characteristics of these different research approaches that may shed light on the antecedents of LUSJ. These characteristics were explored further in this dissertation.

The purpose of this dissertation was to address the dilemma facing leaders of HTDE: the conflict between the decision accuracy requirements of differentially utilizing staff members according to their accuracy and the team viability costs associated with differentiating among staff members. This dissertation will also extend the literature on LUSJ by exploring the antecedents and consequences of LUSJ and by addressing the two limitations discussed above. In the next section, two paradigms relevant to LUSJ will be presented, the LUSJ construct will be developed, available literature addressing antecedents to LUSJ will be reviewed, and the literature on the consequences of LUSJ for multiple aspects of team effectiveness will be discussed.

BACKGROUND

Two decision making paradigms that are relevant to LUSJ are Brunswick's (1943; 1955; 1956) lens model of individual decision making and Brehmer and Hagafors' (1986) adaptation of this model to teams.
Each will be discussed in detail, then available literature addressing the antecedents and consequences of LUSJ will be reviewed.

Lens Model

Brunswick (1943; 1955; 1956) proposed a widely used lens model to illustrate the process by which an individual judge integrates n cues (predictors) into an overall judgment (e.g., see Sniezek & Reeves, 1986). A schematic of this model is presented in Figure 1.

[Figure 1. Brunswick's (1956) Lens Model of Individual Decision Making: cues X_1 through X_n are related to the criterion Y_e by ecological validities r_e,i and to the judge's decision Y_s by utilization validities r_s,i.]

The lens model of Figure 1 is based on the premise that individuals are rational decision makers. The lens model proposes that decision makers obtain information on the decision problem by consulting cues related to the decision criterion (Y_e). These cues have values that are intervally scaled, and have some linear relationship to Y_e, which is also intervally scaled. Leaders consult the relevant set of cues for any particular decision, assign appropriate weights to the cues, combine the weighted cues in some fashion, and reach an overall judgment. Overall system predictability is the ability of the set of cues to predict the criterion, and is the squared multiple correlation between Y_e and the n cues. Cue validity is r_e,i, the product-moment correlation between cue X_i and the criterion variable Y_e. Cue utilization is the extent to which the judge's selection of a response alternative, Y_s, is (linearly) related to cue X_i, and is represented by r_s,i. The descriptive model of the judge's strategy is therefore modeled by a least-squares multiple regression equation for explaining responses (or judgments) as a function of the n cue variables:

Y_s = b_{s0} + b_{s1}X_1 + \cdots + b_{sn}X_n    (1)

where b_{s0} is the Y intercept and b_{s1} through b_{sn} are the b weights associated with the judge's utilization of each X_1 to X_n cue. The descriptive model can then be compared to the normative model for predicting the criterion from the n cues:

Y_e = b_{e0} + b_{e1}X_1 + \cdots + b_{en}X_n    (2)

where b_{e0} is the Y intercept and b_{e1} through b_{en} are the b weights associated with the optimal utilization of each X_1 to X_n cue. The consistency index is the degree of linear agreement between cue values and judgment responses, and the achievement index is a measure of the linear correspondence between judgment responses and actual criterion outcomes (r_a = r_{YeYs}). The correlation between predicted values from the descriptive model (Eq. (1)) and the normative model (Eq. (2)) is called the matching index. The results of studies utilizing the lens model have suggested that individuals have greater difficulty learning and using cues when the cue-criterion relationship is non-linear and non-positive than they do learning and using cues with positive linear cue-criterion relationships (Brehmer, 1973; Deane, Hammond, & Summers, 1972; Slovic, Fischhoff & Lichtenstein, 1977; Sniezek & Naylor, 1978).
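To make these lens model quantities concrete, the following is a minimal numerical sketch of how the descriptive and normative models of Equations (1) and (2), the achievement index, and the matching index might be estimated with ordinary least squares. The weights, simulated data, and variable names are illustrative assumptions, not values or code from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 past decisions, n = 6 cues (all values illustrative only).
trials, n_cues = 200, 6
X = rng.normal(size=(trials, n_cues))                    # cue values X_1 ... X_n
b_env = np.array([0.8, 0.6, 0.4, 0.3, 0.2, 0.1])         # assumed environmental (optimal) weights
Y_e = X @ b_env + rng.normal(scale=0.5, size=trials)     # criterion values
b_used = np.array([0.9, 0.1, 0.5, 0.3, 0.0, 0.4])        # weights the judge actually applies
Y_s = X @ b_used + rng.normal(scale=0.5, size=trials)    # the judge's responses

def ols(X, y):
    """Least-squares intercept and weights, plus fitted values."""
    X1 = np.column_stack([np.ones(len(X)), X])
    b, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return b, X1 @ b

b_s, yhat_s = ols(X, Y_s)   # descriptive model, Eq. (1): estimated cue utilization weights
b_e, yhat_e = ols(X, Y_e)   # normative model, Eq. (2): estimated optimal cue weights

achievement = np.corrcoef(Y_s, Y_e)[0, 1]      # linear agreement of judgments with outcomes
matching = np.corrcoef(yhat_s, yhat_e)[0, 1]   # matching index: Eq. (1) vs. Eq. (2) predictions
utilization = [np.corrcoef(X[:, i], Y_s)[0, 1] for i in range(n_cues)]  # r_s,i for each cue

print(np.round(b_s[1:], 2), round(achievement, 2), round(matching, 2))
```

In this simulation the matching index falls below 1 because some cues are misweighted, and achievement is lower still because both the environment and the judge contain random error, which is the kind of decomposition the lens model is designed to expose.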
Social Judgment Theory

Brehmer and Hagafors' (1986) paradigm for the study of staff (or distributed expertise) decision making discussed earlier is based on an extension of the general "lens model" paradigm presented above. The general characteristics of this paradigm are outlined in Figure 2.

[Figure 2. Brehmer and Hagafors' (1986) Team Lens Model: the judgments of subordinates A, B, and C feed the leader's decision, which is compared with the correct decision.]

As diagrammed, the team decision maker, or leader, has three experts on his/her staff, each with specialized knowledge pertaining to a subset of the decision cues. Each expert is responsible for two of the six cues and arrives at a judgment based on his/her interpretation of these cues. The experts can therefore serve to reduce the six-cue judgment task to a three-cue task for the team leader. This paradigm can easily be extended to any number of cues or experts. The leader's task is to make the team decision for the particular decision making problem using one of three strategies: relying on the experts; ignoring the experts and coping directly with the six-cue task; or using some combination of these two strategies. The team leader must therefore first decide whether or not s/he wants to rely on each expert at all, then determine what relative weight to give to each expert.

Brehmer and Hagafors' (1986) paradigm focuses attention on the cognitive aspects of leader decision making, and offers a preliminary framework from which to approach decision making processes in teams. However, this framework does not directly address the factors influencing the leader's ultimate utilization of the judgments provided by the experts, nor does it address the consequences of LUSJ. These deficiencies will be addressed later via a review of the literature on the antecedents and consequences of LUSJ.

Types of Leader Utilization of Staff Member Judgments

At the core of LUSJ lies the actual weight a leader gives the judgments provided by a subordinate to assist in the leader's decision making. These weights can be conceptualized in several ways and at different levels of analysis. Different types of LUSJ speak to different types of outcomes, which will be considered in detail later.

Dyadic LUSJ. The first way of evaluating leader weighting of staff judgments that will be considered in this dissertation is dyadic LUSJ. Dyadic LUSJ is the weight the leader gives a staff member's judgments independent of the other staff members. Dyadic LUSJ is the most basic type of LUSJ, and all other bases for evaluating the leader's utilization of staff judgments build upon it.

Dyadic LUSJ Accuracy. A second basis for evaluating the leader's utilization of a staff member pertains to the accuracy of the LUSJ weights given to each staff member's judgments. Dyadic LUSJ accuracy is the difference between a staff member's actual dyadic LUSJ weight and the staff member's appropriate dyadic LUSJ weight. Because each staff member can have a different dyadic LUSJ accuracy score, this construct is also at the dyadic level of analysis. Team-level LUSJ accuracy, termed hierarchical sensitivity by Hollenbeck et al. (1995), is the average of the dyadic LUSJ accuracy scores for each of the leader's staff members. Higher hierarchical sensitivity scores reflect greater misweighting of the staff. Because each leader (and therefore each team) will have only one team-level LUSJ accuracy score, this construct is at the team level of analysis.

Relative Dyadic LUSJ. Another way to conceptualize leader utilization of staff judgments concerns the weight a leader gives a subordinate's judgments relative to the
Staff members with higher relative dyadic LUSJ scores have therefore been given greater influence by the leader in the leader's decisions than staff members with lower relative dyadic LUSJ scores. As each staff member has a unique relative dyadic LUSJ score, this construct is at the dyadic level of analysis. Dyadic LUSJ Variability. Another way to conceptualize leader utilization of staff judgments is the variability in dyadic LUSJ that exists in a leader's dyadic LUSJ weights across his/her staff members. For example, does the leader tend to weight all staff members the same, or is there wide variation in his/her utilization of various staff members? A leader who weights all staff members equally would have a low dyadic LUSJ variability score. Alternatively, a leader who weights one staff member low, another moderately, and a third high would have a high dyadic LUSJ variability score as the dyadic LUSJ weights differ across staff members. Each leader will have only one dyadic LUSJ variability score, making this a team level construct. As illustrated by the fact that the same low value of dyadic LUSJ variability can result from a leader giving the staff all high or all low weight, this is an index of variability, not of level. The interaction of the average of a leader's dyadic LUSJ across all staff members with the leader's dyadic LUSJ variability would reflect both the level and variability of a leader's dyadic LUSJ weights across staff members, should this information be desired. 14 Team Effectiveness Consequences of Leader Utilization of Staff Judgments Effectiveness has been shown to be a multidimensional construct, particularly in teams (Hackman, 1987). Hackman (1987) identified three effectiveness criteria of teams: (a) output quality, (b) member need fulfillment, and (c) team viability, or the capability of members to work together on subsequent team tasks. Sundstrom, De Meuse and Futrell (1990) similarly defined work team effectiveness as both performance and viability, which includes at a minimum members' satisfaction, participation, and willingness to continue working together. Hackman and Oldham (1980) also recognized the possibility that unresolved conflict or divisive interaction can leave members unwilling to work together. The consideration of social and personal criteria in teams is becoming more common as their central role in long- term team performance and viability is increasingly recognized. It is important for theories of team effectiveness to consider multiple outcomes, including performance and team viability as well as reactions on the part of individual staff members such as satisfaction, turnover and commitment. The importance of each of these effectiveness outcomes and the role that different types of LUSJ may play in influencing each of these outcomes will be discussed in the following paragraphs. Decfion Accurm Because of the detrimental effects of complexity and large information loads on information processing (Kahneman, Slovic & Tversky, 1982; Simon, 1978), relying on staff judgments to reduce the cognitive demands of the task may lead to more accurate decisions even if the experts' accuracy is less than perfect (Brehmer & Hagafors, 1986). 15 Hackman's (1987) normative model discussed earlier focuses on the potentially malleable aspects of a group and its environment that are factors promoting effectiveness. 
Among other things, this model addresses the process loss, or failure of a group to reach its potential, that can result from soliciting and weighting member contributions in a way that is inconsistent with members' expertise. Hackman proposes that to the extent a group is able to weight members' contributions appropriately, the group will take better advantage of its resources and will perform more effectively. While not explicitly addressing the processes by which a group can better identify and utilize the contributions of its members, Hackman's (1987) model clearly highlights the importance of appropriate LUSJ at both the dyadic and team levels in reducing process losses and improving decision accuracy. Bottger and Yetton (1988) also proposed and found that effective problem solving group performance depended on the group's strategies in utilizing the group's resources, which include member abilities.

The ability of dyadic LUSJ accuracy to influence decision accuracy is likely to depend on many factors. For example, if staff members each simply summarize all of the relevant decision information for the leader (i.e., the team is not characterized by distributed expertise), or if the judgments of one staff member are extremely highly correlated with the criterion, the leader only needs to appropriately weight one staff member to achieve high decision accuracy. Vroom and Yetton (1973) suggest that the appropriateness of utilizing staff judgments is dependent in part on the staff's knowledge of the decision problem, or ability to contribute to the decision at hand. If a staff member's recommendations are grossly inaccurate, relying on them does not help the leader accurately process larger amounts of decision-related information. Thus, in all types of teams, not utilizing poor staff members can be as critical to decision performance as appropriately weighting good staff members. Also, if the leader cannot obtain or process enough of the information relevant to the decision by him/herself, even moderate dyadic LUSJ accuracy can have significant positive effects on decision performance. Thus, as Brehmer and Hagafors (1986) describe, relying on staff judgments can lead to more accurate decisions on the part of the leader even if staff accuracy is less than perfect. It is the appropriate weighting of staff judgments based on each judgment's accuracy that is likely to lead to higher decision performance.

Research has supported the proposition that the ability of equal-status group members to assess fellow group members' judgment accuracy affects group performance (Bottger & Yetton, 1988; Libby, Trotman, & Zimmer, 1987). It seems likely that the ability of a leader to appropriately assess and weight staff judgments will similarly affect decision making accuracy. A study by Hollenbeck et al. (1995) found that staff validity can positively affect the leader's decision making performance, and that a similar effect may exist for the appropriateness of the staff weighting strategy used by the leader. Hollenbeck et al. (1995) also found that, in four-person teams, the interaction of greater staff validity and more appropriate leader weighting of staff information in making the final decision led to higher leader decision making performance than the presence of either staff validity or appropriate leader weighting alone.
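To make the LUSJ indices defined above concrete, the following is a minimal sketch of how dyadic LUSJ, dyadic LUSJ accuracy (and its team-level average, hierarchical sensitivity), relative dyadic LUSJ, and dyadic LUSJ variability might be operationalized as regression weights. The simulated data, the equal-weighting leader, and the use of absolute differences are illustrative assumptions; the dissertation's own measures (see the Measures sections of Chapters 3 and 4 and Appendix A) may be computed differently.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical team: one leader, three staff members, 60 decision trials.
trials, n_staff = 60, 3
truth = rng.normal(size=trials)                       # the correct decision on each trial
validity = np.array([0.9, 0.6, 0.3])                  # staff members differ in judgment accuracy
staff = np.column_stack([v * truth + np.sqrt(1 - v**2) * rng.normal(size=trials)
                         for v in validity])          # each column is one staff member's judgments
leader = staff @ np.array([1/3, 1/3, 1/3]) + rng.normal(scale=0.2, size=trials)  # equal-weighting leader

def reg_weights(judgments, outcome):
    """OLS weights of an outcome on a set of judgments (intercept dropped)."""
    X1 = np.column_stack([np.ones(len(judgments)), judgments])
    b, *_ = np.linalg.lstsq(X1, outcome, rcond=None)
    return b[1:]

dyadic_lusj = reg_weights(staff, leader)      # weight the leader actually gives each staff member
optimal_lusj = reg_weights(staff, truth)      # weights that would best predict the correct decision

dyadic_accuracy = np.abs(dyadic_lusj - optimal_lusj)   # per-staff-member misweighting
hierarchical_sensitivity = dyadic_accuracy.mean()      # team-level LUSJ accuracy (higher = more misweighting)

# Relative dyadic LUSJ: each member's weight compared with the mean weight of the other members.
others_mean = (dyadic_lusj.sum() - dyadic_lusj) / (n_staff - 1)
relative_lusj = dyadic_lusj - others_mean

lusj_variability = dyadic_lusj.std()          # spread of the leader's weights across the staff

print(np.round(dyadic_lusj, 2), round(hierarchical_sensitivity, 2), round(lusj_variability, 2))
```

Because the simulated leader weights all three staff members equally while their optimal weights differ, the sketch yields low dyadic LUSJ variability but a comparatively high hierarchical sensitivity score, which is exactly the misweighting pattern at issue in the leader's dilemma.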
The implication of LUSJ for decision accuracy is that, under high information load conditions, decision performance may increase if leaders correctly utilize staff judgments in arriving at a final decision (dyadic LUSJ accuracy).

Staff Development. Heller (1992) presented a staff development approach to decision influence. Heller posited that allowing subordinates greater decision influence can increase organizational efficiency by making better use of staff members' existing competence (experience and skill), developing new competencies, and liberating dormant motivation in staff members. The author is not aware of any data on the accuracy of this proposition. Also related to staff development are the Pygmalion and Golem effects. The Pygmalion effect occurs when raising a leader's performance expectations of a subordinate results in an increase in that subordinate's performance (Rosenthal & Jacobson, 1968). The Golem effect refers to the negative effect of low leader expectations on subordinate performance, or performance improvements lower than those otherwise attainable (Babad, Inbar, & Rosenthal, 1982). Research on the Golem and Pygmalion effects has consistently found the predicted effects: low leader expectations result in restricted subordinate achievement (Oz & Eden, 1994; see Rosenthal, 1985, 1991 for reviews) and compromised staff development. Because consistently low judgment weights can reflect a leader's low performance expectations of a staff member, staff members who are weighted low by the leader (low dyadic LUSJ), or even just lower than their teammates (low relative LUSJ), might show smaller performance improvements or lower self-efficacy than if their judgments were given higher weights by the leader. Receiving high judgment weights might be interpreted by the staff member to mean that the leader has high expectations for that staff member and might lead to greater performance improvements or self-efficacy. The role of LUSJ in staff development illustrates its importance to team effectiveness.

Staff Members' Reactions to the Leader's Utilization of Their Judgments. Vertical dyad linkage theory (VDL), or leader-member exchange theory (LMX), describes how role-making processes result in leaders developing different types of relationships with different staff members, including different amounts of decision influence (Cashman, Dansereau, Graen & Haga, 1976; Graen & Cashman, 1975; Graen & Scandura, 1987). Findings concerning leadership differences across staff members have suggested that the different relationships a leader establishes with different staff members can influence staff member satisfaction, commitment (Graen, Liden & Hoel, 1982; Katerberg & Hom, 1981; Vecchio, 1982) and turnover (Graen & Ginsburgh, 1977; Graen, Liden & Hoel, 1982). Much of the LMX research has investigated the relationship between satisfaction and commitment and a global measure of LMX, of which decision influence is only a part. Further research is necessary to identify the unique influence of various components of LMX, such as decision influence, on staff member satisfaction and commitment in hierarchical decision making teams.
Because decision influence can differ across the staff of a hierarchical team with distributed expertise, it is possible that the degree to which a leader utilizes a staff member's judgments (dyadic LUSJ, relative dyadic LUSJ, and dyadic LUSJ accuracy) will also have implications for that staff member's reactions that are different from the reactions of the other staff. Research on groups and individuals suggests that decision influence is positively related to increased satisfaction and commitment (Drake & Mitchell, 1977; Vroom, 1964; Wood, 1973). Preliminary research has found that greater upward staff member influence effectiveness is related to greater staff member satisfaction with the supervisor (Deluga & Perry, 1991; Scandura, Graen & Novak, 1986) and greater overall satisfaction (Drake & Mitchell, 1977). Although research on the relationship between decision influence and satisfaction in HTDE is limited, these results suggest that decision influence may affect staff reactions in these teams as well.

Preliminary research in the area of strategic management teams indicates that the manner in which leaders elicit, receive, and respond to team members' input affects team members' attitudes toward the leader and other team members, and their commitment to the decision made by the leader (Korsgaard, Schweiger & Sapienza, 1995). Although they did not examine the processes involved, Kim and Mauborgne (1993) found a relationship between subsidiary managers' reactions to the multinational's strategic decision processes and their cooperation in implementing the multinational's strategic decisions. This research suggests that staff members who are dissatisfied with the way their judgments are utilized by the leader may have negative reactions toward the leader and the team, and, if possible, may interfere with the successful implementation of the leader's decision. Strategic management teams differ from the type of team of interest here in that their members are responsible for the execution of the leader's decisions. In the types of teams of interest in this dissertation, however, staff members' involvement in the decision ends once the leader's final decision is made; there is no staff implementation of the decision as exists in strategic teams. This difference is important because the degree to which the leader and the staff members have the same goals, and the degree to which there is more than one correct solution, might influence staff reactions to being utilized by the leader.
Advisors in strategic teams might therefore react more negatively than advisors in teams in which the leader's decision does not differentially affect the advisors when their judgments are utilized less by the leader than the judgments of the other staff. In the types of teams of interest here, however, the only differential treatment across staff members that can occur is in dyadic LUSJ. In such teams, the only implications of the leader's decision for the staff are the consequences of team decision accuracy, which are shared by all team members. Unlike staff members of strategic teams who may be motivated to influence the leader's decision for personal gain (e. g., receiving a larger share of the overall budget), staff members and leaders in HTDE have the same goal: an accurate decision. Because of this shared goal (leader decision accuracy), staff may not react as negatively as staff in strategic management teams to leader underutilization of their judgments, as long as team goals are being met. 21 The literature on participative decision making also suggests that participation in decisions improves satisfaction (Miller & Monge, 1986) and reduces turnover, absenteeism, and conflict, although the size of the effect has been shown to depend on the type of participation investigated (Locke & Schweiger, 1979; Wagner, 1994). The greater the degree of actual influence, the stronger the effect. Many investigations of the consequences of decision participation for staff members have considered individuals' perceived rather than their actual influence, resulting in the data being obtained from the same respondents using the same questionnaire at the same time (percept-percept procedures). Attributed influence has been shown to be only weakly correlated with actual influence (March, 1956), however. A meta-analysis by Wagner & Gooding (1987a) found a correlation of .39 for studies measuring both participation and its outcomes when the data were obtained from percept- percept procedures and a correlation of .12 when multiple sources were used to gather the data. Wagner and Gooding (1987a; 1987b) discuss the possibility that percept-percept bias might inflate the effect size between participation and its outcomes, and suggest that the actual effect size of participation may be substantially lower than frequently believed. Team Viability. Team effectiveness involves more than just output or performance. Equally important are the group's future prospects as a work unit (Hackman, 1987; Hackman & Oldham, 1980; Sundstrom, De Meuse & Futrell, 1990), or team viability. Sundstrom, De Meuse and Futrell define team viability as member satisfaction, participation, and willingness to continue working together. A team that is unable or unwilling to work together in the future cannot be said to be effective, regardless of its performance level. 22 Components of team viability have been identified at both the team (cohesiveness, teamwork, problem-solving, planning and communication) and staff member (job and group satisfaction, participation, peer and leader relations, job clarity, absenteeism and turnover) levels of analysis (Sundstrom, De Meuse & F utrell, 1990). The effects of individual staff member reactions (e.g., turnover, job or group satisfaction, participation, and absenteeism) can be seen at the team level (e. g., dissolving of the team, high member replacement recruitment and training costs, poor quality decisions, low productivity, low staff member satisfaction with the leader or group). 
Thus, although team viability is a team level construct, it is highly dependent on individual staff member reactions. In the case of HTDE, staff reactions to the leader's utilization of their judgments are important as these reactions may affect future team processes as well as the future existence of the team. Staff members' reactions to low LUSJ may include withdrawal behaviors from the task (e.g., less participation) as well as from the team (e. g., absenteeism, turnover). For instance, a staff member who feels the recommendations s/he is providing the leader are not being utilized may stop making them (Foushee, 1984). Should the leader wish to utilize a recommendation of that staff member in the future, it may not be available. If a dissatisfied staff member is absent from or quits the team, additional resources must be expended by the team to recruit, select, and train a replacement. If each team member holds a unique role in the team (e.g., in the case of distributed expertise), team performance may suffer until the replacement is fully knowledgeable about his/her task and role within the team. If several or all staff members withdraw from the team or quit, the team may be dissolved. Team viability is 23 thus a critical aspect of team effectiveness, and is highly dependent on individual staff member reactions, including staff member responses to LUSJ. Summm of Consequences of LUSJ Decision making teams are a common way of coping with the demands of complex decision problems. Preliminary evidence has supported the proposition that utilizing staff judgments according to their accuracy can improve leader decision performance. The fact that increased decision participation and decision influence may be related to greater satisfaction and commitment, however, creates a dilemma for leaders. Differentially weighting staff members according to their accuracy might increase decision performance at the expense of the satisfaction, commitment, and development of the staff member(s) who have low LUSJ or low relative LUSJ. However, if the team performs poorly, equally weighting staff members might also jeopardize team viability. This potential tradeoff between team performance and team viability indicates that research on LUSJ is important in understanding and improving team effectiveness. More detailed investigations of the consequences of influence in teams need to be performed before any real conclusions can be drawn, as differences in the nature of the teams studied (e.g., decision making and implementation vs. decision making as an end in itself, etc.) and differences in the role of staff judgments in team decision making processes (e.g., central vs. peripheral) may limit the replicability of previous findings. Antecedents of Leader Utilization of Staff Judgments Although theory and research on LUSJ in teams is relatively limited, the literature addressing the extent to which an individual is influenced by the judgments of others in 24 making a decision in both group and individual contexts will be discussed in the following paragraphs. Leader Decision Making Theories. Although several group, team, and leadership theories mention the importance of LUSJ in influencing team effectiveness and team processes (e.g., Blake & Mouton, 1964; Hackman, 1987; Heller, 1992; Heller & Yukl, 1969; Vroom & Yetton, 1973), few attempt to describe the process through which LUSJ happens. 
Four theories or paradigms in this area introduced to date are Brehmer and Hagafors' (1986) team lens model, leader-member exchange theory (Graen, Liden & Hoel, 1982), Sniezek and Buckley's (1995) work on judge-advisor systems, and Hollenbeck, et al.'s (1995) multilevel theory. As discussed earlier, Brehmer and Hagafors (1986) introduced a paradigm based on social judgment theory for the study of leaders of teams consisting of a staff of experts. This paradigm suggests that over multiple decisions leaders analyze the performance of their staff and incorporate their judgments about prior staff performance in their weighting of the staff members’ current judgments. The only study that has been reported using this paradigm found that leaders had difficulty accurately weighting their staff unless staff validities called for an equal-weighting strategy across staff members (Brehmer & Hagafors, 1986). These findings suggest that the norm of equal treatment (equal weighting) may be one from which leaders have difficulty deviating, even when the leader does not know the staff very well. Leader-member exchange theory (LMX) (Graen, Liden & Hoel, 1982), an extension of the vertical-dyad linkage model (Danserau, Cashman, & Graen, 1973; Danserau, Graen & Haga, 1975; Graen & Cashman, 1975), posits that leaders select "in- 25 group" staff members to whom to allow greater decision influence and latitude. These in- group subordinates have been shown to be chosen on the basis of their ability and willingness to accept extra-role responsibilities (Graen & Scandura, 1987; Scandura, Graen & Novak, 1986). The quality of the leader-member exchange reflects the degree of influence and latitude allowed by the leader in performing job responsibilities. LMX was designed to apply to situations in which a leader can select in-group staff members to whom to allow greater job latitude and decision influence. These situations do not necessarily involve decision making, however, and the groups may not consist of members with distributed expertise. In the teams of interest in this dissertation, the contributions of all staff members are important in that each is responsible for a specific subproblem unique to his/her role. To the extent that a leader can still ignore or differentially weight staff recommendations, a certain amount of discretion in terms of the influence and latitude given to individual staff members is possible in the type of teams of interest here. For example, the fact that the leader of a HTDE can choose not to utilize a poorly performing staff member and can choose to heavily utilize a good performing staff member, relationships of differential influence can be established. Sniezek and Buckley (1995) have presented a paradigm on judge-advisor systems (JASs) that addresses leader decision making in groups when at least one person is in the role of advisor and formulates judgments or recommends alternatives that are then communicated to the person in the role of judge. Experts in JASs do not necessarily possess specialized knowledge or differential expertise, which differentiates them from the teams of central interest here, but the judge has the responsibility for making the final decision and the decision has consequences for both the advisors and the judge. 
26 Two assumptions of JASS are that confidence in one's own judgment is a mechanism of influence between advisors and the judge during the decision making process, and that social influence is mediated by the subjective uncertainty of the leader about the correct decision (Sniezek & Buckley, 1995). These propositions have received preliminary support (Sniezek & Buckley, 1995). As discussed earlier, Hollenbeck et al.'s (1995) multilevel theory addresses the importance of the leader's staff weighting policy on decision performance, and is a theory of team decision making rather than of leader utilization of subordinate judgments (LUSJ). Constructs originating from the social system (e.g., group cohesion), roles (e. g., role conflict) and behavior settings (e.g., physical proximity between leaders and staff) are said to affect constructs such as the leader's average dyadic LUSJ accuracy across all staff members (termed "hierarchical sensitivity" by Hollenbeck et al.). The multilevel theory incorporates a categorization scheme from McGrath (1976) rather than theorizing about the determinants of and processes involved in LUSJ. Thus, while Hollenbeck et al. suggest some causes of LUSJ, this theory's real contribution to the understanding of LUSJ is its illustration of its critical role in decision accuracy. Conflicting Findings Regarding the Antecedents of LUSJ. Although the importance of recognizing and utilizing expertise in teams has been both recognized and demonstrated, relatively little is known about conditions that affect LUSJ. Very limited research has investigated the ability of leaders to differentially and accurately weight staff judgments. Additionally, the literature that does exist on this topic has reported conflicting findings regarding leaders' ability to differentially weight staff members. For example, Brehmer and Hagafors (1986) found that leaders had difficulty differentially 27 weighting their staff as much as they should have, and tended to utilize an equal weighting strategy. Research on VDL and LMX theory, however, has consistently suggested that leaders quickly seek out and develop relationships of differential influence with different subordinates, implying that leaders have little difficulty differentially utilizing their staff (Danserau, Graen & Haga, 1975; Graen & Cashman, 1975; Scandura, Graen & Novak, 1986). The research available on the JAS paradigm (Sniezek & Buckley, 1995) discussed above also found that leaders differentially weighted their advisors. In sum, although some research suggests that leaders tend not to discriminate across staff members (e.g., Brehmer & Hagafors, 1986), other research suggests that leaders do discriminate across staff members (Sniezek & Buckley, 1995), and even seek out relationships with subordinates involving different levels of decision influence (Graen & Scandura, 1987; Scandura, Graen & Novak, 1986). Research has suggested that factors such as a subordinate's ability and a leader's liking of that subordinate (Danserau, Graen & Haga, 1975; Graen & Cashman, 1975; Scandura, Graen & Novak, 1986), the subordinate's past performance (Bottger & Yetton, 1988; Hollenbeck et al., 1995), and the leader's and subordinate's confidence in the accuracy of their own judgments (Sniezek & Buckley, 1995) can affect the decision influence a leader allows a staff member. This confusion is important, as the ability of leaders to appropriately discriminate among staff members can be crucial to decision accuracy. 
Resolvigg the Conflict: The Effects of Staff Past Accuracy and Confidence in Judgment. In an effort to identify the appropriate weight to give each staff member's judgment, leaders may rely on indicators of staff member accuracy in deciding how to 28 weight their staff. For example, a staff member's past accuracy might be considered a reflection of a staff member's current judgment accuracy. Staff member judgment confidence might be considered an indicator of the staff member's perception of the likelihood that the staff member's judgment will be accurate. The possibility that the conflicting findings of Brehmer and Hagafors (1986), Sniezek and Buckley (1995) and LMX research can be explained as a result of differences in the availability of the staff accuracy indicators of staff member judgment confidence and the past accuracy of each staff member was explored in this dissertation. Initial investigations have been made into the implications of confidence in decision making for the behavior of individuals. As Sniezek and Buckley (1995) describe, however, "although empirical observation of confidence in individual judgment and choice has increased substantially in recent years, there has been little interest in linking confidence to behavior" (p. 106). Research in this area has found that a staff member's confidence in his/her judgment was strongly related to his/her ability to get the leader to choose his/her recommendation regardless of its accuracy (Sniezek & Buckley, 1995). Judges given the confidence assessments of their two advisors tended to accept the recommendation of the more confident advisor when the two advisors disagreed. Sniezek and Buckley's results provide support for their assumption that confidence is a mechanism of influence between advisor and judge, and suggest that leader knowledge of subordinate judgment confidence (assuming the confidence level differs across staff members) might stimulate leaders to discriminate across staff members. Confidence has been shown to enhance one's influence on others while uncertainty makes one more susceptible to influence from others (Sniezek & Buckley, 29 1995). A staff member with a high level of confidence in his/her own judgment has been shown to be given greater influence by the leader (Buckley & Sniezek, 1990; Deutsch & Gerard, 1955). Deutsch and Gerard's (1955) social decision making theory suggests that a leader's confidence about the correctness of staff members' judgments (which can be influenced by staff members' own confidence in their judgment accuracy) moderates the relationship between a leader's confidence in his/her own judgment and LUSJ. A staff member's judgment confidence may be interpreted by the leader as an indication of the accuracy of the staff member's judgment. The availability of staff judgment confidence might therefore help to explain why Sniezek and Buckley's leaders showed a greater propensity to discriminate across staff judgments than leaders in Brehmer and Hagafors' (1986) study. LMX research is typically characterized by field studies in which the leader and staff have worked together for a period of time and the leader has been able to observe each staff member's performance over time and on a variety of tasks. The longitudinal nature of the relationship between the leader and staff members implies that these leaders have had multiple opportunities to observe staff members' performance over time and across tasks. 
When available, knowledge of staff members' past performance might be used by the leader as an indicator of current staff member accuracy, and influence the differential weighting of staff members accordingly. Research on the ability of leaders to identify and utilize their most competent staff members has indicated that high past performance does serve to increase the weight that person's input is given by the leader (Croner & Willis, 1967; Hollenbeck et al., 1995; Kelman, 1950). Poor staff performance has been shown to lead to more autocratic 30 behavior on the part of the leader in general (Heller & Yukl, 1969) and less delegation (Dewhirst, Metts, & Ladd, 1987, as cited in Yukl, 1989). Low staff member judgment accuracy might similarly lead to lower utilization of that staff member's judgments by the leader. In an analogous fashion, it seems likely that staff members who perform well on a task may be more likely to be given increased decision influence by the leader in the future. Vertical-dyad linkage theory has supported the proposition that staff members who perform well on a task may be more likely to be given increased decision influence by the leader in the future (Croner & Willis, 1967; Hollenbeck et al., 1995; Mausner, 1954a; 1954b). Other research has supported the proposition that people tend to use a relative- weight averaging model when combining the judgments or recommendations of others, where the weights are primarily determined by source credibility (Bimbaum, Wong & Wong, 1976; Birnbaum & Stegner, 1979). To the extent that confidence and past performance might be interpreted by the leader as a reflection of the accuracy of the current recommendation, it is possible that a leader's knowledge of these factors will enable him/her to discriminate across staff members in weighting staff judgments and making a final decision. Further theorizing and research is needed to determine the effects of a leader's knowledge of subordinate judgment confidence and past performance on leaders' decisions to utilize a staff member at all (dyadic LUSJ), in addition to their effects on leaders' discrimination across staff members in assigning weights to subordinate judgments (variance in LUSJ), and to appropriately assign these weights (dyadic LUSJ accuracy). 31 Steiner (1972) suggested that one reason why groups misweight the input of their individual members is that proficient members may have low confidence in their own ability to perform the task. On the other hand, researchers have consistently found that humans tend to be overconfident in the accuracy of their decisions (Fischhoff, Slovic & Lichtenstein, 1977; Gigerenzer, 1991; Lichtenstein & Fischhoff, 1977; Sniezek, Paese & Switzer, 1990). Interestingly, Sniezek and Buckley (1995) reported a nonsignificant correlation of .15 between advisors' accuracy and their confidence ratings. The fact that advisor confidence and performance may not be highly correlated suggests that while they might each contribute to leaders' discrimination across subordinates in assigning weights to staff judgments, they may contribute differently to leaders' ability to use this information to accurately discriminate across staff members. Thus, the extent to which the availability of both confidence and past performance enable leaders to discriminate in an m sense (LUSJ variance) as well as to appropriately weight staff members in such a way that decision accuracy is maximized (LUSJ accuracy) was investigated in this dissertation. Limitations. 
Past research addressing the utilization of the information provided by others in making decisions has suffered from several limitations. First, of the literature that exists on social decision making, most is concerned with groups whose members' roles are undifferentiated (Sniezek, 1992; Sniezek & Buckley, 1995). Yet role differentiation is an integral aspect of teams, and has even been observed in ad hoc experimental groups (Bales & Cohen, 1979 as cited in Sniezek and Buckley, 1995). Second, research on LUSJ has fi‘equently suffered from a methodological limitation. Empirical efforts directed at LUSJ have not always controlled for the leader's 32 independent knowledge or personal judgment prior to receiving stafi judgments when computing dyadic LUSJ (e.g., Brehmer & Hagafors, 1986; Hollenbeck et al., 1995). As discussed earlier, this poses a potential problem in interpreting previous research results as these factors can influence the results of the regression equation identifying the level of dyadic LUSJ. To the extent that the leader's preliminary judgment prior to receiving staff judgments or the information known by the leader is collinear with the information available to a staff member, the staff member's dyadic LUSJ weight will be inflated if the leader's initial judgment prior to receiving staff judgments is not controlled. In this dissertation, this was controlled by giving leaders no one information other than the summary judgments provided by staff members, making leaders completely dependent on staff members for decision information. Summm Despite a vast amount of theory and research on teams and leadership, as well as group and individual decision making, very little is known about how leaders of HTDE utilize the information provided by staff members in making decisions. Theory and research consistently point to the importance of the accurate weighting of staff judgments in ultimate team and decision making effectiveness, but rarely have the factors and processes through which this weighting is determined, nor the consequences of this weighting, been addressed. In fact, an important conflict exists in the literature addressing leaders' tendency and ability to discriminate across staff members in weighting staff judgments. Interestingly, however, unique elements of each study provide a means of generating and testing potential antecedents to LUSJ. The past performance and judgment confidence of 33 each staff member might influence leaders' weighting of individual staff members (dyadic LUSJ) and ability to discriminate across staff members (variance in LUSJ), when some variability exists across staff members in terms of past performance and judgment confidence. The ability of staff confidence and past performance information, both individually and jointly, to enable a leader to accurately weight staff members' judgments (dyadic LUSJ accuracy) also has yet to be determined. Sniezek and Buckley's (1995) findings of a nonsignificant correlation between advisor confidence and performance suggest that the two constructs might have different effects on leaders' ability to accurately discriminate across staff judgments. The unique and combined ability of staff confidence and past judgment accuracy information to affect a leader's dyadic LUSJ, dyadic LUSJ variability, and LUSJ accuracy were investigated in this dissertation. Chapter 2 RESEARCH PROBLEM AND HYPOTHESES At the core of HTDE lies a leadership dilemma. 
On one hand, making accurate decisions frequently requires differentially weighting staff members' judgments according to their accuracy. On the other hand, there is reason to believe that differentially weighting staff input may lead to negative reactions on the part of subordinates. If the team performs poorly, however, even an equal weighting strategy may jeopardize team viability. A review of the available literature on individual, group and team decision making, as well as team effectiveness, identified a number of theories and issues that have direct applicability to how leaders use staff member information, and the consequences of LUSJ. The conclusions that can be drawn from this literature are limited, however, due to the conflicting findings regarding the antecedents of LUSJ and the limited research that has been done on the short- and long-term consequences of LUSJ for staff members. Thus, the purpose of this dissertation was twofold. First, it identified unique elements of the studies that have been performed on HTDE and tested whether providing the leader with staff members' past accuracy and current judgment confidence levels affected different types of LUSJ. Second, the consequences of different types of LUSJ for staff members were explored. 34 35 Antecedenth LUSJ Staff Member Past Judgment Accurac_y_and Judgment Confidence. Because the primary objective of the type of team of interest in this dissertation is to maximize decision accuracy, leaders may rely on indicators of staff accuracy in assigning LUSJ weights. As suggested by the literature review, the staff member characteristics of past judgment accuracy and judgment confidence may influence the leader's ability to accurately discriminate across staff members in assigning weights to staff judgments. Bottger and Yetton (1988) found that members influence group decisions in proportion to their ability. The past performance or competence of an individual on the same or on a related task has been shown to increase the influence s/he is given by a decision maker (Croner & Willis, 1967; Hollenbeck et al., 1995; Kelman, 1950; Mausner, 1954a; Mausner, 1954b). LMX research has also identified a tendency of leaders to establish relationships of differential influence with subordinates (Cashman, Dansereau, Graen & Haga, 1976; Graen & Cashman, 1975; Graen & Scandura, 1987). As past judgment accuracy can be considered a reflection of ability as well as of current judgment accuracy, it might similarly influence the weight a staff member is given by the leader. Leaders of HTDE may not always be aware of staff members' cumulative past judgment accuracy, however. Leaders of HTDE are confronted with a large amount of information, which might limit their ability to keep track of the performance of each staff member. Information not available to the leader is unlikely to influence the leader's decisions. Feeding back staff members' cumulative past judgment accuracy to the leader gives the leader information that can serve as a basis on which to rate the likely accuracy of current staff judgment accuracy and to weight staff members' judgments as a whole. 
36 Because the availability of cumulative staff past judgment accuracy information enables the leader to more easily assess the task ability of each staff member, when past judgment accuracy differs across staff members, and differential weighting (as opposed to equal weighting) is the appropriate staff weighting strategy, it is proposed that: Hypothesis la: The availability of staff members' cumulative past judgment accuracy will result in a wider variability of dyadic LUSJ weights (dyadic LUSJ variability) than if staff judgment accuracy information is not available to leaders. Hypothesis 1b: A staff member's cumulative past judgment accuracy level will interact with the availability of the staff member's past judgment accuracy to the leader to influence the leader's utilization of that staff member's judgments (dyadic LUSJ) (see Figure 3 for a summary). 1: When the leader is provided a staff member's cumulative past judgment accuracy, higher past judgment accuracy will be positively related to higher dyadic LUSJ. 2: When the leader is not provided a staff member’s cumulative past judgment accuracy level, higher past judgment accuracy will be less positively related to dyadic LUSJ than when this information is provided to the leader Dyadic LUSJ High Low 37 Staff Cumulative Past Judgment Accuracy Available Staff Cumulative Past Judgment Accuracy Not Available Low High Staff Past Accuracy Level Figure 3. Hypothesized Staff Member Past Judgment Accuracy Level by Availability of Staff Cumulative Past Judgment Accuracy lnforrnation to Leader Interaction on Dyadic LUSJ 38 Staff member past judgment accuracy can be considered an indicator of staff member ability and the likelihood that the staff member's current judgment will be accurate. Feeding back staff past judgment accuracy information to the leader therefore gives the leader objective information that can be used to rate the likelihood that the staff member will be accurate on the current decision. When staff member judgment accuracy is relatively constant, and differential weighting (as opposed to equal weighting) is the appropriate staff weighting strategy (either because of differences in the validity of the information available to each staff member or because of differences in staff members’ ability), this information may therefore improve the accuracy of the leader's staff weighting strategy. It is proposed that: Hypothesis 1c: The availability of staff members’ cumulative past judgment accuracy to the leader will be positively related to the accuracy of the leader's utilization of staff members’ judgments (LUSJ accuracy). The degree to which decisions are influenced by the judgments of others has also been shown to depend largely on the amount of uncertainty associated with those judgments (Buckley & Sniezek, 1990; Deutsch & Gerard, 1955). Because confidence in one's own judgment has been found to enhance one's influence on others (Buckley & Sniezek, 1990; Deutsch & Gerard, 1955; Sniezek & Buckley, 1995), assuming there is variation in staff members’ confidence levels it is proposed that: 39 Hyp_othesis 2a: The availability of staff members' judgment confidence will result in a wider variability of dyadic LUSJ weights (dyadic LUSJ variabilig) than if staff judgment confidence information is not provided to leaders. Hypothesis 2b: A staff member’s confidence level will be positively related to the leader’s utilization of that staff member's judgments (dyadic LUSJ ). 
Staff member judgment confidence can be considered a reflection of the staff member's perception of the likelihood that the staff member's judgment will be accurate. Providing staff judgment confidence information to the leader therefore gives the leader subjective information that can be used to rate the likelihood that the staff member will be accurate on the current decision. Judgment confidence information may therefore improve the accuracy of the leader's staff weighting strategy. However, as Paese and Sniezek (1991) discuss, although confidence has been shown to be a means of social influence, if the confidence assessments associated with judgments are biased or inconsistently related to judgment quality, decision quality is likely to suffer. The general conclusion of the individual decision making and social judgment literatures concerning confidence is that people tend to be overconfident in their decisions (Dunning, Griffin, Milojkovic & Ross, 1990; Lichtenstein, F ischhoff & Phillips, 1982; Sniezek & Buckley, 1995; Vallone, Griffin, Lin & Ross, 1990). Thus, not enough is known to propose a directional hypothesis relating the presence of staff member judgment confidence to LUSJ accuracy. The relationship of staff judgment confidence to LUSJ accuracy will be investigated. 40 It was proposed earlier that past judgment accuracy and staff judgment confidence information will independently lead to greater LUSJ variance when greater discrimination is appropriate for maximizing decision accuracy because this information provides the leader a basis from which to derive different LUSJ weights across staff members. But past judgment accuracy and judgment confidence provide different types of information. While past judgment accuracy is objective and is more reflective of ability, judgment confidence is subjective and reflects the staff member's self-perceived accuracy on the current decision. The provision of both staff judgment confidence and past judgment accuracy information to the leader may allow greater and more accurate discriminability across staff members than if either type of information was provided alone or if neither type of information is provided. Because confidence is a less objective indicator of ability or judgment accuracy than past judgment accuracy, it is proposed that: Hypothesis 3: The availability of both staff cumulative past judgment accuracy and confidence information to the leader will result in a wider variability of LUSJ weights (greater dyadic LUSJ variabiligg) than if only one or neither type of information is provided to the leader. As discussed earlier, not enough is known to propose a directional hypothesis relating the presence of staff member judgment confidence to LUSJ accuracy, much less in combination with the presence of staff past judgment accuracy. The combined 41 influence of the availability of staff past judgment accuracy information and staff judgment confidence information on LUSJ accuracy will be investigated. Consequences of LUSJ Leader Utilization of Staff Member Judgments. The literature review indicates that higher LUSJ may lead to positive reactions for staff members, including satisfaction and commitment (Bass, 1981; Drake & Mitchell, 1977; Locke & Schweiger, 1979). Virtually all of the research performed to date on staff member reactions to decision influence has been done on teams whose members' roles in the team are not unique, however. 
Staff members' roles in the decision making teams addressed by this dissertation require the provision of a judgment to the leader based on a unique subset of available decision cues. As decision participation is thus required for this information to be reflected in the team decision, it is possible that staff members will be more affected by the degree of influence their judgments are ultimately given than individuals for whom participation in decision making is not a formal part of their job or role. It is therefore possible that the different types of LUSJ in a decision making team context will lead to lower commitment for staff members than for staff members in non- decision making teams because they hold a role in the team that is central to decision making. Although the relationship between LUSJ and staff member responses in HTDE has yet to be investigated, past findings will be proposed to generalize to these types of teams. Also, much of the past research on the effects of decision participation has studied teams that do not have a correct answer to the decision problem. Workgroup performance has also been shown to be positively related to group member satisfaction (Zeffane, 1994) and turnover (Jackofsky, 1984). When team 42 performance has reward consequences for both the leader and the staff, the decision performance of the leader may influence staff members' reactions regardless of the level of the staff member's dyadic LUSJ. It is proposed that: Hypothesis 4: Team performance will moderate the relationship between a staff member's dyadic LUSJ and the staff member's reactions to the leader and the team. Specifically, for teams where performance is high, staff members' reactions to the team will be high regardless of the level of dyadic LUSJ. For teams where performance is low, staff reactions will be high only when dyadic LUSJ is high, and low when dyadic LUSJ is low (see Figure 4 for a summary). This hypothesis will hold for the following reactions of staff members: a: Long-term withdrawal from the team. p: Immediate withdrawal from the team. : Satisfaction with the leader. 10 ID- : Task withdrawal. : Self-efficacy. In The degree to which the leader utilizes the judgments provided by a staff member relative to the other staff members might also affect the staff member's reactions to LUSJ. Differences in reactions may exist across staff members who are weighted differently by the leader. It is proposed that: 43 High High Team (Positive) Performance 7 Staff Member's Reactions Low Low Team Performance (Negative) Low High Staff Member's Dyadic LUSJ Level Figure 4. Hypothesized Staff Member Dyadic LUSJ Level by Team Performance Interaction on Staff Member Reactions 44 Hypothesis 5: Team performance will moderate the relationship between a staff member's relative dyadic LUSJ and the staff member's reactions to the leader and the team. Specifically, for teams where team performance is high, staff members' reactions will be high regardless of their level of relative dyadic LUSJ. For teams where team performance is low, staff reactions will be high only when relative dyadic LUSJ is high, and low when relative dyadic LUSJ is low (see Figure 5 for a summary). This hypothesis will hold for the following responses of staff members: a: Long-term withdrawal from the team. p: Immediate withdrawal from the team. : Satisfaction with the leader. 10 a: Task withdrawal. a: Self-efficacy. 
Team performance might also influence the relationship between dyadic LUSJ accuracy and staff member satisfaction. For example, if a staff member's accuracy is poor and the leader appropriately discounts the judgments of that staff member and the team performs well as a result, satisfaction and commitment may result for that staff member despite low LUSJ. It is proposed that: Hypothesis 6: Team performance will moderate the relationship between a staff member's dyadic LUSJ accuracy and the staff member's reactions to the leader and the team. Specifically, for teams where performance is high, staff members' 45 High High Team Performance (Positive) Staff Member's Reactions Low Low Team Performance (Negative) Low High Staff Member's Relative Dyadic LUSJ Figure 5. Hypothesized Staff Member Relative Dyadic LUSJ by Team Performance Interaction on Staff Member Reactions 46 reactions will be high regardless of the level of dyadic LUSJ accuracy. For teams where performance is low, staff reactions will be high only when dyadic LUSJ accuracy is high, and low when dyadic LUSJ accuracy is low (see Figure 6 for a summary). This hypothesis will hold for the following responses of staff members: a: Long-term withdrawal from the team. p: Immediate withdrawal from the team. 9: Satisfaction with the leader. a: Task withdrawal. a: Self-efficacy. Two studies were performed to test these hypotheses. Experiment I will focus on leaders and will examine the antecedents of dyadic LUSJ, LUSJ accuracy and LUSJ variability. Experiment 11 will focus on staff reactions to dyadic LUSJ, dyadic LUSJ accuracy and relative dyadic LUSJ. To maximize the benefits of the laboratory setting, Experiment 11 will involve the use of a confederate leader. The experiments were conducted in a laboratory setting for several reasons. First, the purpose of Experiment I is to determine if the availability of staff past judgment accuracy and staff judgment confidence affect leaders' ability to differentially and accurately weight staff members when a differential weighting strategy is appropriate. The laboratory is well suited for testing hypotheses of this "can it happen" nature (Ilgen, 1986; Mook, 1983). Second, research on the antecedents of LUSJ is difficult if not impossible in the field. Because of the necessity of minimizing the intercorrelation 47 High High Team (Positive) Performance 7 Staff Member's Reactions LOW Low Team Performance (Negative) Low High Staff Member's Dyadic LUSJ Accuracy Figure 6. Hypothesized Staff Member Dyadic LUSJ Accuracy by Team Performance Interaction on Staff Member Reactions 48 among staff judgments, a great deal of control over the experimental task is required. This type of control would rarely be possible in a field setting. Third, ethical considerations exist when studying the nature of staff reactions to LUSJ (Ilgen, 1986). Because negative reactions are hypothesized on the part of staff members in Experiment II, ethical concerns regarding the manipulation of LUSJ in the field are extremely relevant. The two experiments will be discussed in detail in the next two chapters. Chapter 3 EXPERIMENT I: ANTECEDENTS OF LUSJ Method Experimental Design Experiment I tested the first three hypotheses regarding the antecedents of LUSJ. 
The availability of staff member past judgment accuracy (fed back or not fed back to the leader) and the availability of staff member judgment confidence (fed back or not fed back to the leader) were crossed in an environment in which differential staff weighting was the appropriate staff weighting strategy (resulting from differences in the validity of the information available to the staff members). Participants Eighty-four undergraduates enrolled in introductory management, psychology, and communications classes at Michigan State University participated as leaders in this study in exchange for course credit. Forty-five percent of the sample were male. The mean age of the sample was 21.54. Anticipating moderate to large effect sizes, a power analysis indicated a sample size of 84 would be adequate for a power of at least .80 across all analyses at alpha < .05 (Cohen, 1988). Post hoc power analyses indicated that the power was greater than .90 across all analyses at alpha < .05 (Cohen, 1988). 49 50 Task Description The task was a team version of the TIDE2 (Team Interactive Decision Exercise for Teams Incorporating Distributed Expertise; Hollenbeck, Sego, Ilgen, Maj or, Hedlund & Phillips, 1995) computerized decision making task. TIDE2 is a software program for a decision-task simulation. Research participants are presented with values on a number of attributes (cues) about an object or situation and then make judgments and decisions regarding that object (e.g., the medical status of a patient or the threat of an aircraft). Alternatively, the team leader can be presented only with the summary judgments of staff members and make decisions regarding the object on the basis of these summary judgments. Participants are taught in pre-task training how to utilize the program and, if the leader, how to interpret and combine the cues to make judgments and decisions about the decision object. In this experiment, the program was configured to represent a military decision making situation where participants were leaders who, together with three staff members, were charged with the responsibility of interpreting how to react to aircraft in their airspace on a seven-point passive to aggressive scale. Each of the three staff members (the commanding officers of a coastal air defense team (CAD), Air reconnaissance plane (AWAC) and a Cruiser) measured, interpreted and combined a unique subset of three cues about an incoming aircrafl, interpreted this cue information, sent to the leader a summary aggressiveness judgment to the leader and, if the condition called for it, a confidence assessment regarding his or her self-perceived judgment accuracy. The leader was not able to measure any cue information directly but relied on the three staff members for summary judgments on their subset of the decision problem. The leader 5 1 was in the same room with the staff members and was responsible for combining the staff recommendations into the team decision. The leader's decisions were the team’s decisions, and were the decisions on which team decision accuracy (and any performance rewards) was based. The CAD’s responsibility was to summarize information relevant to the target’s location, the AWAC’s was to summarize information about the target’s movement, and the CRUISER summarized information relevant to the classification of the target (what type of plane it is). Each staff member therefore had something unique to contribute to the decision. 
Each staff member saw three unique pieces of information, and their roles did not overlap. Teams addressed a total of 63 aircraft targets, presented one at a time. After performing three practice trials lasting 300, 240, and 100 seconds, teams had 60 seconds to perform each of the remaining 60 trials. Staff members measured cue attributes and formulated and registered, via computer, a summary judgment to the leader on a seven point aggressiveness scale (see Appendix B) along with a judgment confidence assessment if the condition called for it. When sent, each staff member's judgment confidence was received by the leader at the same time as the staff member's judgment. Confidence was communicated as a number reflecting the staff member's perception that his/her judgment is accurate, in percentage terms. Thus, a communicated confidence of 90 reflected that staff member's belief that they were 90% confident that their judgment is accurate. Confidence ratings were communicated on a 1% (reflecting low confidence) to 100% (reflecting extreme confidence) scale. The leader then registered the team decision, and a feedback screen appeared for 5 seconds. This feedback screen contained decision 52 accuracy information for both the leader and the staff members on the previous target as well as cumulative performance information for the team (based on the leader's decisions). For conditions requiring staff past judgment accuracy to be fed back to the leader, a red validity bar appeared on the leader’s computer screen beneath the icon reflecting each of the positions in the team. The bars were on the screen for the duration of each target, and were removed only during the 5 second feedback period. Each bar reflected the validity of that staff member's previous judgments (the correlation of that staff member's judgments with the criterion), with a longer bar reflecting a higher correlation between the staff member's judgments and the decision criterion. A number reflecting the actual correlation between the staff member’s judgments and the decision criterion appeared to the left of each red bar. The correlation and length of each bar were updated after each trial, and varied naturally during the course of the experiment. The correlation and colored bar were accurate beginning on the seventh target. When the condition called for staff members to send the leader a judgment confidence assessment, the assessment appeared on the leader’s computer screen just below that station’s red validity bar. Each staff member processed three unique cues, which combined additively to predict the criterion. The cues were created such that the correlation between each staff member's combination of three cues and the criterion was different (approximately .75 for the CAD, .50 for the AWAC, and .25 for the Cruiser). It was stressed during training that each staff member had something unique to contribute to the team’s decision. 53 Task Training Two types of training were conducted for this experiment. The first involved asking participants to read a brief general training manual (see Appendix B) and a position-specific training manual. The general training manual consisted of an overview of the simulation and the roles in the team. The position-specific manual consisted of information pertaining to the leader’s role in the team, instructions on how to combine staff judgments into the team decision, and performance information. 
The second part of the training was hands-on focusing on the mechanics of performing the task, where the participants performed three practice trials as a team under the guidance of the researcher (see Appendix B for a script of this training). Questions were answered only during this initial training period. Measures Lima. The data were divided into two time periods: early and late. Target 7 was chosen as the first relevant target because this was the first target for which the staff members’ red validity bars were found to be stabilizing and valid in pilot work. The early time period consisted of trials 7-34. The late time period consisted of trials 35-63. Cognitive Ability. Participants’ college admissions scores on the American College Test (ACT) were used as a measure of cognitive ability. The reliability of the ACT is .96 (American College Testing Program, 1989). There is almost unanimous consensus among researchers in the area of general cognitive ability that such tests are highly g-loaded (Jensen, 1986; Gottfredson & Crouse, 1986; Hunter, 1986). Participants authorized the researchers to obtain these scores from the university by signing the consent form. One participant did not have ACT scores on record but did have Scholastic 54 Aptitude Test (SAT) scores. The ACT score was estimated from the participant’s SAT score. Three ACT scores were obtained from the participant by phone or by mail because the university did not have the ACT or SAT scores on record. For the eight participants who had neither ACT nor SAT scores on record and who could not be contacted by phone or by mail, the mean of participants’ ACT scores across the sample was substituted. Past Accuracy Availability. The availability of staff members’ past accuracy to the leader was manipulated through the presence or absence of red validity bars and accompanying correlation between the staff member’s past judgments and the correct decision. degment Confidence Availability. Staff members’ judgment confidence was manipulated through the configuration of the computer program. Staff members either could or could not send a confidence assessment to the leader along with their judgment, depending on the condition. Spaff Member Past Judgment AccuracLLevel. The past judgment accuracy of each staff member was operationalized as the unstandardized b weight for each staff member resulting from a regression of the correct decision on staff members' judgments. Because the effects of time were investigated, staff member past judgment accuracy was divided into accuracy on targets 7-34 (early) and targets 35-63 (late). To facilitate computations, the decimal was dropped from the unstandardized b weight. While the b weight operationalization of staff members’ past accuracy was used in the analyses, during the simulation leaders were fed back staff past judgment accuracy in the form of a red bar and accompanying number reflecting the correlation between the 55 staff member’s judgments and the correct decisions. The correlation between the correlational and b weight operationalizations of staff past judgment accuracy was .81. When the analyses were rerun using the correlational operationalization, the results were consistent with those found using the b weights. Self-Report Staff Member Past Accuracy. Afier the simulation, leaders were asked to indicate how accurate (on a scale of 0-100) they felt each of their staff members had been in predicting the correct responses. 
Their response was used as their self-report of that staff member’s past accuracy. _S_taff Confidence Level. Staff members in conditions requiring the provision of staff member judgment confidence to the leader entered a confidence assessment on a scale of 1 (reflecting low confidence) to 100 (reflecting extreme confidence) on each target reflecting their confidence that their judgment would be correct. Each staff member's average confidence level across targets 7-34 and across targets 35-63 was used as his/her confidence score for the early and late targets. As two of the four conditions did not allow for the provision of confidence information to the leader, only half of the leaders have staff confidence level data. Dyadic LUSJ. Dyadic LUSJ was operationalized as the unstandardized b weight for a staff member resulting from a regression of the leader's decision on all three staff members' judgments entered as a block. Each leader therefore has three unique dyadic LUSJ scores, one for each staff member. Dyadic LUSJ can be positive or negative, with higher b weights reflecting higher dyadic LUSJ. A negative dyadic LUSJ score would indicate that the leader tended to do the opposite of the recommendations of that staff member. 56 There are two issues that cannot be overlooked with regard to dyadic LUSJ. First, multicollinearity, or the existence of a substantial correlation among independent variables (e.g., staff judgments), can lead to three main problems (Cohen & Cohen, 1983). First, the substantive interpretation of the partial coefficients (dyadic LUSJ) will be difficult. Because the independent variables (staff judgments), by definition, lay claim to largely the same portion of the variance in the dependent variable (leader decisions), individual staff judgments may not make much of a unique contribution to the prediction of leader decisions. Second, multicollinearity will make the dyadic LUSJ b weights unstable by increasing the standard error of the b weights (dyadic LUSJ). As the standard error increases, the confidence interval widens for individually predicted leader decisions. This widened confidence interval lessens the probability of rejecting the null hypothesis that any partial correlation or regression coefficient for a given predictor (e.g., staff member A's judgments) is zero. Third, as the intercorrelation among staff judgments approaches 1.0, errors associated with the computation of the regression weights (dyadic LUSJ scores) can become potentially serious (Cohen & Cohen, 1983; Dillon & Goldstein, 1984). In research involving LUSJ, then, it is important that staff judgments be as unrelated as possible. In this dissertation this is accomplished by utilizing a laboratory environment and retaining control over the relationships among the underlying cues, and the cues available to each staff member. In this way, independent relationships among staff judgments were built into the study, reducing the likelihood of multicollinearity among staff judgments. Second, to the extent that the leader's preliminary judgment prior to receiving staff judgments or the information known by the leader is collinear with the information 57 available to a staff member, the staff member's LUSJ weight will be inflated if the effect of the leader's initial judgment prior to receiving staff judgments is not controlled. 
The fact that the leader may have a personal judgment prior to making the final team decision means that there is a factor other than staff judgments that may influence the actual weight given these judgments by the leader. This initial personal judgment is information that is critical to the calculation of the weight the leader then gives staff judgments, and must be controlled before the weights given staff judgments can be calculated. In this dissertation, this was controlled by limiting the leader's knowledge of the decision situation to the information provided by staff members' summary judgments. Leaders in this study had no decision information other than that provided by staff members' judgments. This approach was also employed by Hollenbeck et al. (1995), although they did not partial the leader's initial judgment before entering staff judgments into the regression. Again, to facilitate computations the decimal was dropped from the unstandardized b weight. Self-Report Waightingaf Staff Member. After the simulation, leaders were asked to indicate how much they felt they weighted (utilized) the judgments of each of staff member in making their decisions. They were told to divide 100 points across each of the staff members in a manner that reflected how they thought they weighted the judgments of each staff member during the simulation. LUSJ Accuracy. The accuracy of a leader's dyadic LUSJ weights for staff members was operationalized as the sum of the absolute values of the differences between the unstandardized b weights for each staff member resulting from a regression 58 of the correct decision on staff members' judgments (appropriate weight) and each staff member's dyadic LUSJ (actual weight). Higher scores reflect greater leader misweighting of their staff members, while lower scores reflect more accurate staff weighting. The closer LUSJ accuracy is to zero, the closer the match between appropriate and actual leader weighting across all staff members. Again, the decimal was dropped from each score to facilitate computations. Each leader has one accuracy of LUSJ score. This approach was also used by Hollenbeck et al. (1995). Dyadic LUSJ Variability Dyadic LUSJ variability is the degree of variability across a leader's dyadic LUSJ scores. It was operationalized as the mean absolute deviation, or the average of the absolute deviations of each staff member's dyadic LUSJ from the average dyadic LUSJ weight for that leader across all staff members (dyadic LUSJ average. Each leader therefore has only one dyadic LUSJ variability score. The formula used for the computation of dyadic LUSJ variability was: Dyadic 3 LUSJ = Z |Dyadic LUSJ Average-Dyadic LUSJSI Variability 3:1 3 Higher scores therefore reflect greater dyadic LUSJ variability, or greater variability in the dyadic LUSJ weights the leader gave to each of his/her four staff members. Lower scores reflect less variability in the dyadic LUSJ weights used by the leader (or a more equal weighting strategy). The decimal was dropped from the score to facilitate computations. 59 Appendix A illustrates the application of all of the LUSJ indices across a variety of possible data. Procedure Experiment I lasted approximately three hours. Participants signed up during class time for a three-hour session. Upon arriving for the experiment, participants were asked to sign in (to receive course credit). If other experiments were taking place simultaneously (in separate rooms), participants were randomly assigned to one of the experiments. 
Participants were given a consent form to read and sign (see Appendix B) and told that the top three teams in each condition would receive a cash prize as detailed in the consent form. To create a more realistic environment in which the leader could be concerned that his/her behavior could have ramifications in terms of staff behavior, participants were told that they would be performing two decision making tasks over the next three hours. They were informed that the first task would be done as a team, and that the second may or may not be performed as a team or even with the same teammates. It was stressed that while positions would be assigned for the first task, they would be given input as to how they wanted to perform the second task. Any questions were answered, and participants were given an individual differences questionnaire to complete. To enhance the credibility of the leader, participants were told that some of their responses to this questionnaire would be used to assign them to one of the positions in the team. When all participants had finished, the questionnaires were collected and participants were given 10 minutes to read a brief description of the task they would be performing (see Appendix B) while the researcher went in the other room to “score” the 60 questionnaires. The researcher randomly assigned participants to the four positions in the team, and gave participants a manual of position-specific training information (see Appendix C). Participants completed a demographic questionnaire (see Appendix B) and were given 10 minutes to study their position-specific training manual. Both the general and specific training manuals were kept by the participants when they performed the simulation. When this was completed, questions were again answered and participants received approximately 15 minutes of hands-on computer training on the task (see Appendix B for a script of this training). After completing the three practice trials, participants began the experimental simulation. At the conclusion of the simulation, participants were asked to complete a brief questionnaire that assessed their perceptions of staff accuracy and their weighting of staff members (see Appendix C). After performing a second, shorter decision making task as individuals, participants were debriefed (see Appendix C), thanked, and released. All participants were treated in accordance with the ethical standards of the American Psychological Association (APA, 1992). 6 1 Results Experiment I was designed to test hypotheses relevant to the antecedents of leader utilization of staff judgments. Time was incorporated as a factor in the design, and, when appropriate, leader cognitive ability was used as a covariate in the analyses. Unless otherwise stated, repeated measures regression was used for the analyses (Cohen & Cohen, 1983). The repeated measures regression technique divides the overall variance in the dependent variable into within- and between-portions and systematically analyzes each portion separately. Statistical inferences are then determined on the basis of the relevant sources of variance (Gully, 1994). This approach was also used by Hollenbeck, Ilgen and Sego (1994) in a longitudinal study of teams. Because sample sizes differed across the analyses due to the variables used, tables of means, standard deviations and intercorrelations are presented for each analysis. 
Hypothesis 1a proposed that the presence of staff cumulative past judgment accuracy would lead to greater LUSJ variability than the absence of this information. The dependent variable for the analysis was LUSJ variability. The independent variables for Hypothesis 1a were time (a within-leader variable), entered first, followed by leader ability and the presence or absence of cumulative staff past accuracy information (both between-leader variables). The interaction of the availability of staff past accuracy and time was entered last. The means, standard deviations and intercorrelations of the variables used in testing Hypothesis 1a are presented in Table 1a. The results of Hypothesis 1a are presented in Table 1b.

Table 1a. Means, Standard Deviations and Intercorrelations of Variables in Table 1b
Variable Mean SD 1. 2. 3.
1. Time .50 .50
2. Ability 21.90 2.92 .00
3. Past Accuracy (Present or Absent) .50 .50 .00 .20*
4. LUSJ Variability 12.15 6.82 .34** -.02 .23**
Note. N=168. *p<.05. **p<.01.

Table 1b. Repeated Measures Regression Analysis of Time and the Presence of Cumulative Past Accuracy Information on LUSJ Variability
Variable b Δ in Total R² Δ in Within R² Δ in Between R² Incremental F (df,df)
Time 4.59 .114** .302** 35.86 (1,83)
Ability -.05 .001 .002 .13 (1,82)
Past Accuracy (present or absent) 3.29 .056** .090** 8.03 (1,81)
Past Accuracy X Time 3.49 .016 .042* 5.29 (1,82)
Total R² .187** .342** .092*
Note. The higher the score is, the greater the LUSJ variability. N=168. Between-subjects variance=46.28 (62%), within-subjects variance=17.49 (38%). Total df within-subjects=84, total df between-subjects=83. *p<.05. **p<.01.

Variance partitioning revealed that 62% of the total variance in LUSJ variability was due to between-subjects variance and 38% was due to within-subjects variance. This indicates that almost two-thirds of the total variance in LUSJ variability was due to situational factors or individual differences of the leaders. The results show that time accounted for 30% of the within-subjects variance. LUSJ variability increased from the first to the second half of the simulation. In support of Hypothesis 1a, the provision of staff member cumulative past judgment accuracy accounted for 9% of the between-subjects variance. Leaders who were provided staff member cumulative past judgment accuracy showed greater LUSJ variability. The interaction of the provision of staff past judgment accuracy and time accounted for 4% of the within-subjects variance, and is illustrated in Figure 7. This interaction shows that leaders provided staff past judgment accuracy showed a larger increase in LUSJ variability over time than leaders without staff past judgment accuracy.

Hypothesis 1b proposed that for leaders provided staff member cumulative past judgment accuracy, higher staff member past judgment accuracy would be more positively related to dyadic LUSJ than it would for leaders not provided this information. First, dyadic LUSJ was regressed on the within-leader variables of time, the past judgment accuracy level of staff members, and their interactions. The between-leader variable of the presence of staff member cumulative past accuracy information was then entered, and the within by between variable interactions were entered last. The means, standard deviations and intercorrelations of the variables used in testing Hypothesis 1b are presented in Table 2a. The results of Hypothesis 1b are presented in Table 2b.

Table 2a.
Means, Standard Deviations and Intercorrelations of Variables in Table 2b
Variable Mean SD 1. 2. 3.
1. Time .50 .50
2. Past Accuracy Level 40.93 39.43 .12**
3. Past Accuracy (Present or Absent) .50 .50 .00 .02
4. LUSJ 39.72 16.46 -.01 .42** -.03
Note. N=504. *p<.05. **p<.01.

Figure 7. Hypothesis 1a: Interaction Effect of Time and Cumulative Past Accuracy on LUSJ Variability

Table 2b. Repeated Measures Regression Analysis of Time, the Level of Staff Member Past Judgment Accuracy and the Presence of Cumulative Past Accuracy Information on Dyadic LUSJ
Variable b Δ in Total R² Δ in Within R² Δ in Between R² Incremental F (df,df)
Time -.44 .000 .000 0 (1,419)
Past Accuracy Level .18 .181** .199** 103.95 (1,418)
Time X Past Accuracy Level .13 .022** .024** 13.00 (1,417)
Past Accuracy (present or absent) -1.11 .001 .001 .91 (1,82)
Past Accuracy Level X Past Accuracy Availability .15 .031** .034** 19.11 (1,416)
Time X Past Accuracy Availability -2.21 .001 .001 .62 (1,415)
Past Accuracy Level X Past Accuracy Availability X Time .10 .004 .004 2.47 (1,414)
Total R² .240** .262** .001
Note. The higher the score is, the greater the LUSJ level. N=504. Between-subjects variance=24.66 (9%); within-subjects variance=245.87 (91%). Total df within-subjects=420; total df between-subjects=83. *p<.05. **p<.01.

Variance partitioning revealed that only 9% of the total variance in LUSJ weights was due to between-subjects variance and 91% was due to within-subjects variance. This indicates that almost none of the total variance in LUSJ was due to situational factors or individual differences of the leaders. Given that leaders were completely dependent on staff members for information on which to base their decisions, this distribution of within and between variance in LUSJ is logical. The results for Hypothesis 1b indicate that staff member past accuracy level had a strong positive relationship with dyadic LUSJ (accounting for 20% of the within-subjects variance), which increased with time. The interaction between time and staff member past accuracy level accounted for an additional 2% of the within-subjects variance in dyadic LUSJ and is illustrated in Figure 8. Over time, leaders' weighting strategies changed such that less accurate staff members received lower weight, and more accurate staff members received greater weight. The largest difference over time is the leaders' discounting of the less accurate staff members rather than their increased weighting of the more accurate staff members. The availability of staff member cumulative past accuracy also interacted with staff member past accuracy level in influencing LUSJ, supporting Hypothesis 1b. This effect accounted for 3% of the within-subjects variance in dyadic LUSJ and is illustrated in Figure 9. Higher staff member past judgment accuracy was more positively related to dyadic LUSJ for leaders provided staff member cumulative past judgment accuracy information than for leaders not provided this information. When staff cumulative past accuracy information was not available, leaders utilized a more equal weighting strategy, although the less accurate staff members were discounted slightly, and the more accurate staff members were given slightly greater weight.
The availability of staff cumulative past accuracy information led to greater discounting of less accurate staff members and increased weighting of more accurate staff members.

Figure 8. Hypothesis 1b: Interaction Effect of Time and Staff Past Accuracy Level on LUSJ

Figure 9. Hypothesis 1b: Interaction Effect of Cumulative Past Accuracy Availability and Staff Past Accuracy Level on LUSJ

Hypothesis 1c proposed that the availability of staff member cumulative past judgment accuracy to the leader would be positively related to LUSJ accuracy. To test this hypothesis, LUSJ accuracy was regressed on time (a within-leader variable), leader cognitive ability, the presence or absence of staff cumulative past accuracy information and their interaction (between-leader variables), and the interaction of time and the presence of staff cumulative past accuracy information. The means, standard deviations and intercorrelations of the variables used in testing Hypothesis 1c are presented in Table 3a. The results for Hypothesis 1c are summarized in Table 3b.

Table 3a. Means, Standard Deviations and Intercorrelations of Variables in Table 3b
Variable Mean SD 1. 2. 3.
1. Time .50 .50
2. Ability 21.90 2.92 .00
3. Past Accuracy (Present or Absent) .50 .50 .00 .20*
4. LUSJ Accuracy 29.61 12.20 -.35** .05 -.20*
Note. N=168. *p<.05. **p<.01.

Variance partitioning revealed that 51% of the total variance in LUSJ accuracy was due to between-subjects variance and 49% was due to within-subjects variance. This indicates that half of the total variance in LUSJ accuracy was due to situational factors or individual differences of the leaders. The results for Hypothesis 1c indicate that leader LUSJ accuracy improved over time and with the presence of staff cumulative past judgment accuracy. Time accounted for 26% of the within-subjects variance and the presence of staff cumulative past judgment accuracy information accounted for 9% of the between-subjects variance in LUSJ accuracy. Hypothesis 1c was therefore supported.

Table 3b. Repeated Measures Regression Analysis of Time and the Presence of Staff Cumulative Past Accuracy Information on Leaders' LUSJ Accuracy Across Staff Members
Variable b Δ in Total R² Δ in Within R² Δ in Between R² Incremental F (df,df)
Time -8.60 .125** .256** 28.56 (1,83)
Ability .20 .002 .004 .32 (1,82)
Past Accuracy (present or absent) -5.20 .044** .086** 7.65 (1,81)
Past Accuracy X Time -1.71 .001 .002 .23 (1,82)
Total R² .172** .258** .09*
Note. The higher the score is, the lower the leader's LUSJ accuracy. N=168. Between-subjects variance=75.73 (51%); within-subjects variance=72.25 (49%). Total df within-subjects=84; total df between-subjects=83. *p<.05. **p<.01.

Hypothesis 2a proposed that the presence of staff judgment confidence would result in greater LUSJ variability. To test this proposition, LUSJ variability was first regressed on the within-leader variable of time. The between-leader variables of leader ability and the presence or absence of staff judgment confidence were then entered, followed by the interaction of time and the presence or absence of staff judgment confidence.
The means, standard deviations and intercorrelations of the variables used in testing Hypothesis 2a are presented in Table 4a. The results for Hypothesis 2a are summarized in Table 4b. The results indicate that the passage of time resulted in greater LUSJ variability, but staff judgment confidence availability had no effect. Thus, Hypothesis 2a was not supported. Time accounted for 30% of the within-subjects variance in LUSJ variability. It is possible that the negative skewness of judgment confidence (mean of 75, standard deviation of 10) detracted from leaders' ability to use this information as a means of determining how to weight staff judgments.

Table 4a. Means, Standard Deviations and Intercorrelations of Variables in Table 4b
Variable Mean SD 1. 2. 3.
1. Time .50 .50
2. Ability 21.90 2.92 .00
3. Confidence (Present or Absent) .50 .50 .00 -.06
4. LUSJ Variability 12.15 6.82 .34** -.02 .07
Note. N=168. *p<.05. **p<.01.

Table 4b. Repeated Measures Regression Analysis of Time and the Presence of Staff Judgment Confidence on LUSJ Variability
Variable b Δ in Total R² Δ in Within R² Δ in Between R² Incremental F (df,df)
Time 4.59 .114** .302** 35.86 (1,83)
Ability -.05 .001 .002 .13 (1,82)
Confidence (present or absent) .87 .004 .006 .52 (1,81)
Confidence X Time -1.18 .002 .005 .63 (1,82)
Total R² .121** .307** .008
Note. The higher the score is, the greater the LUSJ variability. N=168. Between-subjects variance=28.79 (62%); within-subjects variance=17.49 (38%). Total df within-subjects=84; total df between-subjects=83. *p<.05. **p<.01.

Hypothesis 2b proposed that a staff member's confidence level would be positively related to that staff member's dyadic LUSJ. Because all variables involved in this regression are within-leader variables, hierarchical regression was used in the analyses. Dyadic LUSJ was regressed on time, staff member past accuracy level, staff member confidence level and the interactions of these variables. The means, standard deviations and intercorrelations of the variables used in testing Hypothesis 2b are presented in Table 5a. The results of Hypothesis 2b are presented in Table 5b.

Table 5a. Means, Standard Deviations and Intercorrelations of Variables in Table 5b
Variable Mean SD 1. 2. 3.
1. Time .50 .50
2. Past Accuracy Level 40.46 37.47 .12
3. Confidence Level 74.73 10.01 .07 .07
4. Dyadic LUSJ 40.49 16.62 -.04 .32** .23**
Note. N=252. *p<.05. **p<.01.

Table 5b. Hierarchical Regression Analysis of Time, Staff Member Past Accuracy Level and the Level of Staff Member Judgment Confidence on Dyadic LUSJ
Variable b Δ in Total R² Incremental F (df,df)
Time -1.26 .001 .36 (1,250)
Past Accuracy Level .14 .104** 28.84 (1,249)
Confidence Level .36 .046** 13.50 (1,248)
Time X Confidence Level -.17 .003 .78 (1,247)
Past Accuracy Level X Confidence Level -.01 .008 2.47 (1,246)
Time X Past Accuracy Level .09 .010 3.04 (1,245)
Time X Past Accuracy Level X Confidence Level .00 .000 .10 (1,244)
Total R² .172**
Note. The higher the score is, the greater the LUSJ level. N=252. Total df=251. *p<.05. **p<.01.

The results support Hypothesis 2b and indicate that after partialling the effect of staff member past accuracy level, which accounted for 10% of the variance in dyadic LUSJ, staff member judgment confidence level had a statistically significant positive effect on LUSJ. Staff member judgment confidence level accounted for an additional 5% of the variance in dyadic LUSJ.
Leaders gave staff members reporting higher judgment confidence greater weight than staff members reporting lower confidence when making the team's decisions. Leader utilization of staff member judgment confidence did not change as a result of time.

The effect of the availability of staff members' confidence levels on the accuracy of staff members' dyadic LUSJ weights was also investigated. In this analysis, LUSJ accuracy was first regressed on the within-leader variable of time. Ability and the presence of staff member judgment confidence (both between-team variables) were then entered, followed by the interaction of time and the presence of staff member confidence. The results indicate that the presence of staff member judgment confidence had no effect on leaders' LUSJ accuracy, although LUSJ accuracy did improve over time. Time accounted for 26% of the within-subjects variance in LUSJ accuracy. The means, standard deviations and intercorrelations of the variables used in the analysis are presented in Table 6a. The results of this analysis are summarized in Table 6b.

Hypothesis 3 proposed that the availability of both staff member cumulative past judgment accuracy and staff member judgment confidence information would be related to greater variability in leaders' LUSJ weights across staff members than if only one or neither type of information was available. This hypothesis was tested by regressing LUSJ variability on time (a within-leader variable), leader ability (a between-leader variable), staff member judgment confidence availability, staff member cumulative past accuracy availability, and the interactions of time, confidence availability and cumulative past judgment accuracy availability. The means, standard deviations and intercorrelations of the variables used in the analysis are presented in Table 7a. The results for this analysis are summarized in Table 7b.

Table 6a. Means, Standard Deviations and Intercorrelations of Variables in Table 6b
Variable Mean SD 1. 2. 3.
1. Time .50 .50
2. Ability 21.90 2.92 .00
3. Confidence (Present or Absent) .50 .50 .00 -.06
4. LUSJ Accuracy 29.61 12.20 -.35** .05 .02
Note. N=168. *p<.05. **p<.01.

Table 6b. Repeated Measures Regression Analysis of Time and the Presence of Staff Judgment Confidence on Leaders' LUSJ Accuracy Across Staff Members
Variable b Δ in Total R² Δ in Within R² Δ in Between R² Incremental F (df,df)
Time -8.60 .125** .256** 28.56 (1,83)
Ability .20 .002 .004 .32 (1,82)
Staff Judgment Confidence (present or absent) .61 .001 .002 .16 (1,81)
Confidence (present or absent) X Time .21 .000 .000 0 (1,82)
Total R² .128** .256** .006
Note. The higher the score is, the lower the leader's LUSJ accuracy. N=168. Between-subjects variance=75.73 (51%); within-subjects variance=72.25 (49%). Total df within-subjects=84; total df between-subjects=83. *p<.05. **p<.01.

Table 7a. Means, Standard Deviations and Intercorrelations of Variables in Table 7b
Variable Mean SD 1. 2. 3. 4.
1. Time .50 .50
2. Ability 21.90 2.92 .00
3. Confidence (Present or Absent) .50 .50 .00 -.06
4. Past Accuracy (Present or Absent) .50 .50 .00 .20* .00
5. LUSJ Accuracy 29.61 12.20 -.35** .05 .02 -.20
Note. N=168. *p<.05. **p<.01.

Table 7b.
Repeated Measures Regression Analysis of Time, the Availability of Cumulative Staff Past Accuracy Information and the Presence of Staff Judgment Confidence on Leaders' LUSJ Variability
Variable b Δ in Total R² Δ in Within R² Δ in Between R² Incremental F (df,df)
Time 4.59 .114** .302** 35.86 (1,83)
Ability -.05 .001 .002 .13 (1,82)
Staff Judgment Confidence (present or absent) .87 .004 .006 .52 (1,81)
Past Accuracy (present or absent) 3.27 .056** .09** 7.98 (2,80)
Confidence X Past Accuracy -4.90 .032** .051* 4.78 (1,79)
Time X Confidence -1.18 .002 .005 .63 (1,82)
Time X Past Accuracy 3.49 .016 .042* 5.27 (1,81)
Confidence (present or absent) X Past Accuracy X Time -5.24 .009 .024 3.04 (1,80)
Total R² .234** .373** .149*
Note. The higher the score is, the greater the leader's LUSJ variability. N=168. Between-subjects variance=28.79 (62%); within-subjects variance=17.49 (38%). Total df within-subjects=84; total df between-subjects=83. *p<.05. **p<.01.

The results support Hypothesis 3. Greatest LUSJ variability was shown by leaders provided staff past accuracy but not staff judgment confidence information. Lowest LUSJ variability was shown by leaders provided neither staff judgment confidence nor staff past accuracy information. Although positive relationships were found between LUSJ variability and time and LUSJ variability and the availability of staff member cumulative past judgment accuracy, no relationship was found between LUSJ variability and the provision of staff judgment confidence. Time accounted for 30% of the within-subjects variance in LUSJ variability, and the presence of staff member past judgment accuracy accounted for 9% of the between-subjects variance. An effect on LUSJ variability was found for the cumulative past accuracy information by judgment confidence availability interaction. This effect accounted for 5% of the between-subjects variance in LUSJ variability, and is illustrated in Figure 10. The nature of this effect is such that the relationship between confidence availability and LUSJ variability is negative when staff past accuracy information is available, and positive when staff past accuracy information is not available. A significant time by the availability of staff cumulative past accuracy interaction was also found, and is illustrated in Figure 11. This effect accounted for an additional 4% of the within-subjects variance in LUSJ variability, and mirrors the effect found in Hypothesis 1a. LUSJ variability increased more over time when leaders were fed back staff members' cumulative past judgment accuracy than when this information was not fed back to leaders.

Figure 10. Hypothesis 3: Interaction Effect of Confidence Availability and Cumulative Past Accuracy Availability on LUSJ Variability

Figure 11. Hypothesis 3a: Interaction Effect of Time and Cumulative Past Accuracy Availability on LUSJ Variability

The effect of the availability of staff members' judgment confidence levels on the accuracy of staff members' dyadic LUSJ weights was also investigated. In this analysis, LUSJ accuracy was first regressed on the within-leader variable of time.
Ability and the presence of staff member judgment confidence (both between-team variables) were then entered, followed by the interaction of time and the presence of staff member confidence. The means, standard deviations and intercorrelations of the variables used in the analysis are presented in Table 8a.

Table 8a. Means, Standard Deviations and Intercorrelations of Variables in Table 8b
Variable Mean SD 1. 2. 3. 4.
1. Time .50 .50
2. Ability 21.90 2.92 .00
3. Confidence (Present or Absent) .50 .50 .00 -.06
4. Past Accuracy (Present or Absent) .50 .50 .00 .20* .00
5. LUSJ Variability 12.15 6.82 .34** -.02 .07 .23**
Note. N=168. *p<.05. **p<.01.

The results, summarized in Table 8b, indicate that the presence of staff member judgment confidence had no effect on leaders' LUSJ accuracy alone or in combination with staff member past judgment accuracy. The presence of staff member past accuracy did increase leaders' LUSJ accuracy, and accounted for 9% of the between-subjects variance in LUSJ accuracy. LUSJ accuracy also improved over time, which accounted for 26% of the variance in LUSJ accuracy. The ordering of cell means indicated that leaders provided only staff cumulative past accuracy were the most accurate in weighting their staff (31.61), followed by leaders provided both staff member judgment confidence and staff cumulative past accuracy (32.27), leaders with neither (35.76) and leaders with only staff member judgment confidence (35.97).

Table 8b. Repeated Measures Regression Analysis of Time, the Availability of Cumulative Staff Past Accuracy Information and the Presence of Staff Judgment Confidence on Leaders' LUSJ Accuracy Across Staff Members
Variable b Δ in Total R² Δ in Within R² Δ in Between R² Incremental F (df,df)
Time -8.60 .125** .256** 28.56 (1,83)
Ability .20 .002 .004 .32 (1,82)
Staff Judgment Confidence (present or absent) .61 .000 .000 0 (1,81)
Past Accuracy (present or absent) -5.21 .044** .086** 7.56 (1,80)
Confidence X Past Accuracy -1.49 .001 .002 .17 (1,79)
Time X Confidence .21 .000 .000 0 (1,82)
Time X Past Accuracy -1.71 .001 .002 .22 (1,81)
Confidence (present or absent) X Past Accuracy X Time -4.00 .002 .004 .44 (1,80)
Total R² .175** .262** .092
Note. The higher the score is, the lower the leader's LUSJ accuracy. N=168. Between-subjects variance=75.73 (51%); within-subjects variance=72.25 (49%). Total df within-subjects=84; total df between-subjects=83. *p<.05. **p<.01.

Discussion

The results of Experiment I indicate that in an environment in which differentially weighting staff judgments leads to greater decision accuracy, the provision of staff member cumulative past accuracy to leaders is related to greater weighting variability and weighting accuracy than if this information is not available. Leaders' weighting accuracy improved over time, and the variability of leaders' weights across the staff members increased over time. Leaders who were fed back staff cumulative past accuracy information used it in deciding how to weight staff judgments, and did so in a way that led to more appropriate staff utilization and greater weighting variability. Although all leaders saw feedback on the accuracy of each staff member after each target, leaders relied more on staff member cumulative past accuracy in making their decisions when this information was fed back to them than when it was not.
Staff member judgment confidence information was also utilized by leaders in making the team's decisions, although its availability was unrelated to both LUSJ variability and LUSJ accuracy. In fact, the correlation between staff member judgment confidence level and staff member judgment accuracy was only .07 (n.s.), yet the correlation between staff member judgment confidence level and leader weighting of that staff member was .23 (p<.01). Comparable miscalibration of confidence is also found when the relationship between staff confidence level and staff judgment accuracy is examined by staff member. The correlation between judgment confidence and judgment accuracy for staff members with the most accurate cue information was .06 (p<.01); for staff members with the least accurate cue information it was .01 (n.s.), and for staff members whose cue accuracy was in the middle it was -.01 (n.s.). Thus, although leaders' weighting variability and weighting accuracy improved with time, their utilization of judgment confidence, which was unrelated to judgment accuracy, detracted from leaders' appropriate weighting of staff members.

An additional finding concerns leaders' self-reported perceptions of their staff members' accuracy levels. The higher a staff member's judgment confidence, the greater the leader's self-report of that staff member's judgment accuracy after the simulation had ended (r=.20; p<.01). This indicates that despite its poor calibration, staff member judgment confidence was not only used by the leader in making the team's decisions, it contributed to a positive bias in leaders' recollections of staff member judgment accuracy. This is supportive of the argument that leaders consider confidence to be an indicator of accuracy, and that this is why it is utilized by leaders in weighting staff input. The availability of staff judgment confidence and staff cumulative past accuracy to leaders did not interact in affecting staff weighting accuracy or staff weighting variability.

Ability was not found to be a significant predictor of how effectively leaders utilized their staff. Effects for ability were not detected when ability was entered into the regression equations and interacted with the other variables, with the exception of a statistically significant (p<.05) three-way interaction effect of ability, past accuracy availability, and confidence availability on LUSJ variability. The nature of this effect was such that confidence availability was slightly negatively related to LUSJ variability when past accuracy was available regardless of leader cognitive ability. For leaders lower in cognitive ability, confidence availability was negatively related to LUSJ variability, although this relationship was positive among leaders higher in cognitive ability. This may be due to the fact that higher ability leaders had staff members report lower confidence levels than lower ability leaders (r=-.19, p<.01). The relatively simple information processing requirements of the task for the leader probably restricted the role played by cognitive ability in this study. As cognitive ability is thought to be related to greater information processing capacity (Kanfer & Ackerman, 1989), it is possible that a leader's cognitive ability would be related to his/her ability to appropriately discriminate among staff members in weighting their judgments and making the team decision with a more complex decision task.
This study helps to shed light on the conflict in the existing literature on leaders' tendency to differentially weight staff members. Leaders in this study exhibited greater weighting variability and weighting accuracy when provided staff cumulative past accuracy information. This is consistent with work on LMX and VDL (Dansereau, Graen & Haga, 1975; Graen & Cashman, 1975; Scandura, Graen & Novak, 1986) that has shown that leaders tend to seek out relationships of differential influence with staff members based, at least in part, on staff member performance and ability. The fact that leaders were positively influenced by the confidence assessments of staff members in making the team's decisions is consistent with work on judge-advisor systems that has shown that leaders are willing to differentially utilize staff members, and that confidence is a mechanism for this influence.

This study's findings are also consistent with Brehmer and Hagafors' (1986) work that found that leaders without staff member judgment confidence or staff member cumulative past accuracy information were unable to adopt a differential weighting strategy when it was appropriate. In this study, particularly in the first half of the simulation, leaders without staff past accuracy information utilized a more equal weighting strategy than did leaders provided this information. Thus, it is reasonable to conclude that staff member cumulative past judgment accuracy, when provided to the leader, promotes the adoption of a more appropriate differential weighting strategy. Judgment confidence, on the other hand, is utilized by the leader, but the lack of calibration of judgment confidence makes this information unable to consistently improve leaders' staff weighting accuracy. The implications of these findings, and directions for future research, will be discussed in the last chapter. The next chapter will discuss Experiment II.

Chapter 4

EXPERIMENT II: CONSEQUENCES OF LUSJ

Method

Experimental Design

Experiment II tested Hypotheses 4-6. The experimental design consisted of the crossing of high and low team performance with an equal or differential leader weighting strategy. A confederate played the role of leader, while participants were staff members on four-person teams. In equal weighting conditions, the leader averaged the three staff responses in making the team decision. In differential weighting conditions, the leader weighted the CAD judgment .5, the AWACS judgment .33 and the Cruiser judgment .17. In low performance conditions, the confederate leader added a pre-determined random error term to the decision (a schematic sketch of this decision rule appears after the task description below).

Participants

Two hundred twenty-eight participants were recruited from introductory management, psychology and communications classes and received course credit in return for their participation in this study. Fifty-five percent of the sample were male, and forty-five percent were female. The mean age of the sample was 21.2. Anticipating a moderate effect size, a power analysis indicated that a sample size of 228 would result in a power of at least .80 across all analyses at alpha < .05 (Cohen, 1988). Post hoc power analyses indicated that the power exceeded .80 for all analyses (Cohen, 1988).

Task Description

The task was the same team version of the TIDE2 (Team Interactive Decision Exercise for Teams Incorporating Distributed Expertise; Hollenbeck, Sego, Ilgen, Major, Hedlund & Phillips, 1995) computerized decision making task discussed for Experiment I.
The computer program was again configured as a military decision making simulation, but participants were one of three staff members (rather than the leader): either the CO of the Aegis cruiser (Cruiser), the Coastal Air Defense unit (CAD), or the AWACS reconnaissance plane. Each of these three staff members was responsible for measuring, interpreting and combining the same unique subset of three cues about an incoming aircraft as in Experiment I. The leader (Carrier) was a confederate stationed in another room who either equally or differentially weighted staff recommendations in making the team's decision. The same person was the confederate for all teams. The leader's decisions were again the team's decisions, on which team performance (and any performance rewards) was based.

Participants were asked to make judgments about a total of 63 aircraft targets, presented one at a time. The first three targets were practice trials lasting 300, 240, and 100 seconds. Staff members measured cue attributes and formulated and registered, via computer, a summary judgment to the leader on a seven point aggressiveness scale (see Appendix B) along with a judgment confidence assessment. The last 60 targets counted toward the bonus money. Participants were given 60 seconds for each target to measure cue attributes and to formulate and register, via computer, a summary judgment to the leader. These judgments were made on a seven point scale representing increasing amounts of force (see Appendix D).

After the leader registered the team decision, a feedback screen appeared for 5 seconds. This feedback screen contained decision accuracy information for both the leader and the staff members on the previous target as well as cumulative performance information for the team (based on the leader's decisions). Team performance was determined by the accuracy of the leader's final decision. The point scoring system is described in Appendix D. The level of the leader's past utilization (weighting) of their judgments was fed back to staff members in both conditions. Information regarding the leader's past utilization of each staff member was in the form of a green bar. The length of the green bar reflected the weight the leader had given that staff member's judgments on all targets encountered to that point (the correlation between that staff member's judgments and the leader's decisions). The number reflecting the actual correlation between that staff member's judgments and the leader's decisions was positioned immediately to the left of each green bar. Each staff member saw the green bars and correlations for all three staff members. These bars remained on the computer screen for the duration of each target, and were removed only during the 5 second feedback period. The length of the bars was updated after each trial, and the bars were accurate beginning on the eighth target. The cues and targets were the same as in Experiment I. Staff members sent confidence assessments to the leader as described in Experiment I, and were told that the leader's screen displayed a red bar reflecting the accuracy of each staff member's previous judgments. At the conclusion of the experiment, participants were debriefed (see Appendix D), thanked, and released. Monetary awards earned by participants were delivered within three weeks of the end of the experiment. All participants were treated in accordance with the ethical standards of the American Psychological Association (APA, 1992).
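Because the leader in Experiment II followed a scripted weighting rule and staff members saw their utilization fed back as a judgment-decision correlation, both mechanisms can be illustrated with a short sketch. This is a hypothetical illustration, not the actual TIDE2 code: the judgment values and the magnitude of the random error term are assumptions, since the text specifies only the differential weights (.5, .33, .17), the equal-weighting average, and that a pre-determined random error was added in low-performance conditions.

```python
# Illustrative sketch only (not the TIDE2 simulation code). Judgment values and
# the random-error magnitude below are hypothetical assumptions.
import random
import numpy as np

def leader_decision(cad, awacs, cruiser, differential=True, low_performance=False):
    """Confederate leader's team decision from three staff judgments (1-7 scale)."""
    if differential:
        # Differential weighting condition: CAD .5, AWACS .33, Cruiser .17.
        decision = 0.5 * cad + 0.33 * awacs + 0.17 * cruiser
    else:
        # Equal weighting condition: average of the three staff judgments.
        decision = (cad + awacs + cruiser) / 3.0
    if low_performance:
        # Low-performance condition: add a pre-determined random error term
        # (the values used here are hypothetical stand-ins).
        decision += random.choice([-2, -1, 1, 2])
    return int(min(7, max(1, round(decision))))

# Hypothetical judgment streams for the three staff positions across 8 targets.
judgments = {
    "CAD":     [4, 5, 3, 6, 2, 5, 4, 4],
    "AWACS":   [3, 5, 4, 5, 3, 4, 4, 3],
    "Cruiser": [5, 2, 3, 4, 2, 6, 3, 5],
}
decisions = [leader_decision(c, a, r)
             for c, a, r in zip(judgments["CAD"], judgments["AWACS"], judgments["Cruiser"])]

# Feedback bar / dyadic LUSJ: correlation between each staff member's judgments
# and the leader's decisions on the targets encountered so far.
dyadic_lusj = {pos: np.corrcoef(vals, decisions)[0, 1] for pos, vals in judgments.items()}

# Relative dyadic LUSJ (see the Measures section below): each raw correlation
# centered about the mean correlation for the team.
team_mean = np.mean(list(dyadic_lusj.values()))
relative_lusj = {pos: r - team_mean for pos, r in dyadic_lusj.items()}
print(dyadic_lusj, relative_lusj)
```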
Task Training

The training for Experiment II was the same as for Experiment I. Participants first read a brief training manual (see Appendix B) containing general information pertaining to the type of task the participants would be performing, their role as a staff member of their team and performance information. They then read a position-specific manual containing information on their role in the team and the cues for which they would be responsible (see Appendix D). The second part of the training was hands-on task training focusing on the mechanics of performing the task, where the participants performed three practice trials under the guidance of the researcher (see Appendix B for a script of this training). Questions were answered only during this initial training period.

Measures

Team Performance. Team performance was manipulated in two ways. For teams in the low accuracy condition the confederate leader added a pre-determined random error to the decision, decreasing the team's point total. Additionally, at the end of the simulation participants in the low team performance condition were told by the researcher that they had not performed well enough to be contenders for the bonus money. Participants in the high team performance condition were told that they had done well and had a good chance of earning bonus money. Team performance was operationalized as the point total earned by the team over the 60 non-practice targets. The scoring system is described in Appendix D. Team performance could range from -120 to +120 points.

Dyadic LUSJ. Because the on-screen display of how the leader had weighted staff members on all previous targets took the form of the correlation between that staff member's judgments and the leader's decisions on all 63 targets, this correlation was utilized as the dyadic LUSJ score. Each staff member therefore had a unique dyadic LUSJ score. The correlation reflects the strength of association between the staff member's judgments and the decisions ultimately made by the leader. This approach has also been used in previous research (e.g., Brehmer & Hagafors, 1986; Tucker, 1964). To facilitate computations, the decimal was dropped from the dyadic LUSJ score.

Self-Report LUSJ. After the simulation, staff members were asked to indicate how much they felt the leader weighted (utilized) the judgments of each staff member in making decisions over the course of the simulation. They were told to divide 100 points across the staff members in a manner that reflected how they thought the leader had weighted the judgments of each staff member during the simulation. Their rating of the leader's weighting of their judgments was used as their self-report LUSJ.

Dyadic LUSJ Accuracy. The accuracy of a leader's dyadic LUSJ was operationalized as the absolute difference between the dyadic LUSJ score for a particular staff member and the staff member's appropriate LUSJ score based on their judgment accuracy. It was calculated by taking the absolute value of the result of subtracting the correlation between the staff member's judgments and the correct decisions from the staff member's correlational dyadic LUSJ (actual weight). A higher score therefore reflects greater misweighting of a staff member. A score of zero reflects perfect utilization of that staff member. Each staff member has one dyadic LUSJ accuracy score. The decimal was again removed from the score to facilitate computations.

Relative Dyadic LUSJ.
Relative dyadic LUSJ was operationalized as the target staff member's correlational dyadic LUSJ score centered about the mean of all of the dyadic LUSJ scores of that team. Centering is a technique that puts variables in deviation score form so that their mean is zero (Aiken & West, 1991). The relative dyadic LUSJ score is calculated by subtracting the mean of the staff member's leader's raw (not absolute) dyadic LUSJ scores (leader's dyadic LUSJ mean) from the target staff member's raw dyadic LUSJ score. Staff members with high relative dyadic LUSJ scores have therefore been given greater influence by the leader in the leader's decisions than staff members with lower relative dyadic LUSJ scores. Again, the decimal was removed from the score. By using raw, rather than absolute, dyadic LUSJ scores, this operationalization of relative dyadic LUSJ preserves information about both the magnitude and the direction of the target staff member's dyadic LUSJ relative to the leader's dyadic LUSJ across all staff members. Because information about the direction of the dyadic LUSJ scores (positive or negative) is important in determining a staff member's utilization by the leader relative to the other staff members, it is important that raw dyadic LUSJ scores rather than the absolute values be utilized (unless, of course, all dyadic LUSJ scores are positive).

Staff Member Willingness to Return. Staff member willingness to return to the lab in the future for pay was measured with a six-item scale developed for this study. Items reflect participants' willingness to work with the leader and staff members again in the future, as well as their willingness to work on the task again. Scale items are listed in Appendix D. The reliability of this scale was .95, with 6 items retained. The range of the scale was 1, reflecting lower willingness to return, to 5, reflecting greater willingness to return.

Staff Member Desire for Change on the Next Task. Staff members' desire to change the way they performed the next task was measured with a three-item scale developed for this study. Scale items are listed in Appendix D. The reliability of this scale was .58, with 3 items retained. The range of the scale was 0, reflecting less desire for change on the next task, to 3, reflecting greater desire for change.

Staff Member Satisfaction with Leader. Staff member satisfaction with the leader was assessed using a scale adapted from Seashore, Lawler, Mirvis and Cammann (1982). Scale items are presented in Appendix D. The reliability of this scale was .89, with 8 items retained. The range of the scale was 1, reflecting lower satisfaction with the leader, to 5, reflecting greater satisfaction with the leader.

Task Withdrawal. Staff member task withdrawal was measured with an 11-item scale adapted from Baker (1991) and Gilliland (1992). Scale items are listed in Appendix D. The reliability of this scale was .82, with 9 items retained. The range of this scale was 1, reflecting lower task withdrawal, to 5, reflecting greater withdrawal.

Staff Member Self-Efficacy. Staff member self-efficacy was measured with an 8-item scale adapted from Locke, Frederick, Lee & Bobko (1984). Scale items are listed in Appendix D. The reliability of this scale was .90, with 8 items retained. The range of this scale was 1, reflecting lower self-efficacy, to 5, reflecting greater self-efficacy.

Procedure

Experiment II lasted approximately three hours. Participants signed up during class time for a three-hour session.
Upon arriving for the experiment, participants were asked to sign in (to receive course credit). If other experiments were taking place simultaneously (in separate rooms), participants were randomly assigned to one of the experiments. Participants were given an appropriate consent form to read and sign (see Appendix B) and told that the top three teams in each condition would receive a cash prize as detailed in the consent form. To create a more realistic environment in which the staff could react to what occurred in their team, participants were told that they would be performing two decision making tasks over the next three hours. They were informed that the first task would be done as a team, and that the second may or may not be performed as a team or even with the same teammates. It was stressed that while positions would be assigned for the first task, they would be given input as to how they wanted to perform the second task. Any questions were answered, and participants were given an individual differences questionnaire to complete. To enhance the credibility of the leader, participants were told that some of their responses to this questionnaire would be used to assign them to one of the positions in the team. When all participants had finished, the questionnaires were collected and participants were given 10 minutes to read a brief description of the task they would be performing (see Appendix B) while the researcher went in the other room to "score" the questionnaires. The researcher randomly assigned participants to the three staff member positions in the team, and gave participants a manual of position-specific training information (see Appendix D). Participants completed a demographic questionnaire (see Appendix B) and were given 10 minutes to study their position-specific training manual. Both the general and specific training manuals were kept by the participants when they performed the simulation. When this was completed, questions were again answered and participants received approximately 15 minutes of hands-on computer training on the task (see Appendix B for a script). After completing the three practice trials, participants began the experimental simulation. At the conclusion of the simulation, participants were asked to complete a series of questionnaires. After performing a second, shorter decision making task as individuals, participants were debriefed, thanked, and released. All participants were treated in accordance with the ethical standards of the American Psychological Association (APA, 1992).

Data Analysis

Experiment II was designed to test hypotheses relevant to staff reactions to different forms of LUSJ. Study means, standard deviations and intercorrelations are presented in Table 9. All hypotheses for Experiment II (Hypotheses 4a-6e) were tested using repeated measures regression analysis. The dependent variable was the appropriate staff reaction for the particular hypothesis, and variance partitioning was used in determining statistical significance. Appendix F contains tables of means, standard deviations and intercorrelations for each analysis.

Table 9. Means, Standard Deviations and Intercorrelations of Variables Used in Experiment II

Hypothesis 4 proposed that leader decision accuracy (team performance) would moderate the effect of LUSJ on five different staff reactions. To test the hypotheses, LUSJ (a within-team variable) was entered first, followed by team performance (a between-team variable), followed by their interaction. Hypothesis 4a investigated staff member willingness to return, and the results are summarized in Table 10.

Table 10. Repeated Measures Regression Analysis of LUSJ and Team Performance on Willingness to Return
Variable b Δ in Total R² Δ in Within R² Δ in Between R² Incremental F (df,df)
LUSJ .01 .026** .042** 6.61 (1,151)
Team Performance .01 .013* .034 2.62 (1,74)
Performance X LUSJ .00 .014* .023 3.62 (1,150)
Total R² .053** .065** .034
Note. The higher the score is, the greater the willingness to return. N=228. Between-team variance=.48 (38%); within-team variance=.78 (62%). Total df within-team=152; total df between-team=75. *p<.05. **p<.01.

Variance partitioning revealed that 38% of the total variance in staff member willingness to return was due to between-team variance and 62% was due to within-team variance. This indicates that just over one-third of the total variance in staff member willingness to return was due to between-team factors such as team performance or situational factors, and just under two-thirds of the total variance was due to within-team factors such as LUSJ or individual differences of the team members. The results of Hypothesis 4a indicate that the level of the leader's weighting of a staff member was positively related to that staff member's willingness to return, accounting for 4% of the within-team variance. Team performance accounted for 3% of the between-team variance. Team performance also interacted with LUSJ in predicting staff member willingness to return, accounting for 2% of the within-team variance, supporting Hypothesis 4a. This effect is illustrated in Figure 12, and shows that when team performance was low, LUSJ had a negative relationship with staff member willingness to return. When team performance was high, LUSJ was positively related to staff member willingness to return. The nature of this effect is slightly different from the hypothesized effect. Rather than LUSJ being positively related to staff reactions for both high and low team performance, LUSJ was negatively related to staff reactions when team performance was low. Staff members whose judgments received higher weight but whose teams performed poorly were the least willing to return, while staff members whose judgments received greater weight and whose teams performed well were the most willing to return.

Figure 12. Hypothesis 4a: Interaction Effect of Team Performance and LUSJ on Willingness to Return

The effect sizes changed slightly when team performance was entered into the regression equation first, followed by LUSJ and the interaction term. Team performance accounted for 3% (p<.01) of the total variance in staff member willingness to return and 9% (p<.01) of the between-team variance. LUSJ accounted for 1% (n.s.)
of the total variance, and 1% (n.s.) of the within-team variance.

Hypothesis 4b investigated the moderating effect of team performance on the relationship between LUSJ and staff member desire for change on the next task. The results of the analyses are summarized in Table 11. Because N=214 for all analyses involving staff member desire to change on the next task due to incomplete responses, the mean for LUSJ in this analysis was 31.62 with a standard deviation of 10.66, and for performance the mean was 30.85 with a standard deviation of 15.55. Variance partitioning revealed that 43% of the total variance in staff member desire to change for the next task was due to between-team variance and 57% was due to within-team variance. This indicates that just under half of the total variance in staff member desire to change for the next task was due to between-team factors such as team performance or situational factors, and just over half of the total variance was due to within-team factors such as LUSJ or individual differences of the team members. Providing partial support for Hypothesis 4b, LUSJ accounted for 3% of the within-team variance and team performance accounted for 15% of the between-team variance in staff members' desire to change for the next task. Staff were more interested in changing the way the next task was performed (e.g., changing the leader, changing the team or working as an individual) when the team either did poorly or when the staff member had lower LUSJ. However, no moderating effect of team performance was found on the effect of LUSJ on desire for change on the next task.

Table 11. Repeated Measures Regression Analysis of LUSJ and Team Performance on Desire to Change for Next Task
Variable b Δ in Total R² Δ in Within R² Δ in Between R² Incremental F (df,df)
LUSJ -.01 .018* .032* 4.61 (1,141)
Team Performance -.02 .063** .146** 11.96 (1,70)
Performance X LUSJ .00 .000 .000 0 (1,140)
Total R² .081** .032 .146**
Note. The higher the score is, the greater the desire to change for the next task. N=214. Between-team variance=.37 (43%); within-team variance=.49 (57%). Total df within-team=142; total df between-team=71. *p<.05. **p<.01.

The effect sizes changed when the order of entry for LUSJ and team performance was reversed. Team performance accounted for 8% (p<.01) of the total variance, and 19% (p<.01) of the between-team variance in staff member desire for change on the next task. LUSJ did not account for any of the variance after controlling for team performance.

Hypothesis 4c investigated the moderating effect of team performance on the relationship between LUSJ and staff member satisfaction with the leader. The results of Hypothesis 4c are summarized in Table 12.

Table 12. Repeated Measures Regression Analysis of LUSJ and Team Performance on Satisfaction With Leader
Variable b Δ in Total R² Δ in Within R² Δ in Between R² Incremental F (df,df)
LUSJ .02 .137** .349** 81.01 (1,151)
Team Performance .02 .170** .280** 28.75 (1,74)
Performance X LUSJ .00 .004 .010 2.39 (1,150)
Total R² .311** .359** .280**
Note. The higher the score is, the greater satisfaction with the leader. N=228. Between-team variance=.35 (61%); within-team variance=.23 (39%). Total df within-team=152; total df between-team=75. *p<.05. **p<.01.

Variance partitioning revealed that 61% of the total variance in staff member satisfaction with their leader was due to between-team variance and 39% was due to within-team variance.
This indicates that just under two-thirds of the total variance in staff member satisfaction with their leader was due to between-team factors such as team performance or situational factors, and just over one-third of the total variance was due to within-team factors such as LUSJ or individual differences of the team members. The results for Hypothesis 4c show that both LUSJ and team performance were positively related to staff member satisfaction with the leader, providing partial support for Hypothesis 4c. LUSJ accounted for 35% of the within-team variance in satisfaction with the leader, and team performance accounted for 28% of the between-team variance. No interaction effect between team performance and LUSJ on satisfaction with the leader was observed. The results for Hypothesis 4c changed when the order of entry for LUSJ and team performance was reversed. Team performance accounted for 30% (p<.01) of the total variance in staff member satisfaction with the leader, and 49% (p<.01) of the between-team variance. LUSJ accounted for an additional 1% (n.s.) of the total variance, and 2% (p<.05) of the within-team variance after controlling for team performance.

Hypothesis 4d investigated the moderating effect of team performance on the relationship between LUSJ and task withdrawal. Variance partitioning revealed that 44% of the total variance in staff members' self-reported task withdrawal was due to between-team variance and 56% was due to within-team variance. This indicates that just under half of the total variance in staff member self-reported task withdrawal was due to between-team factors such as team performance or situational factors, and just over half of the total variance was due to within-team factors such as LUSJ or individual differences of the team members. The results are summarized in Table 13, and indicate that LUSJ was negatively related, and team performance was unrelated, to task withdrawal. LUSJ accounted for 20% of the within-team variance in task withdrawal. In support of Hypothesis 4d, the interaction of team performance and LUSJ was significant, accounting for 4% of the within-team variance in task withdrawal, and is illustrated in Figure 13.

Figure 13. Hypothesis 4d: Interaction Effect of Team Performance and LUSJ on Task Withdrawal

The nature of the interaction is slightly different than that proposed, in that LUSJ was more negatively related to task withdrawal when team performance was higher than when team performance was lower. Task withdrawal was lowest when team performance and LUSJ were both high. Having a higher LUSJ weight was unable to compensate for the higher task withdrawal reported by staff members on more poorly performing teams. Interestingly, the highest task withdrawal was reported by the lower weighted staff members on high performing teams. The results for Hypothesis 4d changed slightly when the order of entry for LUSJ and team performance was reversed. Team performance accounted for 7% (p<.01) of the total variance in task withdrawal, and 16% (p<.01) of the between-team variance when entered into the regression equation first. LUSJ accounted for an additional 5% (p<.01) of the total variance, and 9% of the within-team variance (p<.01).

Table 13.
Repeated Measures Regression Analysis of LUSJ and Team Performance on Task Withdrawal
Variable b Δ in Total R² Δ in Within R² Δ in Between R² Incremental F (df,df)
LUSJ -.01 .110** .196** 36.82 (1,151)
Team Performance -.01 .011 .025 1.90 (1,74)
Performance X LUSJ -.01 .021* .037** 7.32 (1,150)
Total R² .142** .233** .025
Note. The higher the score is, the greater the task withdrawal. N=228. Between-team variance=.18 (44%); within-team variance=.23 (56%). Total df within-team=152; total df between-team=75. *p<.05. **p<.01.

Hypothesis 4e proposed that team performance would moderate the effect of LUSJ on staff member self-efficacy. Variance partitioning revealed that 45% of the total variance in staff members' self-efficacy was due to between-team variance and 55% was due to within-team variance. This indicates that just under half of the total variance in staff member self-efficacy was due to between-team factors such as team performance or situational factors, and just over half of the total variance was due to within-team factors such as LUSJ or individual differences of the team members. The results for this hypothesis are summarized in Table 14, and show that both LUSJ and team performance are positively related to staff member self-efficacy, providing partial support for Hypothesis 4e. LUSJ accounted for 6% of the within-team variance, and team performance accounted for 7% of the between-team variance in self-efficacy. However, team performance and LUSJ did not interact in predicting self-efficacy.

Table 14. Repeated Measures Regression Analysis of LUSJ and Team Performance on Self-Efficacy
Variable b Δ in Total R² Δ in Within R² Δ in Between R² Incremental F (df,df)
LUSJ .01 .033** .060** 9.60 (1,151)
Team Performance .01 .031** .069* 5.55 (1,74)
Performance X LUSJ .00 .011 .020 3.25 (1,150)
Total R² .075** .080** .069*
Note. The higher the score is, the greater the self-efficacy. N=228. Between-team variance=.14 (45%); within-team variance=.18 (55%). Total df within-team=152; total df between-team=75. *p<.05. **p<.01.

When team performance was entered into the regression first, followed by LUSJ, team performance accounted for 6% (p<.01) of the total variance in staff member self-efficacy, and 13% (p<.01) of the between-team variance. LUSJ accounted for less than 1% (n.s.) of the total and 1% (n.s.) of the within-team variance in staff member self-efficacy after team performance was partialled.

Hypothesis 5 investigated the moderating effect of team performance on the relationship between staff members' relative LUSJ and the same five staff reactions tested in Hypothesis 4. To test the hypotheses, relative LUSJ (a within-team variable) was entered first, followed by team performance (a between-team variable) and the interaction. Hypothesis 5a proposed that the relationship between a staff member's relative LUSJ and the staff member's willingness to return would be moderated by team performance. The results for Hypothesis 5a are summarized in Table 15. Team performance was positively related and relative LUSJ was unrelated to willingness to return. Team performance accounted for 9% of the between-team variance in willingness to return. Relative LUSJ and team performance interacted in affecting staff members' willingness to return, accounting for 4% of the within-team variance. This effect is illustrated in Figure 14.
The nature of this effect was such that a positive relationship existed between relative LUSJ and willingness to return among members of higher-performing teams, while the relationship was negative for members of lower-performing teams. Staff members with higher LUSJ on higher-performing teams were the most willing to return, while staff members with lower LUSJ on lower-performing teams were the least willing to return. The results did not change when the order of entry for relative LUSJ and team performance was reversed.

[Figure 14. Hypothesis 5a: Interaction Effect of Team Performance and Relative LUSJ on Willingness to Return. The figure plots predicted willingness to return against relative LUSJ (-1 SD = -13.6, +1 SD = 13.6) for team performance at -1 SD (14.6) and +1 SD (46).]

Table 15.
Repeated Measures Regression Analysis of Relative LUSJ and Team Performance on Willingness to Return

Variable                      b      Δ in Total R2   Δ in Within R2   Δ in Between R2   Incremental F (df,df)
Relative LUSJ                 .01    .005            .008                               1.23 (1,151)
Team Performance              .01    .033**                           .087**            7.03 (1,74)
Performance X Relative LUSJ   .01    .025*           .040*                              6.36 (1,150)
Total R2                             .063**          .048*            .087**

Note. The higher the score is, the greater the willingness to return. N=228. Between-team variance=.48 (38%); within-team variance=.78 (62%). Total df within-team=152; total df between-team=75. *p<.05. **p<.01.

Hypothesis 5b proposed that team performance would moderate the effect of relative LUSJ on staff member desire to change for the next task. Again, because N=214 for analyses involving staff member desire to change for the next task, the mean for relative LUSJ was .03 with a standard deviation of 13.70, and the mean for performance was 30.85 with a standard deviation of 15.55. Results for Hypothesis 5b are summarized in Table 16, and do not support Hypothesis 5b. Although a negative relationship was found between team performance and desire for change on the next task, accounting for 19% of the between-team variance, no relationship was found to exist between relative LUSJ and desire for change on the next task. The interaction between the two variables was not significant. The better the team performed, the less staff members wanted to change the way the next task was performed. The results did not change when the order of entry for relative LUSJ and team performance was reversed.

Table 16.
Repeated Measures Regression Analysis of Relative LUSJ and Team Performance on Desire to Change for Next Task

Variable                      b      Δ in Total R2   Δ in Within R2   Δ in Between R2   Incremental F (df,df)
Relative LUSJ                 .01    .012            .021                               3.04 (1,141)
Team Performance              -.02   .080**                           .185**            15.92 (1,70)
Performance X Relative LUSJ   .00    .003            .005                               .76 (1,140)
Total R2                             .095**          .026             .185**

Note. The higher the score is, the greater the desire to change for the next task. N=214. Between-team variance=.37 (43%); within-team variance=.49 (57%). Total df within-team=142; total df between-team=71. *p<.05. **p<.01.

Hypothesis 5c proposed that team performance would moderate the effect of relative LUSJ on staff member satisfaction with the leader. The results for Hypothesis 5c are summarized in Table 17, and indicate that team performance had a strong positive effect on staff satisfaction with the leader, accounting for 49% of the between-team variance. Contrary to Hypothesis 5c, relative LUSJ and the interaction between relative LUSJ and team performance had no effect on staff satisfaction with the leader.
The results did not change when the order of entry of team performance and relative LUSJ was reversed.

Hypothesis 5d proposed that the effect of relative LUSJ on task withdrawal would be moderated by team performance. The results are summarized in Table 18, and provide partial support for this hypothesis. Relative LUSJ and team performance were both negatively related to task withdrawal, but the interaction of the two was not significant. Relative LUSJ accounted for 8% of the within-team variance, and team performance accounted for 16% of the between-team variance in task withdrawal. The results did not change when the order of entry for relative LUSJ and team performance was reversed.

Table 17.
Repeated Measures Regression Analysis of Relative LUSJ and Team Performance on Satisfaction With Leader

Variable                      b      Δ in Total R2   Δ in Within R2   Δ in Between R2   Incremental F (df,df)
Relative LUSJ                 .00    .000            .000                               0 (1,151)
Team Performance              .03    .298**                           .49**             71.22 (1,74)
Performance X Relative LUSJ   .00    .003            .008                               1.16 (1,150)
Total R2                             .301**          .008             .49**

Note. The higher the score is, the greater satisfaction with the leader. N=228. Between-team variance=.35 (61%); within-team variance=.23 (39%). Total df within-team=152; total df between-team=75. *p<.05. **p<.01.

Table 18.
Repeated Measures Regression Analysis of Relative LUSJ and Team Performance on Task Withdrawal

Variable                      b      Δ in Total R2   Δ in Within R2   Δ in Between R2   Incremental F (df,df)
Relative LUSJ                 -.01   .044**          .078**                             12.85 (1,151)
Team Performance              -.01   .071**                           .162**            14.28 (1,74)
Performance X Relative LUSJ   .00    .006            .011                               1.76 (1,150)
Total R2                             .121**          .089**           .162**

Note. The higher the score is, the greater the task withdrawal. N=228. Between-team variance=.18 (44%); within-team variance=.23 (56%). Total df within-team=152; total df between-team=75. *p<.05. **p<.01.

Hypothesis 5e proposed that team performance would moderate the effect of relative LUSJ on self-efficacy. Table 19 summarizes the results. In partial support of Hypothesis 5e, both team performance and relative LUSJ were positively related to self-efficacy. Relative LUSJ accounted for 3% of the within-team, and team performance accounted for 13% of the between-team variance in self-efficacy. The interaction between relative LUSJ and team performance did not have an effect. The results did not change when the order of entry for relative LUSJ and team performance was reversed.

Table 19.
Repeated Measures Regression Analysis of Relative LUSJ and Team Performance on Self-Efficacy

Variable                      b      Δ in Total R2   Δ in Within R2   Δ in Between R2   Incremental F (df,df)
Relative LUSJ                 .01    .016*           .029*                              4.51 (1,151)
Team Performance              .01    .060**                           .134**            11.44 (1,74)
Performance X Relative LUSJ   .00    .001            .002                               .28 (1,150)
Total R2                             .077**          .031             .134**

Note. The higher the score is, the greater the self-efficacy. N=228. Between-team variance=.14 (45%); within-team variance=.18 (55%). Total df within-team=152; total df between-team=75. *p<.05. **p<.01.

Hypothesis 6 tested the proposition that the accuracy of the leader's weighting strategy would affect the five staff reactions explored in Hypotheses 4 and 5, moderated by team performance. To test these hypotheses, dyadic LUSJ accuracy (a within-team variable) was entered into the regressions first, followed by team performance (a between-team variable) and their interaction.

Hypothesis 6a proposed that the effect of dyadic LUSJ accuracy on staff member willingness to return would be moderated by team performance. Table 20 summarizes the results for this hypothesis.
Team performance was found to be positively related to staff member willingness to return, accounting for 9% of the between-team variance. Although dyadic LUSJ accuracy had no direct effect, the interaction between dyadic LUSJ accuracy and team performance accounted for 7% of the within-team variance in staff member willingness to return. The interaction is illustrated in Figure 15. The nature of the interaction is such that dyadic LUSJ accuracy was positively related to willingness to return in lower-performing teams, but slightly negatively related to willingness to return in higher-performing teams. Receiving a more accurate weight by the leader led to a fairly high willingness to return regardless of team performance, while being inaccurately weighted on a lower-performing team led to the lowest willingness to return.

The results changed slightly when the order of entry for LUSJ accuracy and team performance was reversed. Team performance accounted for 3% (p<.01) of the total variance in staff member willingness to return, and 9% (p<.01) of the between-team variance. Dyadic LUSJ accuracy accounted for less than 1% (n.s.) of the total and 1% (n.s.) of the within-team variance in willingness to return after partialling the effect of team performance.

Table 20.
Repeated Measures Regression Analysis of Dyadic LUSJ Accuracy and Team Performance on Willingness to Return

Variable                      b      Δ in Total R2   Δ in Within R2   Δ in Between R2   Incremental F (df,df)
LUSJ Accuracy                 .00    .001            .002                               0 (1,151)
Team Performance              .01    .035**                           .092**            6.56 (1,74)
Performance X LUSJ Accuracy   -.01   .042**          .068**                             11.47 (1,150)
Total R2                             .078**          .07**            .092**

Note. The higher the score is, the greater the willingness to return. N=228. Between-team variance=.48 (38%); within-team variance=.78 (62%). Total df within-team=152; total df between-team=75. *p<.05. **p<.01.

[Figure 15. Hypothesis 6a: Interaction Effect of Team Performance and Dyadic LUSJ Accuracy on Willingness to Return. The figure plots predicted willingness to return against dyadic LUSJ accuracy (-1 SD = 5.1, +1 SD = 30.4) for team performance at -1 SD (14.6) and +1 SD (46).]

Hypothesis 6b proposed that dyadic LUSJ accuracy's effect on staff member desire to change for the next task would be moderated by team performance. Because N=214 for this analysis, the mean for LUSJ accuracy was 17.48 with a standard deviation of 12.91, and the mean for performance was 30.85 with a standard deviation of 15.55. The results of the analyses are presented in Table 21. The results do not support Hypothesis 6b. Only team performance was negatively related to staff member desire to change for the next task, accounting for 19% of the between-team variance. The results did not change when the order of entry for dyadic LUSJ accuracy and team performance was reversed.

Table 21.
Repeated Measures Regression Analysis of Dyadic LUSJ Accuracy and Team Performance on Desire to Change for Next Task

Variable                      b      Δ in Total R2   Δ in Within R2   Δ in Between R2   Incremental F (df,df)
LUSJ Accuracy                 .00    .000            .000                               0 (1,141)
Team Performance              -.02   .081**                           .188**            16.16 (1,70)
Performance X LUSJ Accuracy   .00    .002            .004                               .49 (1,140)
Total R2                             .083**          .004             .188**

Note. The higher the score is, the greater the desire to change for the next task. N=214. Between-team variance=.37 (43%); within-team variance=.49 (57%). Total df within-team=142; total df between-team=71. *p<.05. **p<.01.
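As these order-of-entry comparisons show, the variance credited to the LUSJ variables versus team performance depends on which is entered first, because the two are correlated. The sketch below illustrates that sequential-entry logic with ordinary least squares; it is a simplified illustration (statsmodels is assumed, and the column names are hypothetical), not the repeated measures regression procedure used here, which partitions R2 separately into within- and between-team components.

```python
import statsmodels.api as sm

def incremental_r2(df, outcome, first, second):
    """Fit `outcome` on `first`, then add `second`, and report the gain in R^2.
    Reversing `first` and `second` shows how shared variance is credited to
    whichever predictor is entered earlier."""
    y = df[outcome]
    m1 = sm.OLS(y, sm.add_constant(df[[first]])).fit()
    m2 = sm.OLS(y, sm.add_constant(df[[first, second]])).fit()
    return {"r2_first": m1.rsquared,
            "r2_both": m2.rsquared,
            "delta_r2_second": m2.rsquared - m1.rsquared}

# Hypothetical usage with one row per staff member:
# incremental_r2(df, "willingness_to_return", "lusj_accuracy", "team_performance")
# incremental_r2(df, "willingness_to_return", "team_performance", "lusj_accuracy")
```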
Hypothesis 6c proposed that the effect of dyadic LUSJ accuracy on staff member satisfaction with the leader would be moderated by team performance. The results are presented in Table 22, and do not provide support for the hypothesis. Only team performance was positively related to staff member satisfaction with the leader, accounting for 50% of the between-team variance. Dyadic LUSJ accuracy and its interaction with team performance had no effect on staff satisfaction with the leader.

Table 22.
Repeated Measures Regression Analysis of Dyadic LUSJ Accuracy and Team Performance on Satisfaction With Leader

Variable                      b      Δ in Total R2   Δ in Within R2   Δ in Between R2   Incremental F (df,df)
LUSJ Accuracy                 .00    .000            .000                               0 (1,151)
Team Performance              .03    .301**                           .495**            72.64 (1,74)
Performance X LUSJ Accuracy   .00    .001            .003                               .38 (1,150)
Total R2                             .302**          .003             .495**

Note. The higher the score is, the greater satisfaction with the leader. N=228. Between-team variance=.35 (61%); within-team variance=.23 (39%). Total df within-team=152; total df between-team=75. *p<.05. **p<.01.

The results changed slightly when the order of entry of team performance and dyadic LUSJ accuracy was reversed. Team performance accounted for 30% (p<.01) of the total and 49% (p<.01) of the between-team variance, and LUSJ accuracy accounted for less than 1% (n.s.) of the total and 1% (n.s.) of the within-team variance.

Hypothesis 6d proposed that team performance would moderate the effect of dyadic LUSJ accuracy on task withdrawal. The results of the analyses are presented in Table 23. Contrary to Hypothesis 6d, only team performance had a statistically significant negative effect on task withdrawal, accounting for 17% of the between-team variance. Neither dyadic LUSJ accuracy nor the interaction of dyadic LUSJ accuracy and team performance had a significant effect.

Table 23.
Repeated Measures Regression Analysis of Dyadic LUSJ Accuracy and Team Performance on Task Withdrawal

Variable                      b      Δ in Total R2   Δ in Within R2   Δ in Between R2   Incremental F (df,df)
LUSJ Accuracy                 -.01   .006            .011                               1.63 (1,151)
Team Performance              -.01   .076*                            .173**            15.50 (1,74)
Performance X LUSJ Accuracy   .01    .010            .018                               2.75 (1,150)
Total R2                             .092**          .029             .173**

Note. The higher the score is, the greater the task withdrawal. N=228. Between-team variance=.18 (44%); within-team variance=.23 (56%). Total df within-team=152; total df between-team=75. *p<.05. **p<.01.

The results changed slightly when the order of entry of dyadic LUSJ accuracy and team performance was reversed. Team performance accounted for 7% (p<.01) of the total variance in task withdrawal, and 16% (p<.01) of the between-team variance. Dyadic LUSJ accuracy did not account for any significant amount of variance in task withdrawal once the effect of team performance had been partialled.

Hypothesis 6e proposed that the effect of dyadic LUSJ accuracy on staff member self-efficacy would be moderated by team performance. The results are presented in Table 24. Providing partial support for Hypothesis 6e, both team performance and its interaction with dyadic LUSJ accuracy had statistically significant effects on self-efficacy. Team performance accounted for 14% of the between-team variance, and its interaction with dyadic LUSJ accuracy accounted for 3% of the within-team variance in self-efficacy. Dyadic LUSJ accuracy did not have a direct effect on self-efficacy. The interaction effect of dyadic LUSJ accuracy and team performance in predicting staff member self-efficacy is illustrated in Figure 16.
[Figure 16. Hypothesis 6e: Interaction Effect of Team Performance and Dyadic LUSJ Accuracy on Self-Efficacy. The figure plots predicted self-efficacy against dyadic LUSJ accuracy (-1 SD = 5.1, +1 SD = 30.4) for team performance at -1 SD (14.6) and +1 SD (46).]

The nature of this effect is such that the relationship between dyadic LUSJ accuracy and self-efficacy was slightly negative for staff members on higher-performing teams, and positive for staff members on lower-performing teams. Less accurately weighted staff members on higher-performing teams reported the highest self-efficacy, and less accurately weighted staff members on lower-performing teams reported the lowest self-efficacy.

Table 24.
Repeated Measures Regression Analysis of Dyadic LUSJ Accuracy and Team Performance on Self-Efficacy

Variable                      b      Δ in Total R2   Δ in Within R2   Δ in Between R2   Incremental F (df,df)
LUSJ Accuracy                 .00    .005            .009                               1.38 (1,151)
Team Performance              .01    .064**                           .143**            12.33 (1,74)
Performance X LUSJ Accuracy   -.01   .015*           .027*                              4.23 (1,150)
Total R2                             .084**          .036             .143**

Note. The higher the score is, the greater the self-efficacy. N=228. Between-team variance=.14 (45%); within-team variance=.18 (55%). Total df within-team=152; total df between-team=75. *p<.05. **p<.01.

The results changed when the order of entry of team performance and dyadic LUSJ accuracy was reversed. Team performance accounted for 6% (p<.01) of the total variance in self-efficacy, and 13% of the between-team variance. Dyadic LUSJ accuracy accounted for an additional 1% (n.s.) of the total variance, and 2% (n.s.) of the within-team variance after controlling for the effect of team performance.

Discussion

The results for Experiment II indicate several findings. The level of the leader's weighting of a staff member's judgments interacted with team performance in affecting only two of the five staff reactions investigated: willingness to return and task withdrawal. The pattern of the moderating effect of team performance on the relationship between LUSJ and task withdrawal indicated that being weighted more heavily by the leader had less influence on task withdrawal than did team performance. In fact, contrary to previous research on the effects of decision influence, in poorer-performing teams being weighted more heavily was negatively related to willingness to return. In teams that performed well, however, LUSJ was related to greater willingness to return and lower task withdrawal. This may be because staff members in lower-performing teams who were being weighted more heavily by the leader felt responsible for the team's poor performance, and this influenced their reactions.

LUSJ had statistically significant direct effects on all five outcomes studied when the effects of team performance were not controlled for. Because LUSJ is significantly correlated with team performance (.53, p<.01), the effects on task withdrawal and satisfaction with the leader were weakened when team performance was entered into the regression first. The effects disappeared for willingness to return, desire to change for the next task and self-efficacy once team performance was controlled for. These results highlight the dominant role of team performance in affecting participants' reactions.
Staff members' relative LUSJ only interacted with team performance in affecting staff member willingness to return, although staff members' relative LUSJ did have a positive direct effect on self-efficacy and a negative direct effect on task withdrawal. The nature of the interaction is such that the relationship between relative LUSJ and willingness to return was positive in higher-performing teams, and negative in lower-performing teams. Receiving a high weight from the leader relative to the other team members led to the greatest willingness to return in higher-performing teams, and the lowest willingness to return in lower-performing teams. Interestingly, the correlation between staff members' self-report of the weight given their judgments by the leader and their actual dyadic LUSJ was .58 (p<.01). The correlation between staff members' self-reported LUSJ and their relative dyadic LUSJ was a slightly higher .71 (p<.01).

The accuracy of a staff member's dyadic LUSJ weight did not directly affect any of the outcomes studied. LUSJ accuracy did interact with team performance in affecting willingness to return and self-efficacy. In both of these interactions the effect was carried by the positive relationship between LUSJ accuracy and the reaction among members of lower-performing teams. In other words, the accuracy of staff members' LUSJ weights was not as positive a factor in influencing willingness to return or self-efficacy in higher-performing teams as it was in lower-performing teams.

Considering each of the five staff reactions in turn: for willingness to return in the future, team performance had a direct effect and interacted with dyadic LUSJ level, relative dyadic LUSJ and dyadic LUSJ accuracy. Once team performance was controlled for, the effects of dyadic LUSJ on willingness to return were greatly reduced. Relative dyadic LUSJ and dyadic LUSJ accuracy did not influence willingness to return directly. The findings suggest that high team performance leads to the highest staff willingness to return. In higher-performing teams, relative LUSJ and dyadic LUSJ have a strong positive relationship with willingness to return, but dyadic LUSJ accuracy is slightly negatively related to willingness to return. In lower-performing teams, dyadic LUSJ accuracy is positively related while dyadic LUSJ and relative LUSJ are negatively related to willingness to return. In other words, staff willingness to return was affected negatively by weighting level, but positively by weighting accuracy in lower-performing teams. In higher-performing teams, staff willingness to return was affected positively by weighting level and only slightly negatively by weighting accuracy.

Only dyadic LUSJ level and team performance had an effect on staff desire to change for the next task. After controlling for team performance, dyadic LUSJ level had no effect. Dyadic LUSJ level, relative dyadic LUSJ and dyadic LUSJ accuracy did not interact with team performance in predicting staff desire to change for the next task. Team performance was the main influence on staff desire to change.

Team performance was also the most important factor influencing staff satisfaction with the leader. The better the team performed, the more satisfied the staff were with the leader. Dyadic LUSJ also had a small positive effect even after controlling for the effects of team performance, but the bulk of the variance in staff satisfaction with the leader was accounted for by team performance.
Dyadic LUSJ, relative LUSJ and dyadic LUSJ accuracy did not interact with team performance in predicting staff satisfaction with the leader.

Staff member task withdrawal was affected by dyadic LUSJ, relative LUSJ, and team performance. Withdrawal was lower when dyadic LUSJ or relative LUSJ was higher, and when the team had higher performance. The interaction of dyadic LUSJ and team performance also contributed to the prediction of task withdrawal. Higher team performance led to generally lower task withdrawal regardless of the level of LUSJ. Staff members given more weight by the leader reported lower task withdrawal than staff members given lower weight regardless of team performance. Weighting accuracy was unrelated to task withdrawal, and did not interact with team performance in predicting staff reports of task withdrawal.

Staff member self-efficacy was positively related to dyadic LUSJ, relative dyadic LUSJ and team performance. Although dyadic LUSJ accuracy did not have a direct effect on self-efficacy, it interacted with team performance to contribute to the prediction of staff self-efficacy. On higher-performing teams, dyadic LUSJ accuracy was negatively related to self-efficacy. On lower-performing teams, dyadic LUSJ accuracy was positively related to self-efficacy.

The contribution of weighting level over team performance in predicting the staff reactions is blurred by the fact that the error manipulation created a correlation of .53 (p < .01) between LUSJ and team performance. To obtain a clearer indication of the role of the leader's staff utilization strategy in predicting staff reactions, effects coding of both team performance and leader utilization of staff members was used to evaluate the effects of the team performance and leader weighting strategy manipulations on the five staff reactions after partialling team performance. The results of the regression analyses are presented in Appendix E. Statistically significant main effects (p < .05) of the weighting manipulation were found for the staff reactions of desire for change on the next task and task withdrawal. The nature of the main effects was such that higher-weighted staff members were more desirous of change on the next task compared to the other team members (p < .01). Higher-weighted staff members were less likely to withdraw from the task (p < .01), and lower-weighted staff members were more likely to withdraw from the task (p < .05) than the other team members.

The interaction of the team performance manipulation with the leader weighting manipulation reached statistical significance (p < .05) only for staff member willingness to return. While the slope of the relationship between team performance and willingness to return was positive for all team members, the slope was less positive for lower-weighted staff members (p < .05) than the other team members. Although the interaction of the team performance manipulation with the leader weighting manipulation did not reach statistical significance for withdrawal from the task, the effect for lower-weighted staff members was statistically significant (p < .05). This effect was such that the relationship between team performance and task withdrawal was negative for all team members, and the slope was less negative for lower-weighted staff members than for anyone else.
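The effects coding referred to above assigns codes that sum to zero across conditions, so each coefficient is read as a deviation of that condition's mean from the grand mean rather than from an arbitrary reference group. A minimal sketch follows; the column names, level labels, and the choice of the moderately weighted member as the reference level are assumptions made for illustration, not the coding actually reported in Appendix E.

```python
import pandas as pd
import statsmodels.formula.api as smf

def effects_coded_model(df: pd.DataFrame, outcome: str):
    """Fit an OLS model with effects-coded manipulations so that each
    coefficient reflects a deviation from the grand mean."""
    d = df.copy()
    # Two-level team performance manipulation: low = -1, high = +1 (assumed labels).
    d["perf_ec"] = d["perf_condition"].map({"low": -1.0, "high": 1.0})
    # Three levels of leader weighting of the staff member: two effect-coded
    # vectors, with the moderately weighted member as the -1 reference on both.
    d["hi_wt"] = d["weight_condition"].map(
        {"higher": 1.0, "lower": 0.0, "moderate": -1.0})
    d["lo_wt"] = d["weight_condition"].map(
        {"higher": 0.0, "lower": 1.0, "moderate": -1.0})
    # Main effects of both manipulations plus their interaction.
    formula = f"{outcome} ~ perf_ec * (hi_wt + lo_wt)"
    return smf.ols(formula, data=d).fit()

# Hypothetical usage:
# model = effects_coded_model(df, "task_withdrawal")
# print(model.summary())
```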
No statistically significant effects were found for the leader weighting manipulation on self-efficacy or satisfaction with the leader, although the team performance by leader weighting interaction approached significance for staff self-efficacy (p < .08). These findings indicate that leader utilization of a staff member does influence staff member willingness to return, desire for change on the next task, and withdrawal after controlling for the effects of team performance. The implications of these findings, and directions for future research, will be discussed in the next chapter.

Chapter 5

IMPLICATIONS, LIMITATIONS, AND AREAS FOR FUTURE RESEARCH

Antecedents of LUSJ

Experiment I found that staff member judgment confidence did not improve leader decision making accuracy when provided alone or in combination with staff cumulative past judgment accuracy. Staff members' judgment confidence was poorly calibrated with judgment accuracy, yet these assessments still influenced leaders' staff weighting strategies. Additionally troublesome, leaders' self-reports of staff member judgment accuracy after the simulation had ended were positively biased by the level of the staff member's judgment confidence. Clearly, the existence of this bias could have negative implications for performance evaluations that rely on the leader's perceptions of staff accuracy. If these performance evaluations are later used as indications of staff past accuracy or ability, the usefulness of past accuracy information could also be compromised.

This study's findings that staff judgment confidence is at best weakly related to judgment accuracy and that confidence serves as a mechanism of influence between staff member and leader replicate the findings of Sniezek and Buckley (1995). Sniezek and Buckley also found that a staff member's confidence in his/her judgments was strongly related to his/her ability to get the leader to choose his/her recommendation regardless of its accuracy.

These findings are also consistent with the literature on leader-member exchange theory, which has found that leaders tend to identify "in-group" staff members to whom to allow greater input and decision latitude on the basis of staff member ability (Graen & Scandura, 1987; Scandura, Graen & Novak, 1986). When given information about the performance of staff members, leaders tended to use this information in weighting future staff member input. Past findings that leaders increase the weight given the judgments of staff members with higher past performance (Croner & Willis, 1967; Hollenbeck et al., 1995; Kelman, 1950) were also replicated. Providing leaders with staff member cumulative past judgment accuracy information was extremely useful in this study. In particular, leaders more readily discounted the judgments of staff members with low past accuracy when this information was fed back than when it was not available, and LUSJ accuracy improved as a result. As long as staff members' accuracy does not change over time, as was the case in this study, providing leaders staff member cumulative past accuracy information can be a positive intervention in improving leader weighting accuracy and encouraging the adoption of an appropriate differential weighting strategy across staff members.

The findings of Brehmer and Hagafors (1986) were also replicated in this study. Brehmer and Hagafors found that leaders tended to adopt a more equal weighting strategy regardless of the distribution of appropriate staff weighting levels.
In this study, where differential staff weighting was the appropriate strategy, leaders not fed back staff member cumulative past accuracy information tended to adopt a much more equal weighting strategy than leaders fed back this information. 124 The conflicting findings in the literature regarding the antecedents of leader utilization of staff judgments thus appear to be resolved once the influence of the availability of staff member judgment confidence and staff member cumulative past performance information is taken into account. In general, leaders tend to utilize an equal weighting strategy in the absence of information that helps them discriminate among their staff members. Leaders tend to use staff member judgment confidence in weighting staff judgments when it is available, leading to greater discrimination among staff members but not contributing to improved weighting accuracy. Leaders also tend to use staff member cumulative past judgment accuracy in more appropriately weighting staff judgments, and are able to make even better use of this information when staff cumulative past performance is fed back to them than when it is not. The finding that providing leaders with staff member cumulative past accuracy information improves weighting accuracy while confidence is unrelated to weighting accuracy suggests that in the interest of more accurate decision making only staff member cumulative past judgment accuracy information should be provided to leaders. The availability of staff member judgment confidence did not improve leaders’ LUSJ accuracy or LUSJ variability alone or when paired with the feedback of staff member cumulative past judgment accuracy information. This finding indicates that staff member judgment confidence information should not be provided to nor sought out by leaders of hierarchical teams with distributed expertise. The finding that judgment confidence appears only to impair leader decision making accuracy is likely due to the small or absent relationship between confidence level and accuracy. This is an effect that has consistently been demonstrated in the 125 literature (Sniezek & Buckley, 1993). Because confidence has been consistently shown to be a mechanism of influence, future research efforts might benefit from a focus on improving individuals’ calibration of their judgment confidence. Should interventions be discovered that improve the calibration of judgment confidence, it is likely that confidence could become beneficial information for leaders interested in improving their decision accuracy. The findings for the effects of confidence on leader decision making despite the poor calibration of confidence to judgment accuracy are not surprising when other literature on confidence is considered. Research on jury decision making, for example, has consistently shown that jurors rely on eyewitness confidence in making a verdict (Moore & Gump, 1995; Penrod & Cutler, 1995), despite the consistently poor calibration of confidence with lineup identification accuracy (Juslin, Olsson & Winman, 1996; Sporer, Penrod, Read & Cutler, 1995). Additional theorizing and research might prove fruitful in the area of identifying other interventions that promote accurate differential weighting of staff on the part of leaders. For example, although leaders’ LUSJ variability and LUSJ accuracy increased over time in this study, interventions might be developed that further speed up the learning curve for leaders learning how to best utilize their staff. 
For the sake of experimental control, leaders in Experiment I did not have access to any decision-related information other than staff member judgments. Future investigations into the effects of providing leaders their own unique decision information or information redundant with staff members’ information should also be investigated. 126 Future investigations into individual differences of leaders may also shed light on leader utilization of staff information. Just as leaders higher in cognitive ability better utilized their staff, different information processing styles might make leaders more or less likely to utilize the information provided by aids such as staff past judgment accuracy or staff judgment confidence. Highly dogmatic individuals, for instance, tend to make decisions quickly and tend not to utilize all available information. Individuals higher in conscientiousness, on the other hand, might prove to be more adept at identifying and utilizing the most beneficial information. Searches for environmental factors that influence the nature of leaders’ weighting strategies are also encouraged. Differing reward structures, for example, might help or inhibit leaders from differentially utilizing their staff. Leaders concerned about staff turnover or decreased commitment to the resulting decision might be less likely to engage in a differential weighting strategy even when it would lead to higher performance. If the consequences for poor team decision accuracy were severe, on the other hand, leaders might be more willing to differentially weight staff judgments for the sake of improved accuracy. Consequences of LUSJ Experiment 11 found that team performance consistently had a strong effect on staff member reactions to the team. If the team performed well, staff members tended to have more favorable reactions to the team than if the team performed more poorly regardless of how the leader utilized their judgments in making the team’s decisions. This finding is consistent with past research on the effects of workgroup performance on group member satisfaction (Zeffane, 1994) and turnover (Jackofsky (1984). 127 The strong team performance effect may be attributable to the fact that teams competed for bonus money that was dependent solely on team performance. The fact that the teams were assembled for three hours solely for the purposes of this study may have served to intensify the effects of team performance. The fact that effects were still found for different types of leader utilization of staff judgments despite this limitation indicates that the hypothesized effects for LUSJ might be even greater in more natural settings. The finding that the pattern of results for the effect of dyadic LUSJ on staff reactions was opposite in higher relative to lower-perfonning teams for staff member willingness to return and greatly reduced for task withdrawal was surprising. This indicates that past research on the positive effects of decision influence and participation (e. g., Bass, 1981; Drake & Mitchell, 1977; Locke & Schweiger, 1979) may generalize to higher-performing, but not to lower-performing teams in which there is a single correct decision. In a context with such heavy team reward implications, it is logical that team performance tends to drive staff reactions rather than the level of the leader’s utilization of their judgments. 
If team rewards are made less salient, or if individual consequences such as financial rewards or promotion are present, it is possible that staff members will react more strongly to how their input is utilized by the leader. Future research should continue to explore the moderating role of team performance and the presence of a single correct answer in the relationship between influence level and staff reactions. For example, if the task has a correct answer, team performance is likely to play a much stronger role in affecting staff reactions than if no accuracy assessment can be made. 128 Future theorizing and research should try to identify other causes of staff reactions to LUSJ, in addition to investigating methods of managing more unfavorable staff reactions to lower LUSJ weights. For the task used in this study, lower staff accuracy for some team members was part of the design. The ability of staff members to perform well was strongly determined by the design of the task and had much less to do with a staff member’s ability. Yet the more negative reactions of the less-accurate and lower- weighted staff indicate that leaders do indeed face a dilemma when differential staff weighting is required for higher team performance. The fact that the team’s function is to make accurate decisions and that team performance consistently accounted for greater variance in staff reactions than any type of leader weighting of the staff member indicates that team performance should be the top priority of leaders. Again, replications of these findings in more natural settings are needed before any conclusions can be drawn. Unexpectedly, dyadic LUSJ accuracy was a positive factor on lower-performing teams and a slightly negative factor on higher-performing teams in influencing self- effrcacy and staff willingness to return. The more accurately the leader weighted the judgments of a staff member on lower-performing teams, the more willing the staff member was to return and the greater the staff member’s self-efficacy. Taken together, these findings highlight the existence of a real dilemma for leaders of this type of decision making team. The strongest factor affecting staff reactions in this study was team performance. Staff in higher-performing teams reported more favorable reactions than did staff in lower-performing teams. When differential utilization of staff judgments is required for high team performance, as was the case in this task, some staff members must receive a higher or a lower LUSJ weight than other 129 staff members if the team is to perform well. In the higher-performing teams, however, receiving a lower LUSJ weight led to less favorable reactions than did receiving a higher weight. As discussed earlier, the withdrawal or turnover of even lesser-accurate team members can threaten team viability, particularly when it is the nature of the task cues that creates the differential staff validity. Limitations These two studies were not without limitations. The participants were college undergraduates, did not know each other when they performed the simulation, and knew that the study would last no more than 3 hours. Investigations of teams that work together longer and for whom the consequences of performing well, or even of being heavily weighted by the leader, have greater implications for team members need to be performed. Additionally, the military-theme computer simulation was unfamiliar to participants. 
Future research should attempt to replicate these findings with teams interacting on more familiar tasks, and to teams in which the leader has unique decision- related information. Also, team viability is a complex phenomenon. Only five different staff reactions were investigated in this study. Some of the staff reactions that were investigated were also found to be moderately intercorrelated, and the reliability of the staff member desire for change scale was lower than would be hoped. Future investigations should incorporate a better measure of this construct, in addition to exploring other types of staff reactions and team viability outcomes. 130 Research of this type is often criticized because of the artificiality of the laboratory setting. In fact, the laboratory is an ideal environment for testing theory pertaining to groups and teams. As stated by Driskell and Salas (1992): The primary criterion for designing an empirical setting to test theory is that it provide a clear and robust test of that theory, not that it resemble the outside world. However, when we attempt to apply the theory to a real-world setting, then the realism of the setting in which the application or intervention is conducted is critical. (p. 110) The more the research setting contains only those variables relevant to the theory being tested, and excludes extraneous variables (and thus the greater the artificiality of the research setting), the better the setting provides a clear test of the hypothesis (Webster & Kervin, 1971; Mook, 1983). The artificiality offered by laboratory settings is thus a benefit, not a liability when it comes to studying complex phenomena such as teams. The fact that these results were found suggests that much remains to be learned about managing this dilemma that faces leaders of hierarchical teams with distributed expertise. Given the prevalence of these types of teams in organizations today, and the relative paucity of knowledge that exists concerning the promotion of their effectiveness, it is hoped that the results of these studies stimulate future research in this important area. FOOTNOTES 1 For the sake of consistency, the term judgment will be used to refer to the interpretation of cues by either the leader or a staff member, while the term decision will reflect the final decision ultimately registered by the team leader upon which team decision accuracy or performance is based. Thus, a leader's initial judgment of the situation may differ from the decision the leader ultimately registers for the team. 131 LIST OF REFERENCES Aiken, L. S., & West, S. G. (1991). Multiple regression: Testing and interpreting interactions. Newbury Park: Sage. American Psychological Association (1992). Ethical principles and codes of conduct. American Psychologist. 47. 1597-1611. American College Testing Program (1989). Preliminary technical manual for the enhanced ACT assessment. Iowa City, IA: ACT Publications. Babad, E. Y., Inbar, J ., & Rosenthal, R. (1982). Pygmalion, Galatea, and the Golem: investigations of biased and unbiased teachers. Journal of Educatiorfl Psychology. 74. 459-474. Baker, CV. (1991). The effects of inter-positional uncertainty and workload on team coordination skills and task performance. Doctoral dissertation, University of South Florida, Department of Psychology, Tampa, FL. Bales, R. F., & Cohen, S. P. (1979). SYMLOG: A system for the multiple level observation of groups. New York: Free Press. Bass, B. M. (1981). Stogdill's @dbook of leadership (Rev. ed.). 
New York: Free Press. Bimbaum, M. H., & Stegner, S. E. (1979). Source credibility in social judgment: Bias, expertise, and the judge's point of view. Journ_al of Personality and Social Psychology. 37. 48-74. Bimbaum, M. H., Wong, K, & Wong, L. (1976). Combining information from sources that vary in credibility. Memory and Cognition. 4. 330-336. Blake, R. R., & Mouton, J. S. (1964). The managerial gg'd. Houston: Gulf Publishing. Bottger, P. C., & Yetton, P. W. (1988). An integration of process and decision scheme explanations of group problem solving performance. Organizational Behavior and Humjan Decision Performgce. 42. 234-249. 132 133 Brehmer, B. (1973). Single-cue probability learning as a function of the sign and magnitude of the correlation between cue and criterion. Organizational Behavior and Human Decision Processes. 9. 377-395. Brehmer, B., & Hagafors, R. (1986). Use of experts in complex decision making: A paradigm for the study of staff work. Organizational Beh_avior and Hum_an Decision Processes. 38. 181-195. Brunswick, E. (1943). Organismic achievement and environmental probability. Psychological Review. 47. 69-78. Brunswick, E. (1955). Representative design and probabilistic theory in a functional psychology. Psychological Review. 50. 255-272. Brunswick, E. (1956). Perception and representative design of expgriments. Berkeley: University of California Press. Buckley, T., & Sniezek, J. A. (1990). Confidence as influence in a no feedback choice task. Annual meeting of the Judgment and Decision Making Society, New Orleans. Cashman, J ., Dansereau, F. Jr., Graen, G., & Haga, W. J. (1976). Organizational understructure and leadership: A longitudinal investigation of the managerial role-making process. Organizational Behavior gd Humfl PerformanceLIS. 278-296. Cohen, J. (1988). Statistical power afllysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates. Cohen, J ., & Cohen, P. (1983). Applied multiple reggession/correlation analysis for the beh_avioral sciences. Hillsdale, NJ: Lawrence Erlbaum Associates. Croner, M. D., & Willis, R. H. (1967). Perceived differences in task competence and asymmetry of dyadic influence. Journ_al of Abnormi and Social Psychology. 62. 705-708. Danserau, F., Cashman, J ., & Graen, G. (1973). Instrurnentality theory and equity theory as complementary approaches in predicting the relationship of leadership and turnover among managers. Organizational Beh_avior and Hum Perform_ance. 10. 184- 200. Danserau, F. Jr., Graen, G., & Haga, W. J. (1975). A vertical dyad linkage approach to leadership within formal organizations: A longitudinal investigation of the role making process. Organizational Behavio; and Human Perform_ance. 13. 46-78. 134 Davis, J. H. (1992). Some compelling intuitions about group consensus decisions, theoretical and empirical research, and interpersonal aggregation phenomena: Select examples, 1950-1990. Organizational Behavior and Hum_an Decision Processes. 52. 3- 38. Deane, D. H., Hammond, K. R., & Summers, D. A. (1972). Acquisition and application of knowledge in complex inference tasks. J ourn_al of Experimental Psychology, 92, 20-26. Deluga, R. J ., & Perry, J. T. (1991). The relationship of subordinate upward influencing behavior, satisfaction and perceived superior effectiveness with leader- member exchanges. Jom of Occupational Psychology, 64, 239-252. Deutsch, M. & Gerard, H. B. (1955). A study of normative and informational social influences upon individual judgment. 
Journal of Abnormal and Social Psychology. 51, 629-636. Dewhirst, D., Metts, V., & Ladd, R. T. (1987). Exploring the delegation decision: Managerial responses to multiple contingencies. Paper presented at the Academy of Management Meetings, New Orleans. Dillon, W. R., & Goldstein, M. (1984). Multivariate apalysis: Methods and applications. New York: John Wiley & Sons. Drake, B., & Mitchell, T. (1977). The effects of vertical and horizontal power on individual motivation and satisfaction. Academy of Management Journal, 20, 573-591. Driskell, J. E., & Salas, E. (1992). Can you study real teams in contrived settings? The value of small group research to understanding teams. In R.W. Swezey and E. Salas, (Eds), Teams: Their Training and Performance (pp. 101-124). Norwood, NJ: Ablex. Duffy, L. (1993). Team decision-making biases: An inforrnation-processing perspective. In G.A. Klein, J. Orasanu, R. Calderwood, and CE. Zsambok (Eds.), Decision Making in Action: Models and Methods (pp. 346-359). Norwood, NJ: Ablex. Dunning, D., Griffin, D. W., Milojkovic, J. D., & Ross, L. (1990). The overconfidence effect in social prediction. Journg of Person_ality afl Soclrl Psychology. 3, 568-581. Dyer, J. L. (1984). Team research and team training: A state-of-the-art review. In F.A. Muckler (Ed.), Human factors review: 1984 (pp. 285-323). Santa Monica, CA: Human Factors Society. F ischhoff, B., Slovic, P., & Lichtenstein, S. (1977). Knowing with certainty: The appropriateness of extreme confidence. Journ_al of Experimental Social Psychology: Human Perception and Performance 3 552-564. 135 F oushee, H. C. (1984). Dyads and triads at 30,000 feet: Factors affecting group process and aircrew performance. Amerigan Psychologist, 39, 885-893. Gigerenzer, G. (1991). Probabilistic mental models: A Brunswikian theory of confidence. Psychological Review. 98. 506-528. Gilliland, SW. (1992). The perceived faimess of selection systems: An organizational justice perspective. Doctoral dissertation, Michigan State University, Department of Psychology, East Lansing, MI. Gottfredson, L.S., & Crouse, J. (1986). Validity versus utility of mental tests: Example of the SAT. Jourpal of Vocational Behavior 29 363-378. Graen, G., & Cashman, J. (1975). A role-making model of leadership in formal organizations: A developmental approach. In J. Hunt & L.L. Larsen (Eds.), Leadership Frontiers (pp. 143-165). Kent, OH: Kent State Univ. Press. Graen, G., & Ginsburgh, S. (1977). Job resignation as a function of role orientation and leader acceptance: A longitudinal investigation of organizational assimilation. Organizational Beh_avior and Hum_an Performance. 19. 1-17. Graen, G., Liden, R. C., & Hoel, W. (1982). Role of leadership in the employee withdrawal process. Journal of Applied Psychology, 67, 868-872. Graen, G., & Scandura, T. A. (1987). Toward a psychology of dyadic organizing. In B. Staw & L.L. Cummings (Eds.), Research in organizational behavior, vol. 9 (pp. 175-208). Greenwich, CT: JAl Press. Gully, S. M. (1994). Repeated measures regression analysis: A clarification with illustrative examples. Paper presented at the 9th Annual Conference of the Society for Industrial/Organizational Psychology, Nashville, TN. Hackman, J. R. (1987). The design of work teams. In J .W. Lorsch (Ed.), Handbook of organizational behavior (pp. 315-342). Englewood Cliffs, NJ: Prentice-Hall. Hackman, J. R., & Oldham, G. R. (1980). Work redesigg New York: Addison- Wesley. Heller, F. A. (1992). Decision-making and the utilization of competence. In F .A. 
Heller (Ed.), Decision-making and leadership. New York: Cambridge University Press. Heller, F. A., & Yukl, G. (1969). Participation, managerial decision making, and situational variables. Organizational Beflwior and Human Performance 4 227-241. 136 Hollenbeck, J. R., Ilgen, D. R. & Sego, D. J. (1994). Repeated measures regression and mediational tests: Enhancing the power of leadership research. Leadership Quarterly, 5, 3-23. Hollenbeck, J. R., Ilgen, D. R., Sego, D. J., Hedlund, J., Major, D. A. & Phillips, J. (1995). Multilevel theory of team decision making: Decision performance in teams incorporating distributed expertise. Journal of Applied Paychology, 80, 292-316. Hollenbeck, J. R., Sego, D. J., Ilgen, D. R., & Major, D. A., Hedlund, J., & Phillips, J. (1995). Team decision making accuracy under difficult conditions: Construct validation of potential manipulations using the TIDE2 simulation. In M. T. Brannick, E. Salas, & C. Prince (Eds), Team performance assessment and measurement: Theory. research and applications. Hillsdale, NJ: Erlbaum. Hunter, J .E. (1986). Cognitive ability, cognitive aptitudes, job knowledge and job performance. Journal of Vocational Behavior. 29. 340-362. Ilgen, D. R. (1986). Laboratory research: A question of when, not if. In E. A. Locke (Ed.), Generalizing from laboratog to field settings (pp. 257-267). Lexington, MA: Lexington Books. Jackofsky, E. F. (1984). Turnover and job performance: An integrated process model. Academy of Mflagement Review. 9. 74-83. Jensen, AR. (1986). g: Artifact or reality? Journal of Vocational Behavior 29 301-331. Juslin, P., Olsson, N., & Winman, A. (1996). Calibration and diagnosticity of confidence in eyewitness identification: Comments on what can be inferred from the low confidence-accuracy correlation. Journal of Experimental Psychology: Learning, Memory. & Cogpition, 22, 1304-1316. Kahneman, D., Slovic, P., & Tversky, A. (1982). Judgment under uncertainty: Heuristics an; biases. Cambridge, MA: Cambridge University Press. Kanfer, R., & Ackerman, P. L. (1989). Motivation and cognitive abilities: An integrative/aptitude-treatment interaction approach to skill acquisition [Monograph]. Journal of Applied Psychology, 74, 657-690. Katerberg, R., & Hom, P. W. (1981). Effects of within-group and between-groups variation in leadership. Jourpal of Applied Psychology. 66. 218-223. Kelman, H. C. (1950). Effects of success and failure on "suggestibility" in the autokinetic situation. Journal of Abnormal and Social PsychologY. 45. 267-285. 137 Kim, W. C., & Mauborgne, R. A. (1993). Procedural justice, attitudes, and subsidiary top management compliance. Apademy of Management J ourn_a_lé 6. 502-526. Korsgaard, M. A., Schweiger, D. M., & Sapienza, H. J. (1995). Building commitment, attachment, and trust in strategic decision-making teams: The role of procedural justice. Academy of Mgagement Jomal. 38. 60-84. Libby, R., Trotrnan, K. T., & Zirnmer, I. (1987). Member variation, recognition of expertise, and group performance. Journal of Applied Psychology. 72. 81-87. Lichtenstein, S., & F ischhoff, B. (1977). Do those who know more also know more about what they know? Organizational Behavior and Human Performance, 20, 159- 183. Lichtenstein, S., F ischhoff, B., & Phillips, L. D. (1982). Calibration of probabilities: The state of the art to 1980. In D. Kahneman, P. Slovic, & A. Tversky (Eds), Judgment under uncertainty: Heuristics and biaiesa Cambridge, MA: Cambridge University Press. Locke, E. A., Frederick, E., Lee, C., & Bobko, P. (1984). 
Effect of self-efficacy, goals, and task strategies on task performance. Journal of Applied Psychology. 69. 241- 251. Locke, E. A., & Schweiger, D. M. (1979). participation in decision making: One more look. In B. M. Shaw (Ed.), Research in organizational behavior vol. 1 (pp. 265- 339). Greenwich, CT: JAI Press. March, J. G. (1956). Influence measurement in experimental and semi- experimental groups. Sociometry. 19. 260-271. Mausner, B. (1954a). The effect of prior reinforcement on the interaction of observer pairs. Journal of Abnormal Social Psychology. 49. 65-68. Mausner, B. (1954b). The effect of one partner's success or failure in a relevant task on the interaction of observer pairs. Journal of Abnormal and Social Psychology. 49. 557-560. McGrath, J. E. (1976). Stress and behavior in organizations. In M. D. Dunnette (Ed.), Handboon inclasfid and organizational psychology (pp. 1310-1367). Chicago: Rand McNally. Michaelsen, L. K., Watson, W. E., and Black, R. H. (1989). A realistic test of individual versus group consensus decision making. Journal of Applied Psychology.7$ 834-839. 138 Miller, K. 1., & Monge, P. R. (1986). Participation, satisfaction, and productivity: A meta-analytic review. Academy of Management Journal, 29, 727-753. Mook, D. G. (1983). In defense of external invalidity. American Psychologist. 38. 379-3 87. Moore, P. J ., & Gump, B. B. (1995). Information integration in juror decision making. Journal of Applied Social Psychology, 25, 2158-2179. Oz, 8., & Eden, D. (1994). Restraining the golem: Boosting performance by changing the interpretation of low scores. Journal of Applied Psychology, 79, 744-754. Paese, P. W., & Sniezek, J. A. (1991). Influences on the appropriateness of confidence in judgment: Practice, effort, information, and decision-making. Organizational Behavior and Human Deci§ion Processes. 48. 100-130. Penrod, S., & Cutler, B. (1995). Witness confidence and witness accuracy: Assessing their forensic relation. [Special issue: Witness memory and law]. Psychology, Public Policy. & Law. 1. 817-845. Potter, E. H., & Fiedler, F. E. (1981). The utilization of staff member intelligence and experience under high and low stress. Academy of Maaagement Journal. 24. 361- 376. Rosenthal, R. (1985). From unconscious experimenter bias to teacher expectancy effects. In J.B. Dusek, V.C. Hall, & W.J. Meyer (Eds), Teacher expectationa (pp. 37-65). Hillsdale, NJ: Erlbaum. Rosenthal, R. (1991). Teacher expectancy effects: A brief update 25 years after the Pygmalion experiment. Journal of Research in Education 1 3-12. Rosenthal, R., & Jacobson, L. (1968). Pygmalion in the classroom. New York: Holt, Rinehart, and Winston. Salas, E., Dickinson, T. L., Converse, S. A., & Tannenbaum, S. l. (1992). Toward an understanding of team performance and training. In R. W. Swezey & E. Salas (Eds), Teams: Their training and performance (pp. 3-29). Norwood, New Jersey: Ablex Publishing Corporation. Scandura, T. A., Graen, G. B. & Novak, M. A. (1986). When managers decide not to decide autocratically: An investigation of leader-member exchange and decision influence. Journal of Applied Psychology, 71, 579-584. 139 Seashore, S.E., Lawler, E.E., Mirvis, P., and Cammann, C. (eds) (1982). Observing and measuring organizational change: A guide to field practice. New York: Wiley. Simon, H. A. (1978). Rationality as process and as product of thought. American Economic Review. 68. 1-16. Slovic, P., F ischhoff, B., & Lichtenstein, S. (1977). Behavior decision theory. 
Annual Review of Psychology, 28, 1-39. Slovic, P. & Lichtenstein, S. (1971). Comparison of Bayesian and regression approaches to the study of information processing in judgment. Organizational Behavior and Hum_an Performance 6 649-744. Sniezek, J. A. (1992). Groups under uncertainty: An examination of confidence in group decision making. Special issue of Organizational Beh_avior and Human Decision Processes. 52. 124-155. Sniezek, J. A., & Buckley, T. (1995). Cueing and cognitive conflict in judge- advisor decision making. Organizational Behavior and Humg Decision Processes. 62. 159-174. Sniezek, J. A., & Buckley, T. (1993). Becoming more or less uncertain. In N. J. Castellan, Jr. (Ed), Individual and group decision making: Current issues. (pp. 203-218). Hillsdale, NJ: Lawrence Erlbaum Associates. Sniezek, J. A., & Naylor, J. C. (1978). Cue measurement scale and functional hypothesis testing in one probability learning. Organizational Behavior and Human Decision Processes. 22. 366-374. Sniezek, J. A., & Paese, P. W., & Switzer, F. S. (1990). The effect of choosing on confidence in choice. Organizational Behavior and Human Decision Processes, 46, 264- 282. Sniezek, J. A., & Reeves, A. P. (1986). Feature cues in probability learning: Data base information and judgment. Organizational Beh_avior and Hum_an Decision Processes. 3_7_, 297-315. Sporer, S. L., Penrod, 8., Read, D., & Cutler, B. (1995). Choosing, confidence, and accuracy: A meta-analysis of the confidence-accuracy relation in eyewitness identification studies. Psychological Bulletin. 118. 315-327. Steiner, I. D. (1972). Group process and productivig. New York: Academic Press. 140 Sundstrom, E., DeMeuse, K. P., & Futrell, D. (1990). Work teams: Applications and effectiveness. American Psychologist, 45, 120-133. Thompson, W. C. (1993). Research on jury decision making: The state of the science. In N. J. Castellan, Jr. (Ed), Individual and grog) decision making: Current issues, (pp. 203-218). Hillsdale, NJ: Lawrence Erlbaum Associates. Tindale, R. S. (1993). Decision errors made by individuals and groups. In N. J. Castellan, Jr. (Ed), Individual and group decision making: Current issues, (pp. 203-218). Hillsdale, NJ: Lawrence Erlbaum Associates. Tucker, L. R. (1964). A suggested alternative formulation in the developments by Hursch, Hammond, and Hursch, and by Hammond, Hursch, and Todd. Psychological Review 71 528-530. Vallone, R. P., Griffin, D. W., Lin, 8., & Ross, L. (1990). Overconfident prediction of future actions and outcomes by self and others. Journal of Personality a_n_d Social Psycholgg. 58. 582-592. Vecchio, R. P. (1982). A further test of leadership effects due to between-group and within-group variation. Journal of Applied Psycholggy, 67. 200-208. Vroom, V. A. (1964). Work and motivation. New York: Wiley. Vroom, V. H., & Yetton, P. W. (1973). Leadership and decision making. Pittsburgh: University of Pittsburgh Press. Wagner, J. A. III (1994). Participation’s effect on performance and satisfaction: A reconsideration of research evidence. Academy of Management Review. 19. 312-330. Wagner, J. A. III, & Gooding, R. Z. (1987a). Effects of societal trends on participation research. Administrative Science Quarterly, 32, 241-262. Wagner, J. A. III, & Gooding, R. Z. (1987b). Shared influence and organizational behavior: A meta-analysis of situational variables expected to moderate participation- outcome relationships. Academy of Management Journal. 30. 524-541. Wakabayshi, M., Minami, T., Sano, K., Graen, G., & Novak, M. (1980). 
Webster, M., & Kervin, J. B. (1971). Artificiality in experimental sociology. Canadian Review of Sociology and Anthropology, 8, 263-272.

Wood, M. T. (1973). Power relationships and group decision making in organizations. Psychological Bulletin, 79, 280-293.

Yukl, G. A. (1989). Leadership in organizations. Englewood Cliffs, NJ: Prentice Hall.

Zeffane, R. M. (1994). Correlates of job satisfaction and their implications for work redesign: A focus on the Australian telecommunications industry. Public Personnel Management, 23, 61-75.

APPENDICES

APPENDIX A

TESTS OF LUSJ INDICES

A Monte Carlo simulation involving three staff members' judgments and a leader's decisions over 100 decisions was performed under six different configurations of leader staff weighting strategies. In all scenarios the intercorrelations of staff judgments were kept low. The four LUSJ constructs (Dyadic LUSJ, Accuracy of Dyadic LUSJ, Relative Dyadic LUSJ, and Dyadic LUSJ Variability) are examined below in each of these six scenarios. Dyadic LUSJ Accuracy was calculated assuming a correct-judgment b weight of .2 for staff member A, .4 for staff member B, and .6 for staff member C. A computational sketch of how indices of this kind can be reproduced follows the six scenarios.

Scenario 1:
A's judgments are given high weight by the leader
B's judgments are given moderate weight by the leader
C's judgments are given low weight by the leader

Correlation Matrix:
                   A      B      C
B                 .01
C                -.07   -.01
Leader Decision   .70    .48    .21

LUSJ Indices:
Staff     Dyadic   Dyadic LUSJ   Relative       Dyadic LUSJ
Member    LUSJ     Accuracy      Dyadic LUSJ    Variability
A          .68      .48           .22            .14
B          .46      .06           .00            .14
C          .25     -.35          -.21            .14

Scenario 2:
A's judgments are given low weight by the leader
B's judgments are given high weight by the leader
C's judgments are given high weight by the leader

Correlation Matrix:
                   A      B      C
B                -.01
C                -.02    .00
Leader Decision   .21    .63    .58

LUSJ Indices:
Staff     Dyadic   Dyadic LUSJ   Relative       Dyadic LUSJ
Member    LUSJ     Accuracy      Dyadic LUSJ    Variability
A          .20      .00          -.27            .18
B          .63      .23           .16            .18
C          .59     -.01           .12            .18

Scenario 3:
A's judgments are given high weight by the leader
B's judgments are given low weight by the leader
C's judgments are given low weight by the leader

Correlation Matrix:
                   A      B      C
B                -.02
C                -.02    .03
Leader Decision   .72    .17    .21

LUSJ Indices:
Staff     Dyadic   Dyadic LUSJ   Relative       Dyadic LUSJ
Member    LUSJ     Accuracy      Dyadic LUSJ    Variability
A          .76      .56           .37            .25
B          .17     -.23          -.22            .25
C          .23     -.37          -.16            .25

Scenario 4:
A's judgments are given moderate weight by the leader
B's judgments are given moderate weight by the leader
C's judgments are given moderate weight by the leader

Correlation Matrix:
                   A      B      C
B                -.01
C                -.04   -.01
Leader Decision   .33    .36    .31

LUSJ Indices:
Staff     Dyadic   Dyadic LUSJ   Relative       Dyadic LUSJ
Member    LUSJ     Accuracy      Dyadic LUSJ    Variability
A          .34      .14           .00            .01
B          .35     -.05           .01            .01
C          .32     -.28          -.02            .01

Scenario 5:
A's judgments are given high weight by the leader
B's judgments are given high weight by the leader
C's judgments are given high weight by the leader

Correlation Matrix:
                   A      B      C
B                -.01
C                -.05   -.04
Leader Decision   .48    .53    .48

LUSJ Indices:
Staff     Dyadic   Dyadic LUSJ   Relative       Dyadic LUSJ
Member    LUSJ     Accuracy      Dyadic LUSJ    Variability
A          .50      .30          -.01            .01
B          .53      .13           .02            .01
C          .51     -.09           .00            .01

Scenario 6:
A's judgments are given low weight by the leader
B's judgments are given low weight by the leader
C's judgments are given low weight by the leader

Correlation Matrix:
                   A      B      C
B                 .06
C                -.04   -.04
Leader Decision   .12    .14    .15

LUSJ Indices:
Staff     Dyadic   Dyadic LUSJ   Relative       Dyadic LUSJ
Member    LUSJ     Accuracy      Dyadic LUSJ    Variability
A          .12     -.08          -.02            .02
B          .15     -.25           .01            .02
C          .16     -.44           .02            .02
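The original simulation code is not part of the dissertation; the following minimal Python sketch shows how indices of this kind can be computed. Its formulas are assumptions inferred from the tables above: Dyadic LUSJ is treated, approximately, as the correlation between a staff member's judgments and the leader's decisions; Dyadic LUSJ Accuracy as that correlation minus the member's assumed correct-judgment weight (.2, .4, .6); Relative Dyadic LUSJ as the deviation of the member's Dyadic LUSJ from the team mean; and Dyadic LUSJ Variability as the mean absolute deviation of the three Dyadic LUSJ values. The leader weights shown correspond roughly to Scenario 1, and all names and values are placeholders.

# Minimal sketch of the Appendix A Monte Carlo simulation (assumptions noted above).
import numpy as np

rng = np.random.default_rng(0)
n_decisions = 100

# Three staff members' judgments: drawn independently, so their intercorrelations stay low.
judgments = rng.normal(size=(n_decisions, 3))                 # columns: A, B, C

# Scenario 1-style leader policy: A weighted highly, B moderately, C lightly, plus noise.
leader_weights = np.array([0.7, 0.4, 0.2])
leader_decision = judgments @ leader_weights + rng.normal(scale=0.5, size=n_decisions)

# Assumed "correct" weights used for the accuracy index (.2, .4, .6 for A, B, C).
correct_weights = np.array([0.2, 0.4, 0.6])

# Dyadic LUSJ: correlation of each staff member's judgments with the leader's decisions.
dyadic_lusj = np.array([np.corrcoef(judgments[:, i], leader_decision)[0, 1] for i in range(3)])

# Dyadic LUSJ Accuracy: utilization relative to how much the member should have been weighted.
accuracy = dyadic_lusj - correct_weights

# Relative Dyadic LUSJ: utilization relative to the other staff members (deviation from team mean).
relative = dyadic_lusj - dyadic_lusj.mean()

# Dyadic LUSJ Variability: mean absolute deviation of the three Dyadic LUSJ values (one per team).
variability = np.abs(dyadic_lusj - dyadic_lusj.mean()).mean()

for name, d, a, r in zip("ABC", dyadic_lusj, accuracy, relative):
    print(f"{name}: LUSJ={d:.2f}  accuracy={a:.2f}  relative={r:.2f}  variability={variability:.2f}")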
APPENDIX B

MATERIALS USED IN BOTH EXPERIMENTS I AND II: CONSENT FORM, DEMOGRAPHIC QUESTIONNAIRE, GENERAL TRAINING MANUAL AND TRAINING SCRIPT

EXPERIMENTS I AND II
CONSENT FORM

This set of two studies was designed to investigate team decision making effectiveness. If you choose to participate in this study, you will be asked to learn a computer-simulated aircraft-identification task, operate the simulation task as part of a four-person team, and complete a series of questionnaire items. Also, by signing below you will indicate that you choose to participate in this study and that you authorize the researchers to have access to your SAT/ACT scores. Your participation in the simulation should take about three hours. You will receive course credit in exchange for your participation in this study.

Your participation in this research is completely voluntary. You are free to decline to answer any questions or to terminate your participation at any time. Your participation in this study will be totally confidential. Your data will be included in a summary report along with the data from others. The report will not include any information that will allow anyone to identify any of your individual responses. If you have any questions or concerns regarding this study, you may contact Jean Phillips in the Management Department at 353-7116.

Participant Statement

I agree to participate in the Team Decision Making Study. By signing below I authorize the researchers to use my SAT/ACT scores, and I recognize that I must provide my student number (PID) to do this. It is my understanding that these materials will be strictly confidential and will not be seen by anyone other than the research team. I consent to having these materials used for research purposes. I also understand that I will learn to operate a computer simulation and perform the simulation with other individuals, and that I will complete a series of questionnaires before and after the simulation. I understand that the top teams in each condition will receive cash prizes on the following basis: First place $20/person; Second place $15/person; Third place $10/person. I understand that my participation is voluntary, that I may discontinue participation at any time without penalty, that all of my individual responses will be kept strictly confidential, and that I will not be identified in any report of this study.

Printed Name                         Date

Signature                            Student Number (PID)

Course/TA Section #

EXPERIMENTS I AND II
DEMOGRAPHIC QUESTIONNAIRE

PARTICIPANT #:
1. DATE:
2. NAME:
3. STUDENT NUMBER:
4. MANAGEMENT 302 TA:
5. SEX (CIRCLE ONE): MALE FEMALE
6. YEAR (CIRCLE ONE): FRESHMAN SOPH JUNIOR SENIOR OTHER
7. AGE:
8. Have you ever participated in a study in this lab before? Y N
9. Approximately how often do you use a personal computer? (Circle a whole number using the scale below)
<--1     2     3     4     5-->
Monthly or Less     Weekly     Daily
10. Approximately how often do you play video games? (Circle a whole number using the scale below)
<--1     2     3     4     5-->
Monthly or Less     Weekly     Daily
11.
Approximately how well do you know each of the people you will be working with today (don't rate yourself): CARRIER <--1 2 3 4 5--> Not at Casual Very Good All Acquaintance Friends CAD <--1 2 3 4 5--> Not at Casual Very Good All Acquaintance Friends AWAC <--1 2 3 4 5--> Not at Casual Very Good All Acquaintance Friends CRUISER <--l 2 3 4 5--> Not at Casual Very Good All Acquaintance Friends 147 EXPERIMENTS I AND 11 GENERAL TRAINING MANUAL INTRODUCTION The year is 1996 and you are a part of a US. naval Carrier group's command and control team stationed in the Middle East. A regional conflict between two nations in this area has recently broken out, and your mission is to protect seagoing commercial traffic in the area fi'om accidental or intentional attacks. As history indicates, this is a highly sensitive task. For example, in 1987, failure by a command and control team to quickly and accurately identify a plane as threatening, allowed an Iraqi jet to accidentally fire two Exocet missiles into the Frigate U.S.S. Stark, killing 37 American servicemen and crippling the vessel. One year later, a command and control team error resulted in the USS. Cruiser Vincennes accidentally shooting down an Iranian passenger plane killing 290 innocent civilians. Any repeat of mistakes of this kind will probably lead to a withdrawal of American forces from the area. Such a withdrawal would have disastrous economic and political ramifications that would spread well beyond this region. THE TASK FORCE Your naval Carrier group is an array of ships, planes, and other supporting units with the purpose of protecting approximately 196,000 square miles of ocean. In order to control such a large area, radar surveillance is necessary so that the Carrier group is not surprised by the enemy. Four units provide the bulk of radar coverage over a Carrier group. These units are linked together by an electronic data network so that they can supply bits and pieces of critical information concerning possible enemy planes to each other. These four units are sometimes called a command and control team. Essentially, these four units communicate and coordinate what they see on their individual radars, so that the team commander, located on the aircraft Carrier, ends up seeing an accurate overall 'big-picture'. This accurate picture is necessary so that the commander can make appropriate decisions concerning possible enemy targets (aircrafi that are being tracked are called targets). The first station of the command and control team consists of an Air Force AWAC S (Airborne Warning and Control System) reconnaissance plane which flies overhead using radar to identify targets far off in the distance. The second station is a land-based Marine CAD (Coastal Air Defense) unit which supplies radar coverage from a beach. The third station is a fast, highly maneuverable navy ship called a Cruiser (which supplies radar coverage from the sea). Finally, the Carrier provides leadership and integrates the information gathered by the other three stations into the team's final decision for any offensive or defensive tactical actions. 148 TEAM MISSION - Monitoring Air Space The team, of which you are a part, will role play the Commanding Officers of the stations which compose the Carrier group’s command and control team. Your mission is to monitor the airspace surrounding the Carrier group, making sure that neutral ships are not attacked. 
In performing this role, you must make certain that you do not allow loss of life resulting from accidental or intentional attacks on ships in the task force. At the same time, it is also of paramount importance that you do not inadvertently shoot down fiiendly military aircraft or any civilian aircraft. Many passenger flights move in and out of the region, and friendly military aircrafi from nations not involved in the conflict also patrol the area. In 1994 two US. F-15 fighters shot down two fiiendly helicopters in Northern Iraq killing 26 people. Another occurrence such as this, or of the USS. Stark or U.S.S. Vincennes variety, will diminish public support for the current mission, and in turn, jeopardize peace in this region. OVERVIEW OF ROLES There are four roles in this simulation, one for each member of a four person team. The leader is the Commanding Ofiicer (CO) of the Aircraft Carrier. The other team members include the C0 of an AWACS air reconnaissance plane, the CO of a Cruiser, and the CO of a CAD unit (Coastal Air Defense unit located on a beach). The team’s task is to decide what response the Carrier group should take toward incoming air targets. The COs of the AWACS, Cruiser, and CAD will make recommendations to the Carrier CO, who will then make the final decision for the team. Team members base their decisions on data they collect by measuring characteristics of targets that enter the Carrier group’s area of responsibility. These measures are obtained from sophisticated radar and other electronic devices. Each staff member has something that is unique to contribute to the decision. There are seven possible choices to make for each incoming target. These responses are graded in terms of their aggressiveness and there is one correct response for each aircrafi. Each of these is described on the next page, moving from least to most aggressive. 1 49 SEVEN POSSIBLE DECISIONS 1) IGNORE: This means that no further attention should be devoted to the target and instead focus should be directed on other possible targets in the area. Never ignore a target that might possibly attack. This would most assuredly lead to loss of lives. 2) REVIEW: This means attention can be shifted away from this target momentarily. After a short period of time this target should be returned to in order to update its status. A large number of targets can be in review status, however, reviewing targets decreases the amount of team resources that can be spent addressing other targets. 3) MONITOR: This means that the target should be continuously tracked. The systems that do this tracking are capable of monitoring fewer targets than can be reviewed, and thus monitoring diminishes overall patrol capacity. 4) WARN: This means that a message is sent to the target ordering it to turn away. Warning targets that should be ignored detracts from the importance of legitimate warnings. Warning targets that intend to attack is also bad, since the warning makes it easier for the attacker to locate the ship. 5) READY: This means to get into a defensive posture and to set defensive weapons on automatic. A ship in a readied position is rarely vulnerable to attack. This stance should not be taken to non-threatening targets since weapons set to automatic and fire mistakenly at innocent targets that fire too closer to the Carrier group. A ship in this position cannot readily take offensive action toward other targets. 6) LOCK-ON: This synchronizes radar and attack weapons so that the weapons fix themselves on the target. 
A ship at Lock-On position can take offensive action at a moment’s notice. The capacity to track other targets is severely constrained once there is Lock-On to a single target, however. Thus, this should be reserved for targets that are almost certain to be threatening. 7) DEFEND: This is “weapons away” and means to attack the target with missiles or depth charges. A defend decision cannot be aborted once initiated and thus must only be used when enemy attack is imminent. 150 EXPERIMENTS I AND II INTERACTIVE TRAINING SCRIPT Explain to the participants, "The interactive training involves the first three targets. The first three targets are extremely long to allow for questions and to ensure that we cover everything you need to know. You will then see sixty targets that will count toward the bonus money. Each of these sixty targets will be sixty seconds long. The first three targets are just practice to allow you to get used to the game - they will not count at all toward the bonus money. I will be available during all three practice targets to answer questions, but once the real targets start I can not answer any questions." "Please follow along closely and do not get ahead of me and the rest of your team. This will ensure that we cover everything." The researcher will then call up the first target and begin the hands-on training. (For Experiment 11, explain that the team leader, the Carrier, is in another room). Make sure that everyone is on the blue icon screen. 1. Point out and quickly explain the icons, game #, time clock, and menu bar. Mention that the 60 targets that count toward the bonus money will each last 60 seconds. If called for by the condition, explain the confidence and past accuracy portions of the screen. Explain that the past accuracy bars on the leader’s screen, and the utilization bars on the staff members’ screens will not be accurate until the seventh target because the statistic takes that long to calibrate. 2. Explain that the staff members will now learn how to measure attributes. Say, “You Measure attributes by using the mouse to open the Measure menu. To measure, use the mouse and point to the desired cue you want to measure and click on it. The gray box on the lower lefi portion of the screen showing the attribute value will disappear in 3 seconds or when enter is pressed again. Measure another attribute.” 3. Point out that each staff member can only measure the attributes on which they have been trained and that are within their area of expertise. Remind participants that each staff member sees three unique pieces of information, and that each has something unique to contribute to the team’s decision. Ask participants to click on Measure again and then click on the Measure Summary. This is a summary box that will display all of the attributes measured by that staff member on the current target. Explain that it will stay open for about 3 seconds, then disappear. 151 4. Ask participants to hit F2. Explain that this is an even faster way to open the Measure Summary box. Comment that they must have measured an attribute before it appears in this box, but once they’ve measured it it stays in the summary box until the next target comes up. 5. When each staff member has measured all 3 attributes and is ready to send a judgment to the leader, explain that at 30 seconds left, the clock will start beeping, indicating that judgments must be sent soon by the outlying stations (CAD, AWAC and Cruiser). 
This judgment must reach the Carrier with enough time to make a team decision. Have everyone click on Judgment. Briefly explain the seven decision options, and tell participants to click on their judgment choice. Explain that if they don't do anything else, the judgment will register in about 3 seconds. If they made a mistake and want to resend their judgment, they can click on Cancel and resend their judgment. If the judgment is the one they want, they can also hit OK and their judgment will immediately be sent to the leader. 6. Explain that the leader in the other room will receive their judgments and will be shown by another researcher how to register the team decision. Ask if there are any questions. 7. When the feedback screen comes up, explain the previous decision's feedback information. Explain that timing the simulation, this information will stay on the screen for about 5 seconds, then the next target will automatically come up. Stress that there is nothing they can do to make the next target come up any faster, and to avoid hitting anything on the keyboard to prevent them from being locked out of the next target. Explain that if anything ever seems wrong, they should get a researcher immediately. 8. When the next target comes up, explain that you will stay in the room for this and the following target to answer any questions participants might have. Point out that the game number has changed (it is the second target), and that they will see a total of 60 targets during the experimental session. Remind them that they will also be doing a second decision making task near the end of the session, and that they will be given more input as to how the second task will be completed (e.g., keeping things the same, changing the leader, changing the team, or doing it as individuals). 9. Near the end of the second target, tell participants that after they have registered their judgment, they should not open any new menus on their computer screen to prevent being locked out when the leader makes the team decision. Explain that open windows can prevent the reset signal from resetting the computer. 152 10. At the end of the third practice target, save the practice data and load the experimental simulation. Do not start the experimental session until you tell participants: "At this time, I want to stress two very important points: 1. With 20 seconds it is very important that you make judgments relatively quickly to leave time for the Carrier to receive them and to make the team decision. 2. When time is running down and you have already made a judgment, clear your menu bar (hit ESCAPE in the top left corner) to ensure that you proceed to feedback and to the next target. If you notice negative time on the clock or if you notice that you are still in feedback when the other stations have moved on to the next target, contact a researcher immediately. 3. When you are in a text message box, the clock will appear frozen, but it is actually counting down. Be aware!! Let them know that they will be monitored through an intercom in case someone gets locked out. Remind them not to talk during the simulation, and tell them that everything they type is recorded by the computer. APPENDIX C EXPERIMENT 1 MATERIALS: LEADER’S POSITION-SPECIFIC TRAINING MANUAL, POST-SESSION QUESTIONNAIRE AND DEBRIEFING FORM APPENDIX C EXPERIMENT I LEADER’S POSITION-SPECIFIC TRAINING MANUAL The CARRIER is a large ship that is the core of the naval command group. 
As the Commanding Officer (CO) of the Carrier, you are responsible for making the naval group's decisions about how to respond to incoming aircraft. Your staff members, the COs of the CAD, AWAC, and CRUISER, are responsible for summarizing information about the incoming aircraft and sending you a judgment relevant to their area of expertise. The CAD's responsibility is to summarize information relevant to the target's location, the AWAC's is to summarize information about the target's movement, and the CRUISER summarizes information relevant to the classification of the target (what type of plane it is). Each staff member therefore has something unique to contribute to the decision -- their roles do not overlap.

CONFIDENCE (if called for by the condition)

In addition to sending you their judgment for the target, you will also receive an indication of how confident each staff member is (on a scale of 1%, reflecting a guess, to 100%, reflecting extreme confidence) in the accuracy of their judgment. For example, in addition to recommending an IGNORE decision, the CRUISER may tell you that s/he is 80% confident that this judgment is correct, based on the information s/he has acquired. This will become more clear during training.

PAST JUDGMENT ACCURACY (if called for by the condition)

During each active target, there will be a red bar on your screen near each of the icons representing your staff members. This bar reflects how accurate that staff member's judgments have been in the past (in terms of the strength of the relationship between their judgments and the correct decision). These bars will be seen by all team members. The longer the red bar, the more accurate the person's judgments have been.

COMBINING THE THREE JUDGMENTS INTO A TEAM DECISION

As the CO of the Carrier, you are responsible for combining the COs' judgments of the target's standing into the team decision. Each staff member has information that is unique to contribute to the decision. An incoming target could therefore look different to each of the three staff members in terms of its threat. It is up to the leader to combine this information into the team decision. ONLY THE DECISION REGISTERED BY THE LEADER is considered to be the team's decision.

OUTCOMES OF DECISIONS

Depending on the accuracy of your decision, there are five possible evaluative outcomes (scoring is done automatically by the computer). ONLY THE DECISION REGISTERED BY YOU, THE LEADER is considered to be the team's decision. The five possible outcomes are:

(1) HIT, score 2: The decision was exactly correct. Example: the Carrier said defend; the correct answer was defend.
(2) NEAR MISS, score 1: The decision was off by one level. Example: the Carrier said defend; the correct answer was lock-on.
(3) MISS, score 0: The decision was off by two levels. Example: the Carrier said defend; the correct answer was ready.
(4) INCIDENT, score -1: The decision was off by three levels. Example: the Carrier said defend; the correct answer was warn.
(5) DISASTER, score -2: The decision was off by more than three levels. Example: the Carrier said defend; the correct answer was monitor, review, or ignore.
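For readers of the appendix, the scoring rule above amounts to grading each decision by its distance from the correct response on the seven-step aggressiveness scale. The short Python sketch below is purely illustrative; scoring in the simulation was performed automatically by the task software, whose code is not reproduced in the dissertation, and the function name and response encoding here are hypothetical.

# Illustrative sketch of the outcome scoring rule described above; not the simulation's own code.
RESPONSES = ["ignore", "review", "monitor", "warn", "ready", "lock-on", "defend"]

def score_decision(team_decision, correct_response):
    """Return (outcome, score) based on how many levels the decision is from the correct response."""
    distance = abs(RESPONSES.index(team_decision) - RESPONSES.index(correct_response))
    if distance == 0:
        return "HIT", 2
    if distance == 1:
        return "NEAR MISS", 1
    if distance == 2:
        return "MISS", 0
    if distance == 3:
        return "INCIDENT", -1
    return "DISASTER", -2        # off by more than three levels

# Example from the outcomes list: Carrier said defend, correct answer was lock-on.
print(score_decision("defend", "lock-on"))   # ('NEAR MISS', 1)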
EXPERIMENT I
LEADER POST-SESSION QUESTIONNAIRE

Please indicate how much you feel you weighted (utilized) the judgments of each of your staff members. Divide 100 points across the staff members in a manner that reflects how you think you weighted each staff member during the simulation.

Example: If you felt you weighted the CAD the most, followed by the AWAC, and tended to ignore the Cruiser, you would fill in the blanks like this:
CAD: 65   AWAC: 35   Cruiser: 0

Your response:
CAD:        AWAC:        Cruiser:

Subordinate Accuracy

Please indicate how accurate (on a scale of 0-100) you feel each of your staff members was in predicting the correct decision:
CAD:        AWAC:        Cruiser:

EXPERIMENT I
DEBRIEFING FORM

The purpose of this study was to investigate factors that affect a leader's ability to accurately utilize information provided by subordinates. Judgment confidence information (e.g., the CO would recommend an IGNORE decision and add that s/he was 80% confident), staff member past judgment accuracy information, neither, or both were provided to different participants to test whether this information improved team performance and the appropriateness of leaders' strategies for combining the judgments of the four COs. People's ability to accurately weight staff members in conditions in which differential vs. equal weighting strategies are appropriate and no staff past judgment accuracy or judgment confidence information is available was also investigated. If you have any questions about this study, please contact Jean Phillips in the Management Department at 353-7116. To avoid affecting the results of this study, IT IS VERY IMPORTANT that you do not discuss your experience with this study with other students who might participate in the future. Thank you for participating!

APPENDIX D

EXPERIMENT II MATERIALS: POSITION-SPECIFIC TRAINING MANUALS FOR STAFF, POST-SESSION QUESTIONNAIRES, AND DEBRIEFING FORM

EXPERIMENT II
POSITION-SPECIFIC STAFF TRAINING MANUALS

[NOTE: PARTICIPANTS WILL ONLY RECEIVE INFORMATION ABOUT THEIR POSITION AND THE ATTRIBUTES AND COMBINATION RULE RELEVANT TO THEIR PARTICULAR AREA OF RESPONSIBILITY]

The CAD is the Coastal Air Defense unit (located on a beach). As the Commanding Officer (CO) of the CAD, you are responsible for providing the Carrier (the team leader) with a judgment summarizing a target's standing in terms of its location.

DETERMINING THE LEVEL OF THREAT FOR A TARGET

In general, the degree to which an incoming target is threatening depends on its standing on nine attributes, three of which you are responsible for interpreting. These nine attributes combine into three simple rules which in combination are used to determine the danger associated with any target. The commanding officers of the CAD, AWACS, and Cruiser are each responsible for combining three different attributes into one of these rules. The commanding officer of the Carrier is responsible for combining staff summaries of these three rules into a correct overall team decision.

CHARACTERISTICS OF AIRBORNE TARGETS

The three attributes of targets for which you, as the CO of the CAD, are responsible are listed below along with the ranges of possible values for these attributes:

CAD
(1) Altitude: Lower targets are more threatening. 35,000 to 5,000 ft.
(2) Corridor Status: A corridor is a 20-mile-wide "safe lane" open to commercial air traffic. Targets in the center of the corridor are less threatening than those farther away from the center of the corridor. 0 miles (in the middle of it) to 30 miles (way out of it)
(3) Range: Distance of the aircraft from the Carrier. 200 miles to 1 mile

Location Rule (CAD): ALTITUDE, CORRIDOR STATUS, and RANGE go together to determine the location of the aircraft.
Aircraft are threatening only if they are low (low value on altitude), outside commercial traffic lanes (high value on corridor status), and close (low value on range) to the Carrier. If any one of these three values is non-threatening, then the aircraft is to be considered non-threatening in terms of the location rule.

The AWAC is an air reconnaissance plane. As the Commanding Officer (CO) of the AWAC, you are responsible for providing the Carrier (the team leader) with a judgment summarizing a target's standing in terms of its movement.

DETERMINING THE LEVEL OF THREAT FOR A TARGET

In general, the degree to which an incoming target is threatening depends on its standing on nine attributes, three of which you are responsible for interpreting. These nine attributes combine into three simple rules which in combination are used to determine the danger associated with any target. The commanding officers of the CAD, AWACS, and Cruiser are each responsible for combining three different attributes into one of these rules. The commanding officer of the Carrier is responsible for combining staff summaries of these three rules into a correct overall team decision.

CHARACTERISTICS OF AIRBORNE TARGETS

The three attributes of targets for which you, as the CO of the AWAC, are responsible are listed below along with the ranges of possible values for these attributes:

AWAC
(1) Speed: Faster targets are more threatening. 100 to 800 m.p.h.
(2) Angle: Descending targets are more threatening; the sharper the descent, the greater the threat. +15 degrees (rapid ascent) to -15 degrees (rapid descent)
(3) Direction: Targets headed directly at the Carrier are more dangerous than those passing far to the left or right. +30 degrees (passing far to the left or right of the Carrier) to 0 degrees (coming straight into the Carrier)

Movement Rule (AWACS): SPEED, ANGLE, and DIRECTION go together to determine the movement of the aircraft. Aircraft are threatening only if they are going fast (high value on speed), descending (low value on angle), and coming straight in to the Carrier (low value on direction). If any one of these three values is non-threatening, then the aircraft is to be considered non-threatening in terms of the movement rule.

The CRUISER is a large ship that provides support from the ocean. As the Commanding Officer (CO) of the Cruiser, you are responsible for providing the Carrier (the team leader) with a judgment summarizing a target's standing in terms of its classification.

DETERMINING THE LEVEL OF THREAT FOR A TARGET

In general, the degree to which an incoming target is threatening depends on its standing on nine attributes, three of which you are responsible for interpreting. These nine attributes combine into three simple rules which in combination are used to determine the danger associated with any target. The commanding officers of the CAD, AWACS, and Cruiser are each responsible for combining three different attributes into one of these rules. The commanding officer of the Carrier is responsible for combining staff summaries of these three rules into a correct overall team decision.

CHARACTERISTICS OF AIRBORNE TARGETS

The three attributes of targets for which you, as the CO of the Cruiser, are responsible are listed below along with the ranges of possible values for these attributes:

CRUISER
(1) Size: Smaller targets are more threatening.
65 to 10 meters (2) IFF IFF stands for "Identification Friend of Foe," this is a radio signal that identifies whether an aircraft is civilian, para-military or military. .2 MHz (civilian) to 1.8 MHz (fighter) (3) Radar Type The kind of radar possessed by the aircraft. Class 1 (weather radar only) to Class 9 (weapons radar) Category Rule (Cruiser): SIZE, IF F, and RADAR TYPE go together to determine the category of aircraft. Aircraft are threatening only if they are small (low value on size), military (high value on IFF) and carrying weapons radar (high value on radar). If any one of these three values are non-threatening, then the aircraft is to be considered non-threatening in terms of the category rule. 160 (THIS PAGE GIVEN TO ALL POSITIONS) CONFIDENCE (11' called for by the condition) In addition to sending the leader your judgment of the target, you will also be asked to send an indication of how confident you are (on a scale of 0% reflecting a guess to 100% reflecting extreme confidence) in the accuracy of your judgment. For example, in addition to recommending an IGNORE decision, you will inform the leader that you are 80% confident that this judgment is correct, based on the information you have acquired. This will become more clear during training. PAST JUDGMENT ACCURACY (if called for by the condition) During each active target, there will be a red bar on the leader’s screen near the icon that represents your station. This bar reflects how accurate your judgments have been in the past (in terms of the strength of the relationship between your judgments and the correct decision). There will also be a number to the left of the red bar on a ~100 to +100 scale. The number and bar will only be seen by the leader. The longer the red bar and the more positive the number, the more accurate your judgments have been. COMBINING THE THREE RULES INTO A TEAM DECISION The CO of the Canier is responsible for combining the CO's judgment of the target's standing on each of three rules into the team decision. ONLY THE DECISION REGISTERED BY THE LEADER is considered to be the team's decision. OUTCOMES OF DECISIONS Your decisions regarding each target are to be made based upon the information on the dimensions listed above. According to rules described in this section, there are five possible evaluative outcomes associated with the accuracy or the team's decisions (scoring is done automatically by the computer). ONLY THE DECISION REGISTERED BY THE LEADER is considered to be the team's decision. The five possible outcomes are: 161 OUTCOME DEFINITION EXAMPLE SCORE (1) HIT The decision was Carrier said defend, correct 2 exactly correct answer was defend (2) NEAR MISS The decision was off Carrier said defend, correct 1 by one level answer was lock-on (3) MISS The decision was off Carrier said defend, correct 0 by two levels answer was ready (4) INCIDENT The decision was off Carrier said defend, correct -1 by three levels answer was warn (5) DISASTER The decision was off Carrier said defend, correct -2 by more than three answer was either monitor, levels review, or ignore 162 EXPERIMENT II WILLINGNESS TO RETURN QUESTIONNAIRE Please circle one of the three options for each of the following questions. If you respond "yes" or "maybe" for any of the questions, you may be contacted later this term regarding firrther opportunities to participate in this or other research projects. 
You may change your mind and decline to participate at any time, but if you think that you might be interested, please indicate this below. 1. Would you be willing to return for $7.50/hour later this term to participate in further research, doing the SAME type of task with the SAME team members? YES NO MAYBE 2. Would you be willing to return for $7.50/hour later this term to participate in further research, doing a DIFFERENT task but with the SAME teammates and SAME leader? YES NO MAYBE 3. Would you be willing to return for $7.50/hour later this term to participate in further research, doing the SAME type of task with the SAME leader but with DIFFERENT teammates? YES NO MAYBE 4. Would you be willing to return for $7.50/hour later this term to participate in further research, doing a DIFFERENT task with the SAME leader but with DIFFERENT teammates? YES NO MAYBE 5. Would you be willing to return for $7.50/hour later this term to participate in further research, doing the SAME type of task with the SAME teammates but with a DIFFERENT leader? YES NO MAYBE 6. Would you be willing to return for $7.50/hour later this term to participate in further research, doing a DIFFERENT task with the SAME teammates but with a DIFFERENT leader? YES NO MAYBE Name: (please print) Student Number: Phone Number: 163 EXPERIMENT II DESIRE TO CHANGE FOR NEXT TASK QUESTIONNAIRE The next thing you will be doing is a second decision making task. The top performers on this task will receive a cash bonus: First Place will receive $10, Second and Third Place will each receive $5. For the next decision making task, would you like to: l. Remain working with the same leader? (circle one) Keep the Same Leader Change 2. Remain working with the same team? (circle one) Keep the Same Team Change 3. Do the next task as an individual? (circle one) Work as an Individual Work With a Team 164 EXPERIMENT II STAFF SATISFACTION WITH LEADER QUESTIONNAIRE Adapted from Seashore, Lawler, Mirvis and Cammann (1982) Please use the following scale in responding to the statements below. There are no right or wrong answers; please answer honestly. Fill your response in the corresponding circle on the computer scorable answer sheet. 1 = Strongly Disagree 2 = Disagree 3 = Neither Agree nor Disagree 4 = Agree 5 = Strongly Agree 1. Overall, I am satisfied with my leader. 2. In general, I don't like my leader. 3. I would be willing to work with this leader again in the future. 4. I think my leader did a poor job in making decisions. 5. If we were to perform another set of targets, 1 would definitely want to change leaders. 6. I think I could have done a better job than my leader did. 7. I am satisfied with my team's performance. 8. I think my fiiends would be interested in applying for this project. 165 EXPERIMENT II WITHDRAWAL FROM TASK QUESTIONNAIRE Adapted from Baker (1991) and Gilliland (1992) Please use the following scale in responding to the statements below. There are no right or wrong answers; please answer honestly. Fill your response in the corresponding circle on the computer scorable answer sheet. 1 = Strongly Disagree 2 = Disagree 3 = Neither Agree nor Disagree 4 = Agree 5 = Strongly Agree 1. I often daydreamed while working on the task. 2. I was frequently bored while working on the task. 3. At the end of the session, I felt as though I had accomplished something. 4. I felt that I was working below my abilities most of the time. 5. When I was doing the task I wished I was anyplace else. 6. I often thought of quitting the task. 7. 
If I hear of other projects like this, I would be interested in participating.
8. If I knew in advance what this project would entail, I would not have chosen to participate.
9. I would recommend this project to my classmates.

EXPERIMENT II
SELF-EFFICACY QUESTIONNAIRE
Adapted from Locke, Frederick, Lee & Bobko (1984)

This set of questions asks you to describe how you feel about your capabilities to perform the task if you were to perform the simulation again. Please use the scale shown below to make your ratings.

1 = Strongly Disagree
2 = Disagree
3 = Neutral
4 = Agree
5 = Strongly Agree

1. I can meet the challenges of my role in this simulation.
2. I am confident in my understanding of how information cues are related to the decisions I have to make.
3. I can deal with decisions under ambiguous conditions.
4. I am certain that I can manage the requirements of my position for this task.
5. I believe I will fare well in this task even if the workload is increased.
6. I am confident that I can cope with my role if the simulation becomes more complex.
7. I believe I can develop methods to handle the requirements of my task and my role.
8. I am certain I can cope with task components competing for my time.

EXPERIMENT II
DEBRIEFING FORM

The purpose of this study was to investigate factors that affect decision making processes in hierarchical decision making teams, as well as the consequences of different decision making processes. Staff member reactions to leaders differentially vs. equally utilizing staff member judgments in making the team decision were investigated. If you have any questions about this study, please contact Jean Phillips in the Management Department at 353-7116. To avoid affecting the results of this study, IT IS VERY IMPORTANT that you do not discuss your experience with this study with other students who might participate in the future. Thank you for participating!

APPENDIX E

REPEATED MEASURES REGRESSION ANALYSES FOR EXPERIMENT II USING EFFECTS CODING

Repeated Measures Regression Analysis of LUSJ and Team Performance on Willingness to Return

Variable             Δ in Total R2   Δ in Within R2   Δ in Between R2   Incremental F (df, df)
Team Performance        .021*                             .055*          4.32 (1, 74)
LUSJ                    .021             .034                            1.74 (3, 149)
Performance X LUSJ      .043*            .069*                           3.77 (3, 146)
Total R2                .085**           .103*             .055*

Note. The higher the score is, the greater the willingness to return. N=228. Between-team variance=.48 (38%); within-team variance=.78 (62%). Total df within-team=152; total df between-team=75. *p<.05. **p<.01.

Repeated Measures Regression Analysis of LUSJ and Team Performance on Desire to Change for Next Task

Variable             Δ in Total R2   Δ in Within R2   Δ in Between R2   Incremental F (df, df)
Team Performance        .118**                            .273**        27.83 (1, 74)
LUSJ                    .040*            .070*                           3.76 (3, 149)
Performance X LUSJ      .018             .032                            1.72 (3, 146)
Total R2                .176**           .102*             .273**

Note. The higher the score is, the greater the desire to change for the next task. N=214. Between-team variance=.37 (43%); within-team variance=.49 (57%). Total df within-team=142; total df between-team=71. *p<.05. **p<.01.
Repeated Measures Regression Analysis of LUSJ and Team Performance on Satisfaction With Leader

Variable             Δ in Total R2   Δ in Within R2   Δ in Between R2   Incremental F (df, df)
Team Performance        .283**                            .466**        64.51 (1, 74)
LUSJ                    .002             .005                             .25 (3, 149)
Performance X LUSJ      .016             .041                            2.08 (3, 146)
Total R2                .301**           .046              .466**

Note. The higher the score is, the greater the satisfaction with the leader. N=228. Between-team variance=.35 (61%); within-team variance=.23 (39%). Total df within-team=152; total df between-team=75. *p<.05. **p<.01.

Repeated Measures Regression Analysis of LUSJ and Team Performance on Task Withdrawal

Variable             Δ in Total R2   Δ in Within R2   Δ in Between R2   Incremental F (df, df)
Team Performance        .079**                            .180**        16.24 (1, 74)
LUSJ                    .046**           .082**                          4.44 (3, 149)
Performance X LUSJ      .019             .034                            1.86 (3, 146)
Total R2                .144**           .116**            .180**

Note. The higher the score is, the greater the task withdrawal. N=228. Between-team variance=.18 (44%); within-team variance=.23 (56%). Total df within-team=152; total df between-team=75. *p<.05. **p<.01.

Repeated Measures Regression Analysis of LUSJ and Team Performance on Self-Efficacy

Variable             Δ in Total R2   Δ in Within R2   Δ in Between R2   Incremental F (df, df)
Team Performance        .038*                             .085*          6.86 (1, 74)
LUSJ                    .013             .024                            1.20 (3, 149)
Performance X LUSJ      .031             .056*                           2.97 (3, 146)
Total R2                .082*            .080*             .085*

Note. The higher the score is, the greater the self-efficacy. N=228. Between-team variance=.14 (45%); within-team variance=.18 (55%). Total df within-team=152; total df between-team=75. *p<.05. **p<.01.
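As a supplementary illustration, and not part of the original analyses, the incremental F statistics reported in these tables follow the standard hierarchical-regression form: F = [(R2 of the full model minus R2 of the reduced model) / number of predictors added] / [(1 minus R2 of the full model) / residual df of the full model]. The Python sketch below uses synthetic data and effects-coded predictors (for example, low performance = -1, high performance = +1) to show the mechanics of computing the change in R2 and its incremental F at each step. It does not reproduce the within-team and between-team decomposition reported above, and all variable names and values are placeholders.

# Illustrative sketch only: hierarchical regression with effects coding and incremental F tests
# on synthetic data. It does not reproduce the dissertation's within/between decomposition.
import numpy as np

def r_squared(X, y):
    """R-squared from an ordinary least squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def incremental_f(r2_full, r2_reduced, n_added, df_resid_full):
    """F test for the change in R-squared when n_added predictors enter the model."""
    return ((r2_full - r2_reduced) / n_added) / ((1.0 - r2_full) / df_resid_full)

rng = np.random.default_rng(1)
n = 228                                                    # e.g., 3 staff members in each of 76 teams
performance = rng.choice([-1.0, 1.0], size=n)              # effects-coded team performance
lusj = rng.normal(size=(n, 3))                             # stand-ins for the three LUSJ terms
interaction = lusj * performance[:, None]                  # Performance x LUSJ terms
y = 0.4 * performance + 0.1 * lusj[:, 0] + rng.normal(size=n)   # a placeholder staff reaction

step1 = performance.reshape(-1, 1)                         # Step 1: team performance
step2 = np.column_stack([step1, lusj])                     # Step 2: add LUSJ terms
step3 = np.column_stack([step2, interaction])              # Step 3: add interaction terms

r2_1 = r_squared(step1, y)
r2_2 = r_squared(step2, y)
r2_3 = r_squared(step3, y)

print(f"Step 1: R2 = {r2_1:.3f}, F = {incremental_f(r2_1, 0.0, 1, n - 2):.2f}")
print(f"Step 2: delta R2 = {r2_2 - r2_1:.3f}, F = {incremental_f(r2_2, r2_1, 3, n - 5):.2f}")
print(f"Step 3: delta R2 = {r2_3 - r2_2:.3f}, F = {incremental_f(r2_3, r2_2, 3, n - 8):.2f}")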