This is to certify that the dissertation entitled THE PERCEIVED FAIRNESS OF SELECTION SYSTEMS: AN ORGANIZATIONAL JUSTICE PERSPECTIVE presented by Stephen W. Gilliland has been accepted towards fulfillment of the requirements for the Ph.D. degree in Psychology.

THE PERCEIVED FAIRNESS OF SELECTION SYSTEMS: AN ORGANIZATIONAL JUSTICE PERSPECTIVE

By

Stephen W. Gilliland

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Psychology

1992

ABSTRACT

THE PERCEIVED FAIRNESS OF SELECTION SYSTEMS: AN ORGANIZATIONAL JUSTICE PERSPECTIVE

By

Stephen W. Gilliland

A conceptual model of perceived selection system fairness was developed to address theoretical shortcomings of past research. Organizational justice theory provided the theoretical framework for examining issues of perceived fairness and generated a number of hypotheses. Procedural and distributive justice were both described as important to overall fairness reactions, and the interaction between these two forms of justice was cited as an issue of both practical and theoretical importance. Links were proposed between perceived fairness and a number of individual and organizational outcomes. To test a number of the hypotheses proposed in the conceptual model, a study was conducted in which procedural justice was manipulated along dimensions of job relatedness and selection information explanation, and distributive justice was manipulated in terms of a hiring decision. Undergraduates (n = 260) completed one of three employee selection tests that varied in perceived job relatedness after either receiving or not receiving an explanation regarding the suitability of the selection test. Half of the participants (n = 132) were randomly selected to perform a journal coding task in return for monetary compensation. Questionnaire measures of perceived fairness, application recommendations, test motivation, self-esteem, and self-efficacy were collected from the entire sample. Additionally, work performance and job satisfaction measures were collected from hired participants. Results demonstrated main effects of job relatedness on perceived fairness and test motivation measures, and interactions between job relatedness and hiring decision on perceived outcome fairness and self-efficacy. Explanation appeared to influence application recommendations as well as post-hire work performance and job satisfaction. Theoretical and practical implications of the findings were discussed, as were some methodological shortcomings. A final methodological contribution of this research was the development and initial assessment of a measure of perceived selection system fairness.

ACKNOWLEDGEMENTS

As I compose this acknowledgement I reflect on the fact that this page will be read more often than any other page in this dissertation... I will start by saying that completing this dissertation was interesting, challenging, a useful learning experience, and, for the most part, enjoyable.
While that is probably not the typical reaction of someone completing a dissertation, most people are not as lucky as I was in having Neal Schmitt as my chair. Throughout this dissertation and graduate school in general, Neal has served as an advisor, a mentor, and a role model. I cannot imagine having a better graduate school experience or a better mentor. Thank you, Neal. I would also like to thank a number of other people for helping me complete this dissertation. When I was faced with the time pressures imposed by my desire to spend a summer in Europe, you (soon to be y'all) facilitated the rapid completion of the final phases of this project. In particular, I would like to thank Jose Cortina, Rob Anderson, and especially Pat Draves for helping me conduct the study; Neal, Jose, Rob, and Cindi for helping complete the mind-numbing task of correcting the journal coding; Dan Ilgen, Steve Kozlowski, and Mike Lindell for their insightful comments and questions during my oral defense meetings; and Wonderlic, Inc. and London House for providing test materials. Finally, I would like to thank Cindi Kaufman for her support and love, and for helping me maintain an awareness of the fact that there is more to life than dissertations.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES

CHAPTER 1: INTRODUCTION
    Organizational Justice
        Distributive Justice
            Equity theory
            Alternate distributional rules
            Summary
        Procedural Justice
            Process control and procedural justice
            Leventhal's components of allocation procedures
            Empirical investigation of procedural dimensions
            Interactional justice
            Summary: A trichotomy of procedural antecedents
        Integrating Procedural and Distributive Justice
            Referent cognition theory
            Summary
        Outcomes Associated with Justice Perceptions
        Conclusions
    Perceptions of Employee Selection Systems
        Interviews
        Work Sample Tests and Assessment Centers
        Paper-and-Pencil Tests
        Computerized Testing
        Drug Testing
        Integrity Testing
        Methodological Advances
        Conclusions

CHAPTER 2: A MODEL OF APPLICANT REACTIONS TO SELECTION SYSTEMS
    The Conceptual Model
        Procedural Justice Rules
            Job relatedness
            Opportunity to perform
            Fakability
            Opportunity for retest
            Consistency of administration
            Privacy and confidentiality
            Feedback
            Validity information
            Information on scoring
            Information on procedures
            Interpersonal effectiveness of administrator
            Personal bias suppression
            Two-way communication
            Conclusions
        Distributive Justice Rules
            Equity
            Equality
            Special needs
            Conclusions
        Combining Procedural and Distributive Justice
        Organizational Outcomes
            Affect toward the selection process
            Job application and acceptance decisions
            Organizational product endorsement
            Test motivation
            Self-perceptions
            Future job search intentions
            Job performance
            Organizational citizenship behavior
            Job satisfaction and organizational commitment
            Turnover
        Conclusions

CHAPTER 3: PRIMARY STUDY AND HYPOTHESES
    Hypotheses
        Fairness perceptions
        Application/recommendation intentions
        Test motivation
        Self-esteem and self-efficacy
        Performance
        Job satisfaction

CHAPTER 4: METHOD
    Overview
    Participants
    Design
    Task
    Independent Variables
    Procedures
        Phase 1: Assessment
        Phase 2: Work Performance
    Dependent Measures
        Fairness perceptions
        Application/recommendation intentions
        Test motivation
        Self-esteem
        Self-efficacy
        Performance
        Job satisfaction
    Analyses

CHAPTER 5: RESULTS
    Measurement Adequacy and Descriptive Statistics
    Manipulation Checks
    Dependent Variables
        Perceived fairness
        Application/recommendation intentions
        Test motivation
        Self-esteem and self-efficacy
        Performance
        Job satisfaction
    Supplementary Analyses of Selection System Fairness Measure

CHAPTER 6: DISCUSSION
    Summary of Results
        Perceived fairness
        Test motivation
        Self-concepts
        Post-hire attitudes and behaviors
    Job Relatedness
    Explanation
    Interactions Between Procedural and Distributive Justice
        Perceived outcome fairness
        Self-efficacy
    Limitations
    Future Research Directions
    General Conclusions

LIST OF REFERENCES

APPENDIX A: Journal Coding Materials
APPENDIX B: Recruiting Script
APPENDIX C: Application Form
APPENDIX D: Selection Tests
APPENDIX E: Explanation Manipulations
APPENDIX F: Feedback and Debriefing
APPENDIX G: Dependent Measures
APPENDIX H: Results of Factor Analysis on Perceived Fairness

LIST OF TABLES

TABLE 1: Components of Procedural Fairness
TABLE 2: Dependent Measure Internal Consistencies, Means, and Standard Deviations
TABLE 3: Intercorrelations of Independent and Dependent Measures
TABLE 4: Analysis of Variance Summary for Fairness Perceptions and Application/Recommendation Intentions
TABLE 5: Means and Standard Deviations for Test Type by Selection Decision Interaction on Perceived Outcome Fairness
TABLE 6: Means and Standard Deviations for Explanation by Selection Decision Interaction on Application/Recommendation Intentions
TABLE 7: Analysis of Variance Summary for Test Motivation Scales
TABLE 8: Means and Standard Deviations for Test Type Effects on Test Motivation Scales
TABLE 9: Analysis of Variance Summary for Self-esteem and Self-efficacy
TABLE 10: Means and Standard Deviations for Test Type by Selection Decision Interaction on Self-efficacy Measures
TABLE 11: Analysis of Variance Summary for Work Performance
TABLE 12: Means and Standard Deviations for Test Type by Session Interaction on Performance Quantity
TABLE 13: Means and Standard Deviations for Explanation by Session Interaction on Performance Quantity
TABLE 14: Analysis of Variance Summary for Job Satisfaction
TABLE 15: Rotated Factor Matrix for 13 Factor Solution
TABLE 16: Internal Consistencies, Means, and Standard Deviations for the Selection System Fairness Measure
TABLE 17: Summary of Hypotheses and Results

LIST OF FIGURES

FIGURE 1: Conceptual Model of Perceived Fairness
FIGURE 2: Hypothesized Job Relatedness by Selection Decision Interaction on Perceived Fairness
FIGURE 3: Hypothesized Explanation by Selection Decision Interaction on Perceived Fairness
FIGURE 4: Hypothesized Job Relatedness by Selection Decision Interaction on Self-esteem and Self-efficacy
FIGURE 5: Hypothesized Explanation by Selection Decision Interaction on Self-esteem and Self-efficacy
FIGURE 6: Job Relatedness by Selection Decision Interaction on Outcome Fairness
FIGURE 7: Explanation by Selection Decision Interaction on Recommendation Intentions
FIGURE 8: Job Relatedness by Selection Decision Interaction on Quantity Self-Efficacy
FIGURE 9: Job Relatedness by Selection Decision Interaction on Quality Self-Efficacy

CHAPTER 1

INTRODUCTION

Personnel selection has long been a process that has received considerable attention in industrial psychology. Dominant concerns in the selection literature involve issues of the psychometric soundness of tests (e.g., reliability), the validity of inferences made with tests (Schmitt, Gooding, Noe, & Kirsch, 1984), the utility of selection procedures (Hunter & Hunter, 1984), and the fairness with which selection procedures impact protected subgroups (Schmitt, 1989). Although all of these topics have been studied from a psychometric perspective, rarely has attention been directed at the social side of the selection process (Herriot, 1989). In particular, the study of fairness has focused on concerns about test bias, differential prediction, and the relative impact of subgroup differences on expected organizational productivity and subgroup hiring rates (Hartigan & Wigdor, 1989; Schmitt, 1989). An alternate, and often overlooked, perspective on test fairness focuses on applicants' reactions to testing and selection decisions. It can be argued that the perceived fairness of selection procedures is equal in importance to issues of psychometric fairness. Just as the establishment of psychometrically fair selection procedures is important from business, ethical, and legal perspectives, applicants' perceptions of test fairness can also be seen as important to these perspectives. From a business perspective, reactions to selection procedures may influence the ability of the organization to attract highly qualified applicants and the job acceptance decisions of applicants to whom offers have been extended. Both of these outcomes can influence the overall utility of selection procedures (Murphy, 1986; Boudreau & Rynes, 1985).
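Utility in this sense is usually quantified with the Brogden-Cronbach-Gleser model; the expression below is a standard textbook rendering added here for clarity, not a formula reproduced from this dissertation:

$$\Delta U = N_s \, r_{xy} \, SD_y \, \bar{z}_x - C$$

where $N_s$ is the number of applicants selected, $r_{xy}$ is the validity of the procedure, $SD_y$ is the standard deviation of job performance in dollar terms, $\bar{z}_x$ is the mean standard predictor score of those hired, and $C$ is the total cost of testing. Applicant reactions enter through $\bar{z}_x$: a weaker applicant pool, or rejected offers from top candidates, lowers the mean predictor score of those ultimately hired, which is the kind of effect analyzed by Murphy (1986) and Boudreau and Rynes (1985).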
Indirectly, reactions to the selection process may influence applicants' decisions to buy products or services of an organization after they have unsuccessfully sought employment with it. From an ethical perspective, organizations should be concerned with the effects of selection procedures on the psychological well-being of applicants. For example, the perceived fairness of selection testing may influence the self-efficacy and self-esteem of rejected applicants (Robertson & Smith, 1989). The Principles for the Validation and Use of Personnel Selection Procedures (Society for Industrial and Organizational Psychology, 1987) recommends that:

Researchers should seek to avoid bias and even the appearance of bias [italics added] in choosing, administering, and interpreting selection procedures. At the very least, the use of a selection procedure should create an environment that is responsive to the feelings of all candidates, insuring the dignity of all persons. (p. 33)

Finally, from a legal perspective, the perceived fairness of the selection procedure may influence applicants' and lawyers' decisions to pursue discrimination cases. Seymour (1988) indicated that the face validity (or face invalidity) of a test can be a factor in deciding whether to challenge a particular test. With the current national attention given to the 1991 amendments to the Civil Rights Bill and the debate over quota hiring, issues of selection procedure fairness are likely to receive even greater public scrutiny. Additionally, the 1991 amendments allow for jury trials in discrimination cases, opening up the possibility that face validity may become a more salient issue with statistically naive jurors.

In addition to these practical outcomes associated with the perceived fairness of selection procedures, research in this domain can be of theoretical importance. While few attempts have been made to empirically assess reactions to selection procedures, even less attention has been given to developing a theoretical model of these perceptual processes. One predominant characteristic of selection research in general is that it is largely atheoretical and emphasizes practical concerns. When one examines the social side of selection, it is possible to extend theories of organizational psychology to selection issues. In particular, organizational justice theories are concerned with the perceived fairness of organizational procedures and outcomes. By applying theories of organizational justice to selection procedures and hiring decisions, it is possible to develop a theory of the perceived fairness of selection procedures. In addition, it is possible that research in this domain will contribute to and build upon the existing literature that is developing and testing organizational justice theories. Traditional discussions of organizational justice have emphasized the role of distributive justice, or the fairness of organizational outcome distributions (Bierhoff, Cohen, & Greenberg, 1986). One of the original distributive justice theories, equity theory (Adams, 1965), suggested that perceptions of the fairness of outcome distributions relative to input contributions influence the perceiver's emotional reactions toward those outcomes. In turn, emotional reactions influence cognitions, attitudes, and behavioral responses. More recently, researchers have also examined perceptions of the fairness of the procedures that are used to divide valued organizational outcomes (Folger & Greenberg, 1985).
This procedural justice perspective focuses on how decisions affecting employees are made rather than on the outcomes of those decisions (Greenberg & Tyler, 1987). Current developments in organizational justice theories are aimed at integrating the procedural and distributive justice perspectives (e.g., referent cognition theory; Folger, 1986, 1987). Organizational justice theories have been applied to reward allocation decisions, performance appraisal/evaluation systems, and dispute resolution through negotiation (Folger & Greenberg, 1985). This application has been beneficial both to the theoretical development of organizational justice and to developing theory and knowledge in the areas of application (Greenberg & Tyler, 1987). The current research extends theories and research from the organizational justice literature to the selection domain with the similar goal of advancing both organizational justice theory and our understanding of selection fairness.

The research outlined in this proposal has three primary objectives, the first of which is to develop a comprehensive model that describes procedural and distributive factors that influence applicants' reactions to selection systems. The model also links fairness perceptions to a variety of attitudinal and behavioral outcomes. To provide a basis for this model, the organizational justice literature is selectively reviewed, highlighting issues of importance for the extension of the justice literature to selection processes. Additionally, selection research that has examined reactions to selection procedures is exhaustively reviewed in an effort to establish what has been investigated and where further research is needed. It is hoped that the model that is developed will launch a program of systematic research on the perceived fairness of selection systems. The second objective of the current research is to develop a measure of fairness perceptions. This will include global perceptions of system and outcome fairness, as well as perceptions of the extent to which a number of procedural and distributive rules associated with the selection process have been violated. The final objective of this research is to examine the interaction between procedural features and distributive outcomes of a hiring situation as it relates to various outcomes. This interaction is important because some organizational justice research has indicated that procedural justice only influences fairness perceptions when distributive outcomes are negative (Greenberg, 1987a). In the selection situation, this would suggest that procedural aspects of the selection process may have the greatest impact on rejected applicants and may not influence the reactions of accepted applicants. From a corporate perspective, the individuals of primary concern are those applicants who are accepted for employment. If selection procedures do not influence the reactions of these applicants, there may be little applied interest in this line of research. From a theoretical perspective, the demonstration of an interaction between procedural and distributive features of the selection process is important both to organizational justice theory and to the theory base that is developing in the selection procedure reactions literature. The primary study will examine this interaction as it relates to fairness perceptions and a number of pre- and post-hire attitudinal and behavioral outcomes.
Organizational Justice

Theories of organizational justice are concerned with perceptions of the fairness of outcome distributions (e.g., rewards or performance evaluations) and the procedures through which these outcomes are distributed (Greenberg, 1990b). These fairness perceptions influence a variety of cognitive, attitudinal, and behavioral outcomes of employees in organizations. While the basic processes of different organizational justice theories are similar, a variety of perspectives have been developed and, as a result, justice research has taken many different directions. Perhaps because of this diversity, there have been a number of recent reviews that have attempted to integrate and organize research and theory (e.g., Cohen, 1987; Greenberg, 1987b, 1990b; Lind & Tyler, 1988).

Theories of organizational justice have been distinguished along two independent dimensions: a reactive-proactive dimension and a process-content dimension (Greenberg, 1987b). Reactive theories of justice involve individuals' reactions to, or attempts to deal with, unfair states. Such theories include equity theory (Adams, 1965) and procedural justice theory (Lind & Tyler, 1988). Proactive theories deal with behaviors aimed at promoting justice or creating just situations (Leventhal, 1980). As the current research is concerned with reactions to selection procedures, the focus here will be on reactive theories of organizational justice. The second dimension on which justice theories can be distinguished is a process-content dimension (Greenberg, 1987b). Process theories deal with the procedures by which organizational outcomes, such as pay allocations or performance appraisals, are determined. They received their foundation in the legal literature, and the basic ideas were extended to social psychology (Lind & Tyler, 1988). Content theories are concerned with distributive fairness based on rules of equity, need, or equality. Adams' (1965) equity theory was a dominant motivation theory in the 1960s and early 1970s, and research on equity theory constitutes the bulk of investigation into distributive justice (Greenberg, 1987b). In addition to Greenberg's (1987b) categorization of different justice theories, research has attempted to empirically distinguish procedural and distributive justice. More recently, theories have also been proposed that link components of procedural and distributive justice. Before this research and theory are discussed, the components and conceptualizations of distributive and procedural justice are reviewed. The discussion of organizational justice concludes with an examination of some organizational outcomes that have been associated with justice perceptions.

Distributive Justice

Distributive justice theories are the most thoroughly developed and empirically tested of the different organizational justice theories (Greenberg, 1987b). Indeed, theories of distributive justice extend well beyond organizational behavior and can be found in fields such as social psychology, sociology, economics, political theory, and philosophy (Cohen, 1987). Within this vast diversity of perspectives, four fundamental components can be found in all theories of distributive justice: receipts, recipients, distributional rules, and evaluation standards (Cohen, 1987). Receipts may be material goods, opportunities, conditions, or roles, and refer to outcomes of social exchanges (Cohen, 1987). In the distributive justice research found in the organizational literature, the most common receipt is pay (e.g.,
Vecchio, 1982); however, other organizational receipts include performance evaluations (Greenberg, 1986b), job titles (Greenberg & Ornstein, 1983), and workspace status (Greenberg, 1988). While in some domains it may be difficult to define and draw boundaries to differentiate recipients of outcomes (Cohen, 1987), in the organizational literature recipients typically refer to individual workers or individuals within work groups (e.g., Greenberg, 1982). Individual differences, such as gender differences, have been examined within theories of organizational justice, but research has tended to be inconsistent, and many differences have been shown to be a function of the situational context (Major & Deaux, 1982). Recently, Huseman, Hatfield, and Miles (1987) proposed that people differ in their sensitivity to equity, overpayment, and underpayment; however, this conceptualization is both inconsistent with equity theory and, thus far, untested (Greenberg, 1990b).

The third component of distributive justice theories is the distributional rule, which describes the relationship between the receipt and some characteristics of the recipient unit. Equity, the original and most thoroughly investigated distributive rule, suggests that the outcomes people receive should correspond in some way to their inputs into the exchange situation. Other distributive rules include equality, in which equal outcomes are distributed to all individuals, and needs, in which outcomes are distributed on the basis of relative needs. Research has examined individual preferences for different distributional rules and situations that promote the use of a particular rule (Bierhoff et al., 1986). These different distributional rules are discussed in more detail later, as they have implications for the distributive justice of selection decisions. The final component of distributive justice theories is the evaluation standard, or moral judgment of the satisfaction of the distributional rule. While justice is the only standard that has received empirical attention, especially in the organizational literature, it is possible that other standards, such as efficiency, could also be used as the basis of judgments (Cohen, 1987). Further discussion of alternate evaluation standards is beyond the scope of the current research because justice is the only standard of interest. Given these four basic components of distributive justice, theories differ in the specific components they emphasize and the means by which these components are combined. As equity theory is the most dominant theory of distributive justice, the specifics of this theory and its empirical support are presented. Following this discussion, research that has examined some alternate distributional rules is discussed.

Equity theory. With equity theory (Adams, 1965), the receipt is work outcomes, the recipient is the worker, the distributional rule is equity, and the comparison standard is justice. Equity theory states that people compare a ratio of their work outcomes to work inputs with a corresponding ratio from a referent other (e.g., a coworker). If the ratios are unequal such that an individual is receiving fewer outcomes in relation to inputs than the comparison other (underpayment), the individual will experience anger and in turn will be motivated to change his/her behavior or distort the cognitions associated with perceptions of unfairness. Behavior change is most commonly in the form of reducing inputs (e.g., decreasing work productivity), and cognitive distortion can be either in terms of inputs (beliefs of working harder than one actually is), outcomes (increasing the perceived status of one's job), or changing the referent comparison group against which judgments of unfair distribution are made. Conditions of overpayment (receiving more outcomes relative to inputs than a comparison other) lead to experienced guilt and, again, motivation to restore equity. Attitudes are also affected by perceived inequity, with dissatisfaction becoming greater as the inequity increases. Equitable relationships are associated with positive emotional states and feelings of satisfaction.
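In the notation standardly used to summarize equity theory (supplied here for clarity; the symbols are not drawn from this document), the comparison is between outcome/input ratios for the person (p) and the referent other (o):

$$\frac{O_p}{I_p} < \frac{O_o}{I_o} \;\; \text{(underpayment: anger)}, \qquad \frac{O_p}{I_p} = \frac{O_o}{I_o} \;\; \text{(equity)}, \qquad \frac{O_p}{I_p} > \frac{O_o}{I_o} \;\; \text{(overpayment: guilt)}$$

where $O$ denotes perceived outcomes and $I$ perceived inputs. Restoring equity amounts to changing any of the four terms, behaviorally (e.g., lowering $I_p$ by reducing effort) or cognitively (e.g., revaluing $O_p$ or switching the referent other).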
Since Adams' (1965) original conceptualization of the equity ratios, a number of alternate models have been proposed involving ratios of the person's and the referent other's inputs and outcomes (Vecchio, 1982). In addition, much research has examined the underpayment and overpayment predictions of these equity theories, with results summarized in a number of reviews (Campbell & Pritchard, 1976; Greenberg, 1982; Pritchard, 1969). After considering previous research and critiques, Greenberg (1982) concluded that support existed for the major predictions of equity theory in terms of productivity outcomes and, further, that these predictions held in both underpayment and overpayment conditions. This conclusion can be contrasted with an earlier review by Campbell and Pritchard (1976) in which they suggested that support had been found for equity theory in underpayment but not overpayment conditions. An example of equity research that is somewhat unique because it did not rely on monetary outcomes can be found in one recent study. In an extension of equity theory to rewards associated with the status of one's workspace, Greenberg (1988) found that work performance increased in overpayment conditions (i.e., assignment to a higher status workspace than was perceived to be just) and decreased in underpayment conditions.

Although research has tended to support the basic predictions of equity theory on performance outcomes, criticism in the 1970s proved problematic and eventually led to a decline of interest in this theory (Greenberg, 1990b). The most serious problem with equity theory is a lack of specificity regarding individual responses to inequity. Equity theory does not predict when cognitive as opposed to behavioral responses will occur, or what the nature of the cognitive responses may be. Additionally, ambiguity exists as to what defines the domains of inputs and outcomes. For example, in addition to work production, inputs could potentially include tenure, attendance, and organizational citizenship behaviors. Inputs and outcomes also may not always be distinguishable, as an increase in responsibility could be interpreted as an input (increased duties) or an outcome (increased prestige) (Greenberg, 1982). Perhaps as a result, there has been a shift in research interest away from equity theory and toward issues of procedural justice. Additionally, greater attention has begun to be directed toward other distributive rules such as equality and needs.

Alternate distributional rules. Cohen (1987) distinguishes between differentiating and nondifferentiating distributional rules. Differentiating distributional rules, such as equity and needs, specify a characteristic or set of characteristics (e.g., relative contributions and individual needs) upon which outcome distributions can be evaluated for different recipients.
Additional differentiating distributional rules include "winner-takes-all" and rules based on individual differences such as sex or race (Cohen, 1987; Deutsch, 1986). Nondifferentiating distributional rules, such as equality, are based only on the distribution of outcomes and not on characteristics of recipients. The differences in these perspectives can be seen to extend well beyond their implications for organizational justice, as they reflect philosophical or political perspectives. Research in organizational justice has tended to examine distributive justice only from the equity perspective, while in the social psychology literature there is support for the existence of three distributional rules: equality, equity, and needs (Bierhoff et al., 1986). In a summary of the results of six studies that rewarded individuals under different distributional rules, Deutsch (1986) found that people initially preferred equity distributions over equality, and needs distributions were preferred least. Some situational and individual factors were found to be related to preferences. After working in interactive groups, individuals increased their preference for equality distributive systems, whereas those working in noninteractive groups demonstrated no change in preferences. Individual differences in performance were also related to preferences, in that high performers indicated less preference for equality and needs based distributive systems.

While this research suggests that the three distributional rules are mutually exclusive, this need not be the case. Leventhal (1980) suggested a means by which the three distributional rules could potentially be integrated. He suggested that evaluations of distributive justice are based on a weighted evaluation of the satisfaction of each of the three rules (plus any other distributional rules that may exist). Therefore, an individual's perceived deserved outcomes are a function of the deserved outcomes related to each rule. The weighting attached to each rule suggests that in some situations, some rules may contribute more to an overall evaluation of distributive justice than in other situations.
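Leventhal's integration lends itself to a weighted-sum expression; the formalization below is an illustrative paraphrase of his justice judgment model, with symbols chosen here rather than taken from this dissertation:

$$D = w_{\text{equity}}\,D_{\text{equity}} + w_{\text{equality}}\,D_{\text{equality}} + w_{\text{needs}}\,D_{\text{needs}}$$

where each $D_{\text{rule}}$ is the outcome the individual would deserve if that rule alone applied, and each weight $w$ reflects the situational salience of the rule. A setting that makes one rule salient raises its weight and thus its contribution to the overall judgment of deservingness.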
While research has not specifically addressed Leventhal's model, some research exists that is consistent with this perspective. Two separate lines of research suggest that certain distributional rules can be made more salient in some circumstances. Crosby, Burris, Censor, and MacKethan (1986) found that people were more sensitive to discrimination after seeing clear-cut examples of discrimination. This suggests that the equality rule may be more salient (or weighted more heavily) after observing situations in which equality has been violated. Similarly, Schwinger (1986) summarized research on the needs distributional rule in which individual needs were made salient to outcome allocators. Allocation distributions were found to deviate from equity in that needier recipients were given a greater proportion of the outcomes. The origin or locus of needs appeared to influence use of the needs distributional rule, in that intentionally caused needs were considered less in allocation decisions than were external, unintentionally caused needs.

Summary. In summarizing distributive justice theory with an orientation toward selection systems, three of the four elements of distributive justice should be emphasized: recipients, receipts, and distribution rules. The recipient of outcomes in a selection situation is clearly the job applicant. Less clear is the receipt, which could be interpreted either as the score received on a given selection test or as the receipt of a job offer. Distributive justice research indicates that the dominant distribution rule is equity; however, other research suggests that equality and needs distribution rules may become more salient when instances of rule violation are experienced. These issues and others, such as what constitutes the referent comparison group in a selection situation, are discussed in detail when a model of reactions to selection systems is developed. This discussion will also include issues of procedural justice, which are presented in the following section.

Procedural Justice

Procedural justice theory is concerned with the perceived fairness of the procedures used in making decisions; that is, the process of attaining various outcomes (Folger & Greenberg, 1985). Two major perspectives or models can be seen as initiating much of the current research and interest on procedural justice. Thibaut and Walker (1975) approached procedural justice from a legal perspective and emphasized the role of process control in fairness perceptions. Leventhal (1980) addressed the reward-allocation decision process and identified a number of components of allocation procedures and evaluation rules that influence fairness perceptions. While Thibaut and Walker's (1975) model has led to considerable research and theoretical development, Leventhal's (1980) perspective has gone largely unevaluated. However, research has followed Leventhal's lead in that there have been attempts to empirically describe the procedural aspects of performance appraisals (Greenberg, 1986a), recruitment procedures (Bies & Moag, 1986), and managerial fairness (Sheppard & Lewicki, 1987). In addition to distributive and procedural justice, some researchers have proposed a third type of justice, interactional justice, which reflects the fairness with which procedures are implemented or enacted (Bies & Moag, 1986). Although it is important to distinguish the formal properties of procedures from the manner in which procedures are implemented, the current discussion will incorporate both aspects under the rubric of procedural justice. Following a discussion of these different procedural justice perspectives, a conceptualization is presented which distinguishes among the following three aspects of procedural justice: (a) formal properties of procedures, (b) explanation of procedures, and (c) interpersonal treatment. This distinction will be used as a framework for examining the procedural fairness of selection systems.

Process control and procedural justice. Thibaut and Walker's (1975) original research compared adversarial and inquisitorial methods of dispute resolution in terms of the perceived fairness of, and satisfaction with, litigation decisions. They consistently found greater perceived fairness and satisfaction with adversarial procedures, regardless of the decision outcome. This research established procedural justice as an important influence in reactions to decisions. In developing a theory to account for procedural justice effects,
Thibaut and Walker (1975) distinguished between process control and decision control. Process control reflects the amount of control or input an individual has in the decision-making process, whereas decision control reflects the actual impact an individual has on the decision outcome. Thibaut and Walker believed that the distribution of process and decision control to the individual affected by the decision influenced the individual's perceptions of procedural justice. Adversarial dispute resolution procedures offer the disputants more process control and therefore seem fairer. A considerable body of research has found support for Thibaut and Walker's theory that the distribution of control within a procedure influences procedural fairness (Lind & Tyler, 1988). In particular, subsequent research has replicated the basic findings in a variety of laboratory and applied decision situations (Tyler, 1987). Research in organizational behavior has extended the basic findings from dispute resolution situations to reward allocation situations. Folger (1977) examined the influence of providing individuals the opportunity to express their "voice" in determining reward allocations. Voice is often used as a term analogous to process control, although it does not necessarily imply "control" over the process, rather just the opportunity to offer input (Folger & Greenberg, 1985). Participants in Folger's study were either paid equitably or inequitably for work on a series of tasks and were either provided (voice condition) or not provided (mute condition) the opportunity to influence the reward allocation decision. Even though those in the voice condition did not actually influence the allocation decision, this condition was seen as fairer, more satisfying, and more acceptable than the mute condition. Considerable research has replicated Folger's (1977) basic findings in actual reward allocation situations (Bies & Shapiro, 1988) and performance evaluation situations (Dipboye & de Pontbriand, 1981; Landy, Barnes, & Murphy, 1978). All of these studies used a survey methodology and found that the opportunity to offer one's viewpoint in the decision process was correlated with the perceived fairness of the allocation or evaluation procedure. Landy, Barnes-Farrell, and Cleveland (1980) found that voice correlated with the procedural fairness of a performance evaluation even after partialling out the outcome of the evaluation.
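The partialling referred to here is the standard first-order partial correlation, shown below for clarity (the formula is general statistical background, not an equation reproduced from this dissertation):

$$r_{xy \cdot z} = \frac{r_{xy} - r_{xz}\,r_{yz}}{\sqrt{(1 - r_{xz}^{2})(1 - r_{yz}^{2})}}$$

In this application, $x$ is voice, $y$ is perceived procedural fairness, and $z$ is the favorability of the evaluation outcome; a substantial $r_{xy \cdot z}$ indicates that voice relates to fairness judgments beyond any association either variable has with the outcome itself.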
While research has not supported Thibaut and Walker’s suggestion that decision control is more important than process control, research exists that is consistent with both the self-interest model and the group-value model (Greenberg, 1990b; Tyler, 1989). Lind and Tyler (1988) suggest that it would be premature to create a unified theory of procedural justice and instead argue for the importance of both models. In spite of this warning, Leung and Li (1990) propose a unification in which the self—interest model accounts for process control effects on outcome- related variables and the group-value model accounts for process control effects on process-related variables. To date, no research has addressed the validity of Leung and Li’s proposal. In summary, the initial research by Thibaut and Walker (1975) and considerable subsequent research (Lind a Tyler, 1988) point to the importance of procedural justice in evaluations of decision fairness. Further, process control is one important component of the decision process that influences perceptions of procedural justice. The next section describes a number of other components, aside from process control, that may influence perceptions of procedural justice. W- In contrast to the process control model of procedural justice which focusses on (JIWderlying processes, Leventhal’s (1980) perspective details the structural components that are thought to exist in individuals’ “:‘=’£lnitive conceptualizations of the allocation process. Leventhal 19 defined seven components of allocation procedures that are considered when forming evaluations of procedural fairness. They include: (a) the selection of agents who will be making the allocation decisions, (b) the establishment and communication of procedures for making allocations, (c) the gathered information upon which the decisions will be made, (d) definition of the decision structure, (e) the presence of an appeals procedure, (f) safe-guards to ensure objectivity in the decision process, and (g) the existence of mechanisms for changing the allocation procedures. Each of these components may contribute in varying degrees to overall judgments of procedural fairness. Leventhal (1980) does not argue that all components are evaluated in all evaluations of procedural fairness, but rather the decision setting may influence the importance or salience of specific components. Additionally, allocation procedures are defined broadly as any situation where rewards, resources, or other outcomes are distributed. The evaluation of each component of the allocation procedure is guided by the following six procedural rules (Leventhal, 1980): (a) allocation procedures should be consistent over persons and time, (b) personal biases or self-interests should be prevented from influencing the allocation process, (c) decisions should be based on as much gssggsss information as possible, (d) decisions should be modifiable, (e) allocation procedures should be representative of the interests of all affected individuals, and (f) procedures must be compatible with ethical values of the individual. As in Leventhal’s (1980) <3erformers on a trainability test than among high performers. More clirect evidence of distributive justice effects are found in studies ‘CLhat demonstrated a relation between test performance and perceptions of test accuracy and fairness. Teel and Dubois (1983) interviewed high and 41 low performers after participation in an assessment center, but before feedback on performance was communicated. 
Compared to low performers, more high performers felt their performance in the center accurately reflected what they would do on the job and felt the assessors’ evaluations were accurate. Similarly, Noe and Steffy (1987) found reactions to an assessment center, which included felt accuracy and satisfaction, to be strongly related to assessment center performance. In a summary of research on the perceived face validity of assessment centers, Dodd (1977) indicated that assessment center performance, feedback of performance, and decisions based on performance can all influence perceptions of face validity. For example, in one study ass of selected individuals believed the center assessed many of the important qualities required for effective managers, whereas 63% of those not selected made a similar evaluation. Papsr-and-Dsncil Tssss Paper-and-pencil tests can include ability tests, personality inventories, and biodata forms. While these represent common forms of selection tests, relatively little research has examined reactions toward different forms of paper-and-pencil tests. More common is research comparing paper-and-pencil testing to work sample testing or computerized testing. As discussed above, reactions were less favorable to written job knowledge tests than work sample tests (Schmidt et al., 1977). Similarly, Robertson and Smith (1989) cited unpublished research that demonstrated less favorable reactions, in terms of perceptions of encouracy and fairness, toward biodata than toward situational interviews «tend assessment centers. Research discussed in more detail below demonstrated less favorable reactions to paper-and-pencil tests than 42 computer adapted tests (Arvey, Strickland, Drauden, a Martin, 1990; Schmidt, Urry, a Gugel, 1978) Recently, research has begun to investigate both reactions to different types of paper-and-pencil tests and the reasons for differences in these reactions. Smither and Pearlman (1991, April) had both college recruits and recruiting managers rate the extent to which they perceived different selection procedures to be job-related, both in terms of criterion-related validity and content validity. Participants were given descriptions and sample items for eight types of cognitive ability tests as well as a personality inventory, a biodata inventory, structured and unstructured interviews, and two assessment center tasks. The interviews, assessment center tasks, and cognitive ability tests with relatively concrete items were perceived to be more job related than personality, biodata, and cognitive ability tests with abstract items. This study also manipulated whether or not a description was given of the construct being assessed by the cognitive ability test and the face validity of cognitive test items, which was manipulated by rewriting items within a business context. Neither of these manipulations influenced perceptions of job-relatedness. Kluger and Rothstein (1991, April) manipulated both test type (cognitive ability vs. biodata) and distributive outcome (success vs. failure) in a computer simulated job application situation. Dependent variables were a plethora (28 actually) of perceptual, attitudinal, and affective questionnaire measures. Among other effects, biodata tests \were perceived as fairer than cognitive ability tests, and both test 1types were seen as fairer in the success than in the failure condition. (thoditionally, the company image was perceived to be better in the 43 success than failure conditions. 
The authors suggested that differences in the cognitive demands of the two test (cognitive ability being more demanding than biodata) led to reactional differences on the part of test takers. Some perceptual data was offered to support this suggestion. Although, both of these studies make useful contributions in terms of investigating aspects of selection tests that influence applicants’ reactions, they both suffer from and highlight a limitation of this type of research. The first study used descriptions of selection procedures while the second used simulated procedures, and as such, the results may be highly dependent on the test descriptions or test samples presented. This may account for the discrepancy in the findings of these two studies. Smither and Pearlman (1991, April) found that cognitive ability tests were perceived as more job-related than biodata inventories, while Kluger and Rothstein (1991, April) found the opposite results for perceived fairness. Future research would likely produce more consistent results if specific procedural characteristics of different tests were manipulated rather than the types of tests themselves. For example, Kluger and Rothstein’s hypothesis about cognitive demand influencing test reactions could be assessed by manipulating difficulty within, rather than across test types. Qompgtsrizsd Testing Three studies have examined applicants’ reactions to computerized testing. Schmidt et al. (1978) assessed reactions of examinees toward (computer adaptive testing (CAT) in which the test was customized to individuals on the basis of test performance and item difficulty. (Overall reactions were positive, and when compared to paper-and-pencil 44 tests, 83% of examinees preferred CAT. Specific advantages cited were that CAT was fast, required fewer questions, provided immediate feedback, and allowed for faster notification of qualification for a job. Similarly, Arvey et al. (1990) compared reactions of military personnel after completing a paper-and-pencil and a CAT version of the ASVAB and found both motivation and concentration to be higher with the CAT version. Additionally, belief in the validity of tests was higher for CAT testing than paper-and-pencil testing. Somewhat different results were found when computerized testing was extended to the interview situation (Martin a Nagao, 1989). Simulated applicants for a ”high-status" job expressed more resentment toward both computerized and paper-and-pencil interviewing than traditional face-to-face interviewing. The interpretation given was that these non-traditional interview styles violated expectations generated by a fair interview script. Additional explanation can be found when one compares the findings from the Schmidt et al. (1978) study of CAT. Advantages of CAT over paper-and-pencil testing were cited in terms of speed, test length, and feedback immediacy. None of these factors differentiated the computer interview format from the traditional interview format. Thus, it is likely procedural issues, rather than the test medium that influences individuals’ reactions to computerized testing. Qrgg Testing Recently, a considerable volume of research has begun to examine reactions to drug testing. This is likely a result of the dramatic ‘1ncrease in the implementation of organizational drug testing programs, as well as individual concerns over invasion of privacy issues and 45 concerns over negative outcomes that result from testing positive (Konovsky a Cropanzano, 1991). 
Research has examined both procedural and distributive factors that influence reactions to drug testing, including who gets tested (what jobs drug tests are used for and how people are selected for testing), how privacy is maintained, and what outcomes result from testing positive. Additionally, some research has linked these perceptions to further organizational outcomes. Murphy, Thornton, and Reynolds (1990) examined college students’ attitudes toward drug testing under various procedural conditions. While the most dominant finding was considerable variability in attitudes, some procedural conditions were clearly related to perceptions of the favorability of a program. The type of job influenced favorability perceptions, with a correlation found between Fuarceived job danger and ratings of acceptability. Additionally, random drug testing was seen as less favorable than testing all individuals or only testing those with a history of drug use. Administrative procedures such as how the test was conducted, the type of test, and to Whom the results were made available influenced attitudes toward the Program. Finally, the consequences of the program influenced attitudes, 1" that more negative attitudes were found toward dismissal policies than with less punitive consequences. Murphy, Thornton, and Prue (1991) further examined the relat 1onship between job characteristics and acceptability of drug test“ rig programs by comparing perceived danger, PAQ, and DOT ratings with acceptability ratings across a number of jobs. A number of job charficteristics were related to acceptability, including repetitiveness, de: gree of monitoring, involvement of psychomotor skills, performance 46 under stress, and contact with the public. Consistent with Murphy et al. (1990) perceived danger was the best predictor of acceptability, and some evidence suggested that perceived danger mediated the relationship between job characteristics and acceptability. Stone and Kotch (1989) found more negative attitudes toward drug testing programs that provided no advanced notice of testing or that resulted in dismissal rather than rehabilitation. Similarly, in a survey of human resource managers’ perceptions of the effectiveness of their drug testing program, Gomez-Mejia and Balkin (1987) found that effectiveness perceptions were related to administration factors such as use of random testing and extent of program monitoring, as well as outcome factors such as detection for rehabilitation purposes. Rather than directly assessing reactions to drug testing, Grant and Bateman (1990) examined attitudes toward the company and intentions ta: apply for a job as a function of the presence of and need for a drug testing program. Need for a testing program was manipulated through descriptions of accident rates, employee theft, and absenteeism, as well as a statement linking these problems to drug testing. Attitudes and intentions were more negative when a drug testing program was present than when it was not present and were more negative when a need for a Program was perceived. Konovsky and Cropanzano (1991) incorporated an organizational Just-g Ge perspective into an examination of the relationship between . per""3962>‘t.ions of a drug testing program and attitudinal and behavioral organ ‘5 zetional outcomes. Perceptions of procedural fairness and company exDianation for testing predicted trust in management, affective c can” tlatent, job satisfaction, intentions to quit, and employee 47 performance. 
Outcome fairness accounted for no unique variance in any of the organizational outcomes.

Integrity Testing

As with drug testing, both procedural and distributive issues may be a concern for examinees. While overt integrity tests are face valid, the content of the questions may be objectionable to individuals, and concerns may arise regarding fakability. Personality-based measures lack face validity and so may also be perceived to be unfair. Given the high false positive rates of many integrity tests (Sackett et al., 1989), perceived accuracy, or concerns about distributive justice, may also be an issue. In a recent review of integrity testing, Sackett et al. (1989) found only one study that examined test takers' reactions to overt integrity tests. Results indicated that after taking an integrity test, most college students felt the use of this test was appropriate. However, 10% of the students reported they would refuse such a test in a selection testing situation. In a more recent study, Stone and Herringshaw (1991, April) examined how three procedural factors influenced reactions to honesty testing. Using hypothetical organizational scenarios and an experimental design, these researchers manipulated the purpose of the honesty test (job application vs. incumbent theft detection), the relevance of honesty testing to the job (the degree to which theft could be a problem), and the use of test results (supporting evidence vs. primary evidence). Reactions, which included perceived fairness, invasion of privacy, and resentment, varied as a function of test purpose and use of results, and were more negative in the incumbent theft detection and primary evidence conditions than in the job applicant and supporting evidence conditions. Reactions were not influenced by the relevance of honesty testing to the job; however, manipulation checks indicated that participants did not perceive this manipulation.

Methodological Advances

For the most part, research examining reactions to selection procedures has tended to use single- or multi-item scales that are unique to individual studies. Only recently have researchers attempted to develop scales using psychometrically rigorous procedures. Lounsbury, Bobrow, and Jensen (1989) developed a 17-item unidimensional scale to measure attitudes toward employment testing. Items dealt with the ability of tests to predict job performance, fairness to different people, and perceived use of tests, as well as perceived fairness factors such as invasion of privacy and reduction of favoritism. When related to previous experience with employment testing, attitudes were more favorable among people who had received feedback on their test performance and who were told how the test related to future job performance. A second study compared attitudes of four groups of people who had undergone employment testing. People who had failed the test and did not get a job expressed the most negative reactions, followed by a control group who had taken the test but had not yet received results. Those who passed the test and were waiting for a job, or who had been offered a job, expressed the most positive reactions. As the test was the same across groups, the differences in attitudes are attributable to test performance and outcome. Arvey et al. (1990) developed a Test Attitude Survey (TAS) that measured nine dimensions related to expressed effort and motivation on an employment test.
Dimensions included motivation, concentration, test ease, anxiety, attributions, achievement needs, and preparation. Two additional factors, belief in tests and future effects, can be loosely related to perceptions of procedural and distributive justice. Belief in tests assessed attitudes toward the suitability and validity of tests in general (not the specific test completed), and future effects tapped beliefs about the relationship between performance on the given test and future outcomes. As previously mentioned, Arvey et al. (1990) found different attitudes on a number of dimensions toward computer and paper-and-pencil testing. The TAS was also used to compare the motivation of applicants and incumbents, to relate motivation to test performance, and to relate motivation to job performance measures. While both of these measures of employment testing attitudes represent worthwhile assessment tools, and both demonstrated good scale development techniques, neither assessed many of the procedural elements associated with how selection procedures are implemented. That is, neither addresses issues such as face validity, consistency in administration, explanation of procedures, and interpersonal treatment. These factors and others will be discussed in more detail below in the presentation of a model of applicant reactions to selection systems.

Conclusions

The review of literature on perceptions of employee selection procedures indicates that different procedural concerns have been addressed with different types of selection procedures. Research on perceptions of interviews has tended to emphasize characteristics and behaviors of the interviewer, while issues such as interview dynamics (e.g., two-way communication) and interview structure have not been examined. Perceptions of work sample tests and assessment centers include perceptions of fairness, accuracy, and job relatedness, as well as the influence of test performance on these perceptions. Different types of paper-and-pencil tests have been compared, in addition to comparisons with other selection procedures. Some research suggests that the cognitive demands of the selection test may influence the favorability of reactions. Researchers who have examined a number of alternate types of predictors (computerized tests, drug testing, and integrity tests) have done more in terms of examining procedural aspects of the test or the testing situation that influence examinees' reactions. Research indicated that computerized testing was preferred over paper-and-pencil testing because of the immediacy of feedback on test performance and hiring decisions. In research on drug testing, the following procedural aspects have all been related to acceptance of testing: job relatedness, privacy of results, procedures for selecting employees to be tested, outcomes of testing positive, and explanation about the need for testing. In addition, recent research has compared procedural and distributive aspects of perceptions of drug testing and found that procedural aspects demonstrated a stronger relationship to a variety of organizational outcomes. Issues examined with regard to integrity testing include job relatedness and the use of test information. Finally, research that was oriented toward developing solid measures of reactions to testing also identified procedural and distributive factors that influenced reactions. Feedback on performance and explanation of the job relatedness of the test were both related to positive reactions.
Hiring decisions that resulted from test outcomes were also related to reactions to testing. In addition to highlighting the importance of a number of procedural features of both tests and testing situations, this research may indicate that different procedural issues are more salient for some test types than for others. For example, interpersonal treatment may be the most salient procedural feature that influences reactions to interviews. Alternately, timeliness and informativeness of feedback may be a salient feature of computerized testing. Additionally, distributive concerns may be more salient for integrity and drug testing than for interviews and conventional paper-and-pencil tests. While research must assess the validity of these suggestions, this literature review does offer an initial point from which hypotheses can be generated.

CHAPTER 2

A MODEL OF APPLICANT REACTIONS TO SELECTION SYSTEMS

A review of the organizational justice literature identified a number of theories and issues that have direct applicability when considering how job applicants react to selection systems. Issues of procedural justice, distributive justice, and the interaction between the two have theoretical importance in selection processes. In the review of the literature on applicant reactions to different selection tests, many characteristics of tests that influence reactions were identified. However, missing from this literature is a theoretical framework or model that could be used both to organize research findings and to generate predictions about how applicants will react in a particular selection situation. Previous attempts to model social issues involved in the selection process can be found in the literature. Herriot (1989b) proposed that an often overlooked perspective in the selection literature is to view selection as a social process. The selection process can be conceptualized as a series of decisions by both the individual applying for a job and the organization. The individual must decide to apply for a position, to attend each of the selection processes, and finally to accept a job offer. Similarly, the organization must decide whether or not to evaluate the individual with a series of selection procedures and then offer a position. The interplay between these sets of decisions is what comprises the social process. Herriot (1989a) presented a model of the influence of performance in selection procedures on applicant self-efficacy and self-esteem. While no mention was made of how procedural aspects of the selection process may influence these outcomes, Herriot did suggest that rejection in one selection process may be attributed to inadequacies of the selection procedure, especially if individuals have an external locus of control. Additionally, Herriot suggested that the selection process can convey information about an organization's expectations during selection and on the job, and about how an organization treats individuals. These, as well as self-efficacy and self-esteem, may influence an applicant's decision to persevere in the remainder of the selection process and to accept a job offer. Schuler (in press) adopted the term social validity to refer to perceptions of the acceptability of selection situations. As a preliminary model of social validity, he suggested that four factors influence the acceptability of selection situations: relevant information, participation, transparency, and feedback.
Relevant information includes task domain and social/organizational context information related to the job for which one is applying. This is information that will aid applicants in their decisions to accept a job offer. Participation refers to direct and representative (e.g., by union) involvement in the development or choice of the selection procedure, as well as to the ability to exert influence or control over the selection situation. In the latter sense, participation is similar to the procedural justice concept of voice. Transparency refers to the degree to which applicants can ascertain the objectives of the selection situation and the evaluation process (e.g., decision criteria and standards), see the task relevance of the diagnostic instrument (i.e., face validity), and conduct some self-assessment during the measurement process. Finally, social validity is influenced by the content and form of feedback provided regarding performance in the selection situation. Schuler (in press) acknowledged that his was a preliminary model, and as such he provided no indication of how the four components of social validity may combine or integrate. Additionally, he did not speculate on outcomes associated with acceptable or unacceptable selection situations. The value of this model is that it attempts to organize and initially define four procedural components of selection systems that may influence applicant reactions. Addressing a similar issue as Herriot (1989b) and Schuler (in press), Arvey and Sackett (in press) argued for an expansion of the definition of test fairness to include perceptions of fairness. They presented a list of determinants of selection system fairness perceptions that were organized within five categories: selection system content, system development process, system administration process, system context, and selection outcomes. Selection system content included job relatedness, objectivity, invasiveness, fakability, applicant control, and thoroughness of KSA coverage. Selection system development process concerned the adequacy of job analysis and validity evidence, as well as involvement of professional and other representation in development. Procedural issues such as consistency across candidates, opportunity to review scores and be reconsidered, confidentiality, and prior information about the process were included under the heading of selection system administration process. Selection system context involved the firm's history, the selection ratio, and organizational resources, while selection system outcomes included the maximization of performance and utility, optimization of representation, moral outrage, satisfaction and equity, turnover, and recruitment. The Arvey and Sackett (in press) list of determinants of perceived selection system fairness is the most comprehensive attempt to define the domain of factors that may influence applicants' reactions. What is missing from this paper is a clear model of how the different determinants combine to influence perceptions. The authors suggested that different stakeholders will be sensitive to different variables in forming their perceptions, but offered no elaboration of the nature of these differences. Additionally, no attempt was made to tie the list of determinants to past research on reactions to selection procedures, nor to theory from organizational justice.
A final model of social aspects of selection was developed by Robertson and his colleagues (Iles & Robertson, 1989; Robertson & Smith, 1989) as a conceptualization of the psychological impact of personnel selection methods on individual candidates. It was hypothesized that features of the selection method, including intrusiveness, face validity, type/degree of feedback, and sympathetic treatment, would influence applicants' cognitive and affective reactions toward the process. The nature (e.g., accept vs. reject) and kind (e.g., specificity) of decision feedback provided was also proposed to impact reactions. In addition, they suggested that the relationship between method and reactions was moderated by personal characteristics such as life/career stage, self-efficacy, self-esteem, locus of control, and experience with selection procedures. Finally, it was proposed that cognitive and affective reactions should lead to outcomes such as work commitment, performance, turnover, psychological well-being, and personal agency. The Robertson model represents the best example in the literature of a model that attempts to define both the content and process factors that influence reactions to selection procedures. In addition, Iles and Robertson (1989) reviewed research on attitudes and reactions to different selection procedures prior to presenting the model. As with all of the other models, the Robertson model was presented as an initial conceptualization or a preliminary model, and the authors acknowledged that research is needed to more completely understand the underlying factors and processes. Clearly, what is missing from the previous models of social aspects of the selection process is a solid link to psychological theory. Herriot (1989b) suggested that theory from social psychology can be used to understand the underlying social processes of selection systems, and indeed incorporated theory on self-efficacy and self-esteem in his model. Additionally, Arvey and Sackett (in press) suggested that research from organizational justice theory is consistent with their perspective, although they declined to make the connections explicit. Clearly, a useful model of applicants' reactions to selection processes would include both psychological theory from organizational justice and prior research on reactions to selection procedures. Such a model is developed and presented in the following section.

The Conceptual Model

The conceptual model of applicant reactions to selection systems is developed from both the organizational justice literature and the selection test reactions literature. The model is presented in Figure 1 and proposes that situational and personal factors influence the extent to which procedural and distributive rules are perceived as satisfied or violated. That is, factors such as test type, human resource policy, and human resource personnel influence applicants' perceptions of the procedural justice of the selection system. Procedural justice is conceptualized as being comprised of 13 rules that are related to formal characteristics of the selection system, explanations offered during the selection process, and interpersonal treatment. Evaluations of the extent to which each of these rules is satisfied or violated are combined to form an overall evaluation of the fairness of the test and selection system.
In terms of distributive justice related to test performance and subsequent hiring decisions, performance expectations influence perceptions of equity in either the test outcome or the hiring decision outcome. While equity is predicted to be the primary distributive rule, situations that influence the salience of discrimination, or the salience of special needs on the part of certain applicants, can impact the extent to which equality and special needs distributive rules also contribute to evaluations of test outcome and hiring decision fairness. In addition to these main effects, procedural rules are predicted to moderate the relationship between distributive justice and outcome fairness, and distributive rules are predicted to moderate the relationship between procedural justice and procedural fairness. A final portion of the model deals with the relationship between fairness perceptions and organizational outcomes. These outcomes include some variables common to both accepted and rejected candidates.

[Figure 1. A model of applicant reactions to employee selection systems. The figure depicts situational and personal factors influencing the satisfaction or violation of procedural justice rules (formal characteristics, explanation, and interpersonal treatment) and distributive justice rules (equity, equality, special needs), which combine to form fairness evaluations and, in turn, individual and organizational outcomes. The pages completing the description of the model's outcomes are not legible in this copy; the text resumes in the discussion of the job relatedness rule.]

Procedural Justice Rules

Job relatedness. Several of the procedural dimensions identified in prior organizational justice research are conceptually similar to job relatedness (recall Table 1). Leventhal proposed the accuracy rule, which states that decisions should be based on as much good information as possible, and that procedural fairness is violated when performance is evaluated on the basis of inappropriate information. Similarly, Sheppard and Lewicki suggested the resource utilization rule, in which managers are supposed to utilize available expertise and resources accurately to make good decisions. Considerable prior research has examined the job relatedness of tests and the effects of relatedness on fairness perceptions. Positive reactions have been documented with assessment centers (Dodd, 1971) and work sample tests (Schmidt et al., 1977), which are both highly job related. Smither and Pearlman (1991, April) directly assessed perceived job relatedness and found that interviews and assessment center tasks were perceived to be more job related than personality tests, biodata forms, and cognitive ability tests with abstract items. Similarly, research cited by Schuler (in press) had individuals indicate whether or not a given test was restricted to job relevant characteristics. While the majority agreed that spelling and work sample tests were restricted to job relevant characteristics, no one made such judgments about vocational interest inventories or personality inventories. Finally, research from the drug testing literature is somewhat related in that drug testing is seen as more acceptable when the need for drug testing is indicated by characteristics of the job, such as perceived danger (Murphy et al., 1991). With drug testing, the procedural rule may be more aptly described as job relevance rather than job relatedness.

Opportunity to perform.
Considerable research in the organizational justice literature on voice suggests that procedures are perceived to be more fair if recipients of the decision outcome have the opportunity to express themselves prior to the decision. For example, Greenberg (1986a) identified solicitation of input as one factor that influences the perceived fairness of performance appraisals, and Tyler and Bies (1990) stated that one of the norms of proper decision making conduct is adequately considering an employee's views. In the selection domain, this can be interpreted as having adequate opportunity to demonstrate one's knowledge, skills, and abilities in the testing situation. Bies and Shapiro (1988) presented individuals with recruiting interview scenarios in which the interviewee either did or did not have the opportunity to demonstrate competencies and ask questions of the interviewer. Perceptions of procedural fairness were higher when the interviewee had the opportunity to offer input. Opportunity to perform can be seen as distinct from job relatedness in that a test may be job related but may tap only a minor portion of the performance domain of the job. For example, a typing test is certainly job related for most secretarial jobs, but it assesses only a fraction of the skills most secretaries actually use on the job. Similar to the content validation process, which requires that the test adequately sample all important aspects of the job domain being assessed, perceptions of test fairness may also reflect this concern. To my knowledge, no research in the selection domain has examined the impact that opportunity to perform has on perceptions of test fairness.

Fakability. A third characteristic of selection tests is their fakability, or the extent to which information can be distorted. Arvey and Sackett (in press) discussed the scenario in which applicants may be torn between wanting to be honest and wanting to tell the company what they want to hear. This would seem to be a particular concern with overt integrity tests, where it is often clear what the "correct" response should be. Similarly, Rynes (in press) provided a considerable discussion involving examples and research that has examined faking in selection situations. It seems that tests that are fakable would likely be perceived as less fair than those in which fakability is not an issue. Only one study has examined the issue of the perceived fakability of different test types. Kluger and Rothstein (1991, April) found biodata inventories to be perceived as more fakable than cognitive ability tests, trainability tests, and work sample tests. Somewhat surprising was that the correlation between perceptions of fakability and test fairness was almost zero. While this result may question the impact that test fakability has on perceptions of test fairness, a number of methodological and measurement problems with this study may qualify the findings. The issue of fakability can perhaps best be summarized by a recent discussion I had with an individual who completed an overt integrity test prior to employment:

    It was bogus! I had to answer questions about whether it was right for a man to steal bread for his starving children. I found out my test score (an accident on the part of my boss), and it turns out that I just barely made the cutoff. Perhaps I was too honest; maybe I should have given the answers I knew they were looking for.

Opportunity for retest.
An often cited factor that contributes to perceptions of procedural justice is the opportunity to challenge, modify, or receive a second chance at a decision making procedure. Organizational justice research directed at identifying dimensions of procedural fairness (e.g., Table 1) has consistently demonstrated the dimension of correctability (Sheppard & Lewicki, 1987), modifiability (Leventhal, 1980), or the ability to challenge or rebut an evaluation (Greenberg, 1986a). While test reaction research has not examined this factor as a determinant of fairness perceptions, many organizations provide the opportunity for retesting applicants. Additionally, the SIOP Principles call for the provision of retesting when this is technically feasible (SIOP, 1987).

Consistency of administration. In the organizational justice literature, consistency, or applying consistent standards, was a procedural factor found in all three lines of research listed in Table 1 (Greenberg, 1986a; Leventhal, 1980; Sheppard & Lewicki, 1987). Similarly, in a summary of research on interactional justice, Tyler and Bies (1990) suggested five norms of proper decision making conduct, one of which was applying decision criteria consistently. Consistency refers to ensuring that decision procedures are consistent across people and over time. When considering consistency across people, Leventhal (1980) pointed out that this rule is similar to the distributive rule of equality, which suggests that all people should have an equal chance of obtaining the decision outcome. Concern for standardization or consistency of test administration has been shown to be a factor that influences the acceptability of drug testing. Drug programs received greater acceptance or perceived effectiveness if they involved testing all individuals or only those with a history of drug use, rather than random testing (Gomez-Mejia & Balkin, 1987; Murphy et al., 1990). It seems reasonable to expect that concerns for consistency may be influenced by test type and may be more salient for some test types (e.g., interviews) than for others (e.g., paper-and-pencil tests).

Privacy and confidentiality. Applicants' reactions to the selection process are likely influenced by issues of invasion of privacy and maintenance of confidentiality of testing results. While these issues have not been addressed in much detail in the justice literature, Leventhal (1980) mentioned an ethicality rule. He suggested that procedures must be compatible with the ethical values of the affected individual, and that information gathering methods that involve deception or invasion of privacy will likely be perceived as unfair. The strongest support for privacy and confidentiality concerns can be found in the literature on drug testing. A number of studies have indicated that reactions to drug testing programs are influenced by the existence of safeguards to ensure privacy and confidentiality of results (Gomez-Mejia & Balkin, 1987; Murphy et al., 1990). Issues of privacy have also been studied for other types of selection procedures. Schuler (in press) cited research that had participants in a selection system indicate whether or not they felt different tests respected their privacy. Cognitive ability and work sample tests received the most support in terms of respecting privacy, whereas less than ten percent of the respondents felt that self descriptions and personality inventories respected their privacy.
Thus, concerns about invasion of privacy are probably most dominant with drug testing, personality testing, and probably integrity testing as well.

Feedback. The provision of timely and informative feedback is cited as an important factor in perceptions of interactional justice (Tyler & Bies, 1990). Timeliness was also a rule of managerial fairness developed by Sheppard and Lewicki (1987). In the testing literature, reactions to testing were more favorable among people who had received feedback on their test performance than among those who had not (Lounsbury et al., 1989). Research has also specifically addressed the issues of timeliness and informativeness. Schmidt et al. (1978) cited the speed with which feedback was provided, both in terms of test performance and qualification for a job, as an advantage of computer adaptive testing over paper-and-pencil tests. With respect to the selection process in general, Arvey, Gordon, Massengill, and Mussio (1975) found that as the time lag between initial application for a job and initiation of the selection process increased, the percentage of applicants withdrawing from the selection process also increased. In terms of the informativeness of feedback, Dodd (1977) indicated that feedback which is developmental and provides information on how to remedy deficiencies is valued regardless of the selection or promotion decision. Perceptions of fairness might also be increased if individuals are provided informative feedback on why they did not receive a job offer (Bies & Shapiro, 1988). The timeliness and informativeness of feedback may represent an important procedural factor because it is one factor that organizations could easily improve without the additional costs associated with selection system development.

Validity information. One of the most commonly examined variables in the interactional justice literature is the provision of justification for a decision. For example, Greenberg (1990a) found that the adverse outcomes following pay reductions were lessened when an adequate explanation was offered for the reduction. In a context more relevant to selection issues, Bies and Shapiro (1988) found that perceptions of the procedural fairness of scenario-based recruiting situations were greater when justification was offered for a negative decision than when no justification was offered. In terms of justification for a selection procedure, fairness perceptions are likely influenced by information on the extent to which a particular test is related to job performance and by indicating which aspects of performance are related to particular tests. Lounsbury et al. (1989) found that attitudes toward testing were more favorable when people were told how the test related to future job performance. It is possible that validity evidence would be particularly useful for tests with low face validity, such as cognitive ability tests. With the current interest in cognitive ability testing, the provision of validity evidence may be one relatively cost-free method of improving the acceptance of such testing. Arvey and Sackett (in press) suggested that validity evidence may be perceived differently by the public than by the selection specialist. Specifically, whereas selection specialists are concerned with improved overall prediction, the public may be more concerned with the extent to which the selection system makes mistakes. This may be particularly true of integrity and drug testing, where negative outcomes denote deviant behavior in some fashion.
Finally, a factor that may influence the impact of validity information relates to the way in which it is presented. As suggested in the persuasion literature (Petty & Cacioppo, 1981), the expertise of the persuasion source influences the credibility of the source and the impact of the persuasion message. This suggests that if selection specialists are involved in the development of a selection system, the communication of this fact may increase the credibility of the validity information, providing of course that selection specialists are seen as knowledgeable and trustworthy experts in selection system development. In terms of the content of information provided, it may be possible to include job analysis information, in addition to validity information, as a means of increasing credibility. Clearly, the provision of validity information is one issue in selection that should receive more research attention.

Information on scoring. Just as providing validity evidence may improve the perceived fairness of a selection system, providing information on how tests are scored and how scores are used in decision making may also increase procedural fairness. Leventhal (1980) discussed the importance of communicating allocation criteria to the recipients of rewards. Generally, it would seem that the more information an applicant has on how decisions are made, providing they are made fairly, the more likely the applicant is to perceive the process as fair.

Information on procedures. A final type of information that may influence fairness perceptions is a priori information on the selection process. This includes providing adequate instructions on how to complete specific tests, as well as information on what the entire selection process entails. Sheppard and Lewicki (1987) suggested that one of the rules for managerial fairness is providing information necessary to perform tasks. Arvey and Sackett (in press) hypothesized that the reduction in uncertainty that such information would provide, particularly with unfamiliar selection processes, would reduce applicants' beliefs that they performed poorly because they did not know what to expect. Information on the selection process may also help applicants prepare for the process. Imagine applying for an academic job and receiving an opportunity to visit a university without realizing that you must be prepared to give a job talk. While this is an extremely unlikely scenario, many jobs involve selection processes that one can prepare for or practice in advance without invalidating the procedure.

Interpersonal effectiveness of administrator. The final three procedural factors relate to the interpersonal treatment that applicants receive. The interpersonal effectiveness of the test administrator refers to the degree to which applicants are treated with respect and honesty. Bies and Moag (1986) content analyzed recruitees' descriptions of fair and unfair treatment and found that two of the four dimensions were related to respect, or alternately rudeness, and truthfulness, which included both candidness and deception. Research on reactions to interviews demonstrated that the warmth and thoughtfulness of an interviewer was the strongest predictor of impressions of the company and of expectations regarding job offers and acceptance of those offers (Schmitt & Coyle, 1976). An additional predictor of these outcomes was the interviewers' correctness and presentation style.
Similarly, Liden and Parsons (1986) found that the strongest predictor of general affect toward an interview was how personable the interviewer was, which included warmth, sincerity, interest in the applicant, and believability. While interpersonal effectiveness is clearly an important factor in applicants' reactions to interviews, it may also be an important factor in other aspects of the selection process. For example, a test administrator who simply administers a paper-and-pencil, work sample, or drug test may be able to affect the comfort and stress level of applicants and influence applicants' reactions to the testing process in general. Rynes (in press) provided an example of an applicant who felt like a criminal during drug testing because she was escorted by a uniformed guard to a doorless toilet stall.

Personal bias suppression. Personal bias suppression was discussed by both Leventhal (1980) and Sheppard and Lewicki (1987) as a rule of procedural justice. Similarly, Bies and Moag (1986) found that one of the dimensions that influenced recruitees' perceptions of fairness was the propriety of questions asked during recruitment. Question propriety included both improper questioning and prejudicial statements. Rynes (in press) provided examples of individuals who received both improper questioning involving personal issues and sexist statements from interviewers during a campus recruiting process. Research on reactions to selection processes has not examined the extent to which personal biases influence evaluations of fairness, although it is easy to speculate on the significant impact this treatment has on perceptions and later decision making. Perhaps the reason for this lack of research is the illegality of this behavior during the selection process (Arvey & Sackett, in press). Many companies are not likely to admit that such behavior occurs, if it is even recognized (Rynes, in press). A reasonable first step for research on this procedural factor would be to define and document the existence and prevalence of the problem, as this is probably known with less certainty than the impact that these biases have on applicants.

Two-way communication. Two-way communication refers to the opportunity for applicants to offer input into the selection process, but differs from the opportunity to perform in that it relates primarily to interpersonal interaction, which is not limited to just the opportunity to demonstrate one's KSAs. The research by Martin and Nagao (1989) demonstrated the difference between two-way communication and opportunity to perform. Simulated applicants for a high-status job expressed more resentment toward computerized and paper-and-pencil interviewing than toward traditional face-to-face interviewing. While all interviewing formats presumably provided adequate opportunity to perform, the non-traditional interview formats did not allow for the two-way communication that applicants appear to expect from interviews. Two-way communication can also refer to the opportunity to ask questions regarding the job, the organization, or even the selection process. The selection system must provide applicants with adequate opportunity to gain information that is relevant to making acceptance decisions. If such opportunities are not provided, favorability and satisfaction with the selection process will likely be lessened.
Clearly, procedural justice research on managerial fairness and performance evaluation demonstrates the importance of two-way communication (Greenberg, 1986a; Sheppard & Lewicki, 1987). As applicants are not likely to expect two-way communication during all aspects of the selection process, it would be useful to determine for which selection procedures two-way communication is a salient issue.

Conclusions. The model outlined in Figure 1 suggests that test type, human resources policy, and human resources personnel are all expected to influence the extent to which each of these 13 procedural rules is violated. In addition, some of these rules may be more salient in some situations than in others. I discussed the possibility that two-way communication may not be a salient concern during all aspects of selection, but may be limited to certain selection procedures. It was also suggested that concerns for invasion of privacy would be more salient with drug, personality, and integrity testing. One line of research that could address these issues would be to examine the importance of each of these 13 procedural rules for a variety of selection procedures. This could be done either by asking for incidents of fair and unfair treatment associated with different selection procedures and taking the frequency of incident generation as an indication of importance, or by directly collecting importance ratings of the 13 rules given a particular selection procedure. A final consideration would be the importance of these 13 rules for the selection and recruiting process as a whole. From a predictive perspective, this is the most important concern because it is likely an applicant's overall reaction to the selection process that has the greatest impact on outcomes of individual and organizational relevance.

Distributive Justice Rules

In their most general form, distributive justice theories are concerned with the extent to which recipients receive rewards in an amount that is consistent with a given distribution rule. Three aspects of this conceptualization that require consideration when extending distributive justice theories to the selection situation are the recipients of rewards, the receipts, and the distribution rules. The recipients are clearly defined as the applicants who experience the selection system. While it may seem obvious that an individual's perceptions of distributive justice should be influenced by the outcomes that he or she personally receives, recent research has assessed distributive justice in a more general fashion, reflecting the distributive justice of a selection program rather than an individual's experience in the selection program. Specifically, Konovsky and Cropanzano (1991) assessed outcome fairness with two questions that asked whether drug test results are used fairly and whether people get what they deserve as a result of drug testing. This conceptualization of outcome fairness is almost an instrumentality assessment and is clearly distinct from traditional theories of distributive justice, which are concerned with the outcomes an individual receives personally. To emphasize this distinction, consider that Konovsky and Cropanzano could have asked questions about whether an individual's drug test results were used fairly, and whether they personally got what they deserved as a result of drug testing.
Future research should likely pay more attention to the recipient when operationalizing distributive justice, and possibly assess whether different operationalizations result in the assessment of different constructs. Receipts are somewhat less clear, as outcomes can refer either to test scores or to the final hiring decision. If the justice of a particular selection test is being examined, then the most proximal outcome is the test score. The operational translation of the distributive justice of test scores can be seen as perceptions of test accuracy. If the justice of the entire selection system is of interest, then the hiring decision (accept/reject) is the logical outcome to consider. This distinction between test score as an outcome versus score as one component that leads to a hiring decision is directly comparable to the performance evaluation situation, in which both the performance evaluation and the outcome of the evaluation (e.g., salary raise or promotion) are important receipts (Greenberg, 1986a, 1986b). When the outcome of primary interest is the test score, it may still be necessary to consider further outcomes of the selection test, as these can impact reactions. This point is clearly demonstrated in the drug testing literature, where a number of studies have indicated that the outcome associated with testing positive can influence the acceptability of a drug testing program. Drug testing programs with dismissal policies rather than rehabilitation concerns were seen as less acceptable (Murphy et al., 1990), invoked more negative attitudes (Stone & Kotch, 1989), and were perceived as less effective (Gomez-Mejia & Balkin, 1987). Similarly, reactions toward integrity testing were more negative when results of the testing were used as the primary evidence for dismissal than when results were used as secondary or supporting evidence. The third aspect of distributive justice theories that merits attention is the distribution rule. Three rules have been identified in the distributive justice literature: equity, equality, and special needs. While equity is clearly the dominant rule that guides perceptions of distributive fairness, under some circumstances the other rules may become more salient. Each of these distribution rules is considered with respect to the selection situation.

Equity. The equity distribution rule suggests that people should receive rewards that are consistent with the inputs they contribute.

[A large portion of the original, covering the remainder of Chapter 2 and the Method section of Chapter 3, is not legible in this copy. The text resumes in the Results section, in the discussion of the dependent measures.]

The measures of perceived procedural fairness and perceived outcome fairness each consisted of four items and demonstrated good internal consistency (alphas of .85 and .86; all corrected item-total rs > .58). The means for each measure, based on the average of the four items, were approximately one standard deviation above the mid-point of each scale. The measures of procedural and outcome fairness were highly correlated (r = .72; corrected for unreliability, r = .84), which suggests either a lack of discriminant validity or high conceptual similarity in these constructs. Application/recommendation intentions were assessed with four items. This scale was similar to the fairness perceptions measures in terms of internal consistency (alpha = .83, all corrected item-total rs > .52) and average participant response (M = 3.46). However, given the low correlations between application/recommendation intentions and the two fairness perceptions (see Table 3), it appears that a distinct construct was assessed.
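The disattenuation reported above follows the standard psychometric correction for unreliability. As a worked illustration (a sketch using the coefficient alphas from Table 2; the formula itself is not stated in the original):

$$r_c = \frac{r_{xy}}{\sqrt{\alpha_x\,\alpha_y}} = \frac{.72}{\sqrt{(.85)(.86)}} \approx .84$$

Because even the corrected correlation remains below 1.0, some reliable unique variance remains in each fairness measure, which is why the text stops short of concluding that the two constructs are identical.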
Table 2
Dependent Measure Internal Consistencies, Means, and Standard Deviations

Variable                       No. of Items    n    alpha   Mean    SD
Procedural fairness(a)               4        258    .85    3.62    .78
Outcome fairness(a)                  4        258    .86    3.61    .77
Recommendations(a)                   4        258    .83    3.46    .75
Test motivation(a)                  10        260    .94    2.43    .74
Concentration(a)                     4        260    .87    2.12    .72
Belief in tests(a)                   4        260    .72    2.85    .70
Comparative anxiety(a)              10        256    .83    2.51    .57
Test ease(a)                         4        256    .68    3.05    .56
External attribution(a)              5        256    .73    2.20    .64
Need achievement(a)                  3        255    .75    2.20    .69
Global self-esteem(a)               10        260    .85    4.10    .57
Affective self-esteem(a)            16        260    .87    3.26    .52
Self-efficacy quantity(b)            2        257    .91     .00    .96
Self-efficacy quality(b)             2        257    .91     .02    .94
Performance quantity 1st             1        130    n.a.   6.08   1.59
Performance quantity 2nd             1        130    n.a.  10.12   2.41
Performance quality 1st              1        130    n.a.   2.07    .98
Performance quality 2nd              1        130    n.a.   2.26    .84
Job satisfaction(a)                 18        131    .91    3.41    .61

(a) All items based on 5-point scales.
(b) Items were standardized such that values represent z-scores.

Table 3
Intercorrelations of Independent and Dependent Measures

Variable                      1     2     3     4     5     6     7     8     9    10
 1. Procedural fairness
 2. Outcome fairness        .72*
 3. Recommendations         .30*  .23*
 4. Test motivation        -.13*  .10  -.43*
 5. Concentration          -.17*  .10  -.29*  .67*
 6. Belief in tests        -.33*  .10  -.24*  .44*  .43*
 7. Comparative anxiety    -.13*  .09  -.11   .40*  .54*  .40*
 8. Test ease              -.18*  .05  -.06   .17*  .11   .20* -.17*
 9. External attribution   -.19*  .20* -.18*  .20*  .12*  .20*  .59* -.33*
10. Need achievement       -.10   .11  -.26*  .41*  .34*  .14*  .31*  .03   .22*
11. Global self-esteem      .14*  .11   .15* -.18* -.21* -.06  -.10*  .00  -.32* -.30
12. Affective self-esteem   .13*  .19*  .23* -.13* -.20*  .00  -.27*  .05  -.21* -.23
13. Self-efficacy qnt.      .05   .03   .21* -.18* -.24* -.11* -.41*  .20* -.32* -.31
14. Self-efficacy qual.    -.02   .00   .23* -.29* -.24* -.12  -.32*  .10  -.17* -.37
15. Perf. quantity 1st      .00   .09  -.09  -.01  -.02  -.11  -.22* -.02  -.11   .04
16. Perf. quantity 2nd      .14   .06   .00  -.16  -.02  -.13  -.14  -.01  -.09  -.14
17. Perf. quality 1st      -.08   .00   .06  -.07  -.05  -.01  -.02  -.05  -.02   .02
18. Perf. quality 2nd      -.01   .04   .05  -.04  -.09   .12  -.08  -.02  -.04  -.07
19. Job satisfaction        .37*  .35*  .31* -.17* -.21* -.31* -.21* -.01  -.29* -.23
Test type                   .25*  .11*  .09  -.17* -.05  -.15*  .00  -.17*  .15* -.07
Explanation                -.08  -.08   .04  -.02  -.08  -.03  -.08   .05  -.02  -.02
Outcome                     .23*  .32*  .04  -.09  -.10  -.10  -.13* -.07  -.05  -.05

Variable                     11    12    13    14    15    16    17    18    19
12. Affective self-esteem   .43*
13. Self-efficacy qnt.      .33*  .35*
14. Self-efficacy qual.     .29*  .33*  .65*
15. Perf. quantity 1st      .04   .17   .10   .00
16. Perf. quantity 2nd      .21*  .01   .10   .00   .63*
17. Perf. quality 1st       .07   .06   .00  -.05   .04  -.03
18. Perf. quality 2nd       .17   .09   .01  -.01   .10  -.03   .52*
19. Job satisfaction        .13*  .11   .09   .15  -.11   .10   .03  -.01
Test type                   .02  -.12  -.08  -.02  -.04   .15   .07  -.05   .04
Explanation                 .02   .02   .01   .01   .19  -.05   .15   .19* -.17*
Outcome                    -.08   .12   .04   .08    -     -     -     -     -

* p < .05

These low intercorrelations are particularly noteworthy given that application/recommendation intentions were assessed in the same questionnaire, using the same format, as the fairness perceptions. In an effort to further document the convergence and discriminability of the two perceived fairness measures and the application/recommendation intentions measure, a principal factors analysis was conducted on the twelve items used to assess these constructs. A three factor solution that accounted for 61.6 percent of the variance was rotated with a varimax rotation. In confirmation of the other psychometric evidence, the three a priori factors emerged, with item loadings exceeding .48. The prerotation eigenvalues and rotated factor loading matrix are presented in Appendix H.
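As background for the reader, principal factors analysis fits the common factor model; in standard notation (mine, not the dissertation's):

$$\mathbf{R} \approx \boldsymbol{\Lambda}\boldsymbol{\Lambda}' + \boldsymbol{\Psi}$$

where R is the item correlation matrix, Lambda the matrix of factor loadings, and Psi a diagonal matrix of unique variances. The varimax criterion then rotates Lambda orthogonally to maximize the variance of the squared loadings within each factor, pushing each item toward one high loading and near-zero loadings elsewhere, which is what allows the three a priori scales to emerge as three rotated factors.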
The seven scales of the TAS were assessed with between three and ten items each, and internal consistency estimates ranged from alpha = .68 to alpha = .94. These values exceed those reported by Arvey et al. (1990), but the difference is likely because items were randomly ordered in previous research and were grouped by scale in my questionnaire. It should be noted that all scales are negatively coded, such that lower values indicate more positive attitudes. Negative coding is consistent with Arvey et al.'s (1990) presentation of the scales, except for test motivation, which these authors had positively scored. In this study the test motivation dimension was negatively scored so that test motivation would be positively correlated with the other six dimensions. Scale intercorrelations ranged from r = -.33 to r = .67 (see Table 3), and the pattern of intercorrelations is similar to that reported by Arvey et al. (1990). Test motivation was essentially uncorrelated with outcome fairness and only weakly correlated with procedural fairness and application/recommendation intentions. The greatest observed correlation was between application/recommendation intentions and test motivation (r = -.43), indicating that those who expressed greater test taking motivation were more likely to recommend the project to others. Global self-esteem demonstrated internal consistency that fell within the range found in prior research (alpha = .85), and participants tended to express high self-esteem (M = 4.10). Affective self-esteem also demonstrated good internal consistency (alpha = .87), but mean ratings were not as high as with global esteem (M = 3.26). The correlation between the two esteem measures was only moderate (r = .43), suggesting that two separate constructs were assessed. Self-efficacy was assessed along dimensions of performance quality and quantity, and each was assessed in two ways. Participants were asked to indicate whether or not they could perform the coding task at or above each of four levels, and then to indicate their confidence in this judgment (given that the judgment was affirmative). Generally, participants had problems understanding that they were supposed to indicate all levels at which they could meet or exceed the performance standard. Many participants checked only one level. Rather than treat these responses as uninterpretable (and therefore missing data), I inserted the missing judgments. That is, if a participant indicated that they could perform the coding task at or above the level of an above average performer with a confidence judgment of 4, but did not indicate any judgments for lower levels of performance, I assumed they could meet or exceed the lower performance levels with confidence judgments of 5. While this may inflate some of the levels of the confidence judgments, I do not believe that this inflation would be systematically related to any manipulated variables, and as such it should only increase the unreliability of this measure. The number of levels at which a person expected to meet or exceed the performance standard was counted, and the confidence judgments were summed. As the standard deviations of these self-efficacy estimates differed greatly, the two estimates were standardized and then averaged, such that the resulting means were approximately zero and the standard deviations were approximately one (see Table 2).
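Concretely, each composite can be written as the average of two z-scores; a minimal sketch in notation of my own (the original gives no formula):

$$z_i^{(k)} = \frac{x_i^{(k)} - \bar{x}^{(k)}}{s^{(k)}}, \qquad \text{SE}_i = \frac{z_i^{(\text{levels})} + z_i^{(\text{confidence})}}{2}$$

Standardizing first gives the level count and the summed confidence judgments equal weight despite their very different raw metrics, which is why the composites in Table 2 have means near zero and standard deviations near one.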
The correlations between the number of levels and the sum of the confidence judgments were high for both self-efficacy constructs (r = .83 for quantity and r = .84 for quality), as were the standardized coefficient alphas (alpha = .91 for both quantity and quality). The correlation between quantity self-efficacy and quality self-efficacy was high (r = .65), although not perfect even when corrected for unreliability (r = .71), suggesting that participants may have discriminated quantity and quality dimensions of performance efficacy. Additionally, the correlations between the self-efficacy measures and the self-esteem measures suggest that these constructs are differentiable (rs ranged from .28 to .35). Performance was assessed along dimensions of quantity and quality. Additionally, because of the possibility that effects on work performance would be short-lived, and because the work sessions were separated by a week, performance was assessed separately for the first and second work sessions. Performance quantity was operationalized as the number of manuscripts coded in each session, and on average individuals coded 16.20 manuscripts across both sessions. In the first, 75-minute session, individuals coded 6.08 manuscripts on average, while they coded 10.12 in the second, 105-minute session. The correlation between first and second session performance quantity was r = .63. Performance quality was defined as the number of coding errors per manuscript and was also calculated for each session. Individuals averaged 2.07 errors per manuscript in the first session and were somewhat less accurate in the second session, in which they averaged 2.26 errors. The correlation between first and second session performance quality was r = .52. As can be seen in Table 3, performance quantity and quality were uncorrelated, as intercorrelations ranged from r = -.03 to r = .10. Additionally, almost all of the correlations between the performance measures and the perceptual measures were nonsignificant. The 18-item job satisfaction measure demonstrated high internal consistency (alpha = .91) and reasonable differentiation from the other perceptual constructs (absolute values of rs ranged from .04 to .47). Mean job satisfaction was more than half a standard deviation above the scale midpoint (M = 3.41), indicating that the journal coding task was far from intolerable.

Manipulation Checks

The adequacy of the job relatedness manipulation was assessed by asking participants to indicate their level of agreement with seven statements regarding the perceived job relatedness of the selection procedure (alpha = .93). On a five-point scale that ranged from 1 "strongly disagree" to 5 "strongly agree," participants indicated an average agreement level of 3.05 (SD = .73). A 3 (job relatedness: overt integrity test, cognitive ability test, work sample test) x 2 (explanation of validity vs. no explanation) x 2 (hired vs. not hired) ANOVA on job relatedness perceptions demonstrated a strong effect of test type (F(2,248) = 90.60, p < .05) as well as a selection decision effect (F(1,248) = 8.83, p < .05). In confirmation of the job relatedness manipulation, the three tests represented high (work sample M = 3.76), medium (cognitive ability M = 2.88), and low (overt integrity M = 2.64) job relatedness. Comparisons of these means with Tukey's HSD indicated that all means were significantly different (p < .05). Job relatedness was also perceived to be higher among accepted participants (M = 3.17) than among rejected participants (M = 2.92).
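The Tukey comparisons used here and throughout the Results take the usual form; as a sketch (the studentized range value q depends on the number of means and the error degrees of freedom and is not reported in the original):

$$\text{HSD} = q_{\alpha,\,k,\,df_e}\,\sqrt{\frac{MS_{\text{within}}}{n}}$$

Any two condition means differing by more than this critical value are declared significantly different at the chosen alpha level; here k = 3 test-type means.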
As discussed in the Method section, an attempt was made to equate the three tests on perceived difficulty. Four items that assessed perceived difficulty were aggregated (alpha = .71) and had a mean rating of 3.75 (SD = .58), with lower numbers indicating greater perceived difficulty. Unfortunately, this measure demonstrated a significant test type effect (F(2,248) = 8.43, p < .05), with means of 3.76 for the work sample test, 3.57 for the cognitive ability test, and 3.90 for the overt integrity test. Although the mean differences were not large, comparisons with Tukey's HSD indicated that the cognitive ability test was perceived to be more difficult than the overt integrity test. This could potentially limit the magnitude of observed differences between these two tests, as more difficult selection procedures may be perceived to be less fair (Kluger & Rothstein, 1991, April). A selection decision effect was also found on the difficulty measure (F(1,248) = 15.41, p < .05), with the selection process perceived to be more difficult among rejected participants (M = 3.61) than among accepted participants (M = 3.88). A manipulation check for the explanation manipulation consisted of five items regarding perceived explanation of the selection process and hiring decision (alpha = .84, M = 3.21, SD = .80). Participants in the explanation conditions indicated higher levels of perceived explanation (M = 3.50) than those in the no explanation condition (M = 2.91; F(1,248) = 51.17, p < .05). Significant effects were also found for test type (F(2,248) = 27.04, p < .05) and selection decision (F(1,248) = 5.31, p < .05). Less explanation was perceived with the overt integrity test (M = 2.81) than with the other two tests (work sample M = 3.48, cognitive ability M = 3.43), and less explanation was perceived among rejected participants (M = 3.11) than among accepted participants (M = 3.31). Although the selection decision was perceptually obvious in terms of whether one was accepted or rejected, the underlying construct that was manipulated was hiring outcome equity. The adequacy of this manipulation was assessed with the scale of the selection system fairness measure that assessed perceived equity. This scale was comprised of seven items regarding whether or not one received outcomes consistent with what one expected or felt one deserved (alpha = .85, M = 3.14, SD = .73). In confirmation of the equity manipulation, those in the accepted group perceived the hiring decision to be more equitable (M = 3.63) than those in the rejected group (M = 2.63). No other main effects or interactions were demonstrated (all ps > .05).

Dependent Variables

Perceived fairness. Perceived procedural fairness and perceived outcome fairness were highly correlated (r = .72), so they were initially analyzed together with a 3 (job relatedness: overt integrity test, cognitive ability test, work sample test) x 2 (explanation of validity vs. no explanation) x 2 (hired vs. not hired) MANOVA. Significant effects were found for job relatedness (F(4,492) = 4.49, p < .05), selection decision (F(2,245) = 12.27, p < .05), and the interaction between explanation and selection decision (F(2,245) = 2.95, p < .10), but not for explanation (F(2,245) = .68, n.s.) or any of the other interactions (ps > .10). In order to further examine these effects, procedural fairness and outcome fairness were separately examined with 3 (job relatedness) x 2 (explanation) x 2 (selection decision) ANOVAs (see Table 4). Hypothesis 1a suggested a main effect of selection decision on procedural and outcome fairness.
This hypothesis was confirmed, as both fairness measures demonstrated outcome effects (p < .05; see Table 4). Perceived procedural fairness was greater among selected individuals (M_selected = 3.79, SD = .73) than among rejected individuals (M_rejected = 3.44, SD = .78), as was perceived outcome fairness (M_selected = 3.85, SD = .59, vs. M_rejected = 3.37, SD = .82). It is interesting to compare the magnitude of the effects of selection decision, an outcome manipulation, on procedural and outcome fairness. As would be expected, the mean difference between the hired and rejected groups was greater for outcome fairness (M_diff = .48) than for procedural fairness (M_diff = .35), while the standard deviations were comparable. Similarly, the effect size was greater for outcome fairness (η² = .091) than for procedural fairness (η² = .046).

The next hypothesis, that the job relatedness of the selection test would have a main effect on both fairness measures (Hypothesis 1b), was also confirmed (p < .05; see Table 4). The means for the procedural fairness measure were in the predicted direction for the high (WS), medium (CA), and low (OI) job relatedness conditions (M_WS = 3.93, SD = .65; v. M_CA = 3.55, SD = .76; v. M_OI = 3.44, SD = .82).

Table 4
Analysis of Variance Summary for Fairness Perceptions and Application/Recommendation Intentions

                                        F Values for Dependent Variables
Source                             df    Procedural   Outcome   Recommend
Test Type                           2    9.08**       4.21**    .97
Explanation                         1    .37          1.30      .49
Outcome                             1    11.83**      24.64**   .29
Test Type x Explanation             2    2.01         .14       1.46
Test Type x Outcome                 2    .92          2.42*     .34
Explanation x Outcome               1    .91          .93       3.00*
Test Type x Explanation x Outcome   2    1.16         2.03      1.05
Within cells                      246    (.55)        (.51)     (.57)

Note: Values in parentheses represent mean square terms. *p < .10. **p < .05.

Post hoc examination of the means with Tukey's HSD (HSD = .28, p < .05) indicated that perceived procedural fairness was greater with the work sample test than with either the cognitive ability test or the overt integrity test, which did not differ. Similarly, the means for the outcome fairness measure were in the predicted direction (M_WS = 3.81, SD = .60; v. M_CA = 3.59, SD = .72; v. M_OI = 3.47, SD = .86). Comparisons of the means for perceived outcome fairness (HSD = .27, p < .05) indicated a significant difference between the work sample and overt integrity test conditions. Outcome fairness for the cognitive ability test condition fell between these two means but did not differ significantly from either.

As with selection decision, it is possible to compare the magnitude of the effects and effect sizes for procedural and outcome fairness. Consistent with the fact that test type was a procedural manipulation, the mean difference between the work sample and overt integrity tests was greater for procedural fairness (M_diff = .49) than for outcome fairness (M_diff = .34). Likewise, the effect size was greater for procedural fairness (η² = .069) than for outcome fairness (η² = .033).

Hypothesis 1c stated that perceived procedural and outcome fairness would be greater when an explanation was offered for the selection system than when no explanation was offered. This hypothesis was not supported, as explanation did not have an effect on procedural fairness (F(1,246) = .75, n.s.) or outcome fairness (F(1,246) = 1.44, n.s.).
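Two statistics carry these comparisons, and both can be written out for reference. This is a textbook sketch: q denotes the studentized range statistic and n the per-group cell size, neither of which is reported directly above.

```latex
% Eta-squared: the proportion of total variance attributable to an effect.
\eta^2 = \frac{SS_{\text{effect}}}{SS_{\text{total}}}

% Tukey's honestly significant difference for k group means, error
% degrees of freedom \nu, and n observations per group:
HSD = q_{\alpha;\,k,\nu}\,\sqrt{\frac{MS_{\text{within}}}{n}}
```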
The final two hypotheses regarding perceived fairness suggested interactions between the selection decision and the job relatedness of the selection system (Hypothesis 1d) and the explanation offered for the selection system (Hypothesis 1e). While Hypothesis 1d was not supported for the perceived procedural fairness measure, a significant interaction was found with the outcome fairness measure (p < .10; see Table 4). The means for this interaction are presented numerically in Table 5 and graphically in Figure 6. A comparison of Figure 2 with Figure 6 indicates that the means follow the same pattern as suggested by Hypothesis 1d. Simple effect tests demonstrated a job relatedness main effect among rejected applicants (F(2,124) = 4.43, p < .05), with participants who completed the work sample test expressing greater outcome fairness than those completing the overt integrity test (p < .05). Means for the cognitive ability test fell between the two and did not differ significantly from either. The simple job relatedness effect for hired applicants was not significant (F(2,128) = .91, n.s.). Simple selection decision effects were significant at all levels of job relatedness (all ts > 2, p < .05). Thus, as predicted, job relatedness influenced perceptions of outcome fairness for rejected applicants but not for hired applicants. Hypothesis 1e was not supported, as the explanation by selection decision effect was not significant for either procedural or outcome fairness (see Table 4).

Application/recommendation intentions. Application/recommendation intentions refer to how likely a job applicant would be to apply for similar projects, or to recommend them to friends. It was expected that this measure would be related to perceived fairness; however, as indicated earlier, the application/recommendation intentions measure was only weakly correlated with procedural fairness (r = .30) and outcome fairness (r = .23). Hypotheses 2a through 2e were essentially the same as Hypotheses 1a through 1e, except that they referred to application/recommendation intentions rather than perceived fairness.

Table 5
Means and Standard Deviations for Test Type by Selection Decision Interaction on Perceived Outcome Fairness

                        Selection Test Type (Job Relatedness)
Selection    Overt Integrity    Cognitive Ability    Work Sample
Decision     (Low)              (Medium)             (High)
Hire         3.83a              3.76a                3.95a
             (.59)              (.64)                (.54)
Reject       3.12b              3.42bc               3.65c
             (.94)              (.75)                (.64)

Note. Numbers in parentheses represent standard deviations. In comparisons within columns and rows, means sharing a common letter are not significantly different (p < .05).

Figure 6
Job Relatedness by Selection Decision Interaction on Outcome Fairness
[Line graph of the Table 5 means: perceived outcome fairness for hired and rejected applicants across the three levels of job relatedness.]

These hypotheses were tested with a 3 (job relatedness) x 2 (explanation) x 2 (selection decision) ANOVA, and the results are summarized in Table 4. Support was not gained for Hypotheses 2a, 2b, or 2c, as no main effects were demonstrated on the application/recommendation measure (Fs < 1). Hypotheses 2d and 2e suggested interactions between each of the procedural manipulations and the selection decision outcome. Although the interaction between job relatedness and selection decision was not supported (F < 1), evidence was found for the interaction between explanation and selection decision (F(1,246) = 3.00, p < .10).
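Before turning to the pattern of means, here is a minimal sketch of how a follow-up simple-effects comparison of this kind can be computed. The file, DataFrame, and column names are hypothetical, and a pooled-variance t-test is assumed (which matches the classical ANOVA framework and the style of degrees of freedom reported below); this is illustrative, not the original analysis.

```python
# Simple-effects comparison: the explanation manipulation among
# rejected applicants only (hypothetical data and column names).
import pandas as pd
from scipy import stats

df = pd.read_csv("selection_study.csv")  # hypothetical data file

rejected = df[df["hired"] == 0]
expl = rejected.loc[rejected["explanation"] == 1, "recommend"]
no_expl = rejected.loc[rejected["explanation"] == 0, "recommend"]

# Pooled-variance (equal_var=True) independent-samples t-test.
t, p = stats.ttest_ind(expl, no_expl, equal_var=True)
print(f"t({len(expl) + len(no_expl) - 2:d}) = {t:.2f}, p = {p:.3f}")
```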
The pattern of means for this interaction is summarized numerically in Table 6 and graphically in Figure 7. Simple explanation effect tests indicated that the explanation manipulation had a positive effect for rejected individuals (t(125) = 1.70, p < .10) but no effect for hired individuals (t(129) = .71, n.s.). Similarly, simple selection decision effects demonstrated an effect in the no explanation condition (t(125) = 1.68, p < .10) but not in the explanation condition (t(129) = .77, n.s.). This pattern of results is consistent with the effect proposed in Hypothesis 2e and summarized pictorially in Figure 3.

Table 6
Means and Standard Deviations for Explanation by Selection Decision Interaction on Application/Recommendation Intentions

                       Explanation for the Selection System
Selection Decision     No Explanation     Explanation
Hire                   3.53a              3.45a
                       (.74)              (.75)
Reject                 3.31b              3.54a
                       (.73)              (.79)

Note. Numbers in parentheses represent standard deviations. In comparisons within columns and rows, means sharing a common letter are not significantly different (p < .10).

Figure 7
Explanation by Selection Decision Interaction on Application/Recommendation Intentions
[Line graph of the Table 6 means for hired and rejected applicants across the no explanation and explanation conditions.]

Test motivation. Test motivation was assessed along seven dimensions, some of which referred to the selection test and some of which referred to tests in general. The hypotheses generated with regard to test motivation did not differentiate among dimensions; rather, they were global predictions for all motivation dimensions. Hypothesis 3a stated a main effect of selection system job relatedness on test motivation, and Hypothesis 3b stated an explanation main effect. The test motivation measure was administered prior to determination of the selection decision, so no decision outcome effects were tested. Given that the test motivation dimensions were moderately correlated (see Table 3), hypotheses were initially examined with a 3 (job relatedness) x 2 (explanation) MANOVA. A main effect was found for job relatedness (F(14,488) = 4.54, p < .05), but the explanation effect was not significant (F(7,243) = .72, n.s.), nor was the job relatedness by explanation interaction (F(14,488) = .88, n.s.). To examine the job relatedness effects more closely, univariate 3 (job relatedness) x 2 (explanation) ANOVAs were conducted on each of the seven dimension scales (see Table 7). Test type main effects were found for the test motivation, belief in tests, comparative anxiety, test ease, and external attribution dimensions (ps < .05; see Table 7). The means for these effects, as well as post hoc comparisons using Tukey's HSD, are summarized in Table 8.

Test motivation was directly related to test type, such that as the job relatedness of the selection procedure increased, test motivation increased (indicated by lower scale values). Post hoc comparisons of the means indicated that while test motivation toward the overt integrity test was lower than toward the work sample test, the cognitive ability test fell between the two and did not differ significantly from either. The second scale, lack of concentration, did not vary with test type (F < 1; see Table 7). The belief in tests dimension contained specific and general items regarding whether the test (or tests in general) is valid and useful. Lower numbers reflect greater belief in tests, and a significant test type effect (p < .05) indicated greater belief for the work sample test than for the overt integrity or cognitive ability tests, which did not differ (see Table 8).
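The multivariate-then-univariate sequence used here (and elsewhere in this chapter) can be sketched in modern software. This is illustrative only: the DataFrame, file name, and all column names are hypothetical, and statsmodels merely stands in for whatever package produced the original output.

```python
# 3 x 2 MANOVA on the seven test motivation scales, followed by a
# univariate follow-up ANOVA on one scale (hypothetical data).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("selection_study.csv")  # hypothetical data file

# Multivariate tests of test type, explanation, and their interaction.
mv = MANOVA.from_formula(
    "motivation + concentration + belief + anxiety + ease + attribution"
    " + need_ach ~ C(test_type) * C(explanation)",
    data=df,
)
print(mv.mv_test())

# Univariate follow-up for a single scale, paralleling Table 7.
model = ols("motivation ~ C(test_type) * C(explanation)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

Type II sums of squares are assumed here; with a fully crossed, nearly balanced design such as this one, the choice among types matters little.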
Table 7
Analysis of Variance Summary for Test Motivation Scales

                          F Values for Dependent Variables
Source           df   Test     Concen-   Test     Compar.   Test     Extern.   Need
                      Motiv.   tration   Belief   Anxiety   Ease     Attrib.   Ach.
Test Type         2   3.89*    .51       8.01*    8.18*     12.14*   8.05*     .64
Explanation       1   .24      2.15      .43      1.36      1.12     .20       .10
Test Type x
  Explanation     2   .83      .07       1.62     .09       .04      1.26      .76
Within cells    245   (.53)    (.51)     (.46)    (.31)     (.29)    (.39)     (.48)

Note: Values in parentheses represent mean square terms. *p < .05.

Table 8
Means and Standard Deviations for Test Type Effects on Test Motivation Scales

                          Selection Test Type (Job Relatedness)
Motivation            Overt Integrity    Cognitive Ability    Work Sample
Scale                 (Low)              (Medium)             (High)
Test Motivation       2.55a              2.44ab               2.24b
                      (.77)              (.74)                (.61)
Test Belief           2.89a              3.02a                2.60b
                      (.69)              (.72)                (.62)
Comparative Anxiety   2.39a              2.70b                2.45a
                      (.51)              (.57)                (.60)
Test Ease             3.24a              2.84b                3.03b
                      (.52)              (.59)                (.50)
External              2.02a              2.39b                2.22ab
Attributions          (.71)              (.60)                (.52)

Note. Numbers in parentheses represent standard deviations. Row means sharing common letters are not significantly different (p < .05).

The fourth scale, comparative anxiety, reflected how well individuals felt they performed on the test, how anxious they felt, and whether test performance was seen to reflect true abilities. Higher comparative anxiety reflected poorer perceived performance and greater experienced nervousness. Comparative anxiety was found to vary as a function of test type (p < .05); however, the means were not a reflection of job relatedness (see Table 8). Greater comparative anxiety was experienced with the cognitive ability test than with the other two tests. This pattern of means is consistent with the test difficulty manipulation check described earlier, in which the cognitive ability test was seen as more difficult than the overt integrity test. This interpretation is also consistent with the findings for the next scale, test ease, which were very similar to those for the test difficulty manipulation check in that lower numbers reflect greater perceived test difficulty. Test ease demonstrated a significant test type effect (p < .05), with the overt integrity test being perceived as less difficult than the cognitive ability or work sample tests (see Table 8). One observation that would argue against the interpretation that comparative anxiety and test ease both reflected test difficulty effects is that the correlation between these two scales was low (r = -.17).

The external attribution scale reflected perceived external influences on test performance, such as fatigue, preoccupation, test ambiguity, and time pressure. External attributions varied with test type, and again it was the overt integrity and cognitive ability tests that differed (see Table 8). Means for the work sample test fell between the other two and did not differ significantly from either. The final scale was general need achievement, which did not refer to the selection test in particular or to test taking in general, but rather to general motivation. General need achievement was not systematically related to test type (F < 1).

In summary, job relatedness effects were found for five of the seven scales of the test attitude survey. However, examination of the means indicated that only two of these five effects truly varied in a manner consistent with the job relatedness manipulation and Hypothesis 3a. Effects for two of the other scales appeared to reflect the test difficulty confound that accompanied the job relatedness manipulation.
Self-esteem and self-efficacy. Self-esteem and self-efficacy were both expected to be influenced by the selection decision outcome (Hypotheses 4a and 5a). In addition, interactions were predicted between selection system job relatedness and selection decision outcome (Hypotheses 4b and 5b), and between explanation and selection decision outcome (Hypotheses 4c and 5c). To test these predictions, self-esteem was assessed both globally and affectively, and self-efficacy was assessed along both quantity and quality work performance dimensions. Given the reasonable independence of the two self-esteem measures (r = .43), separate 3 (job relatedness) x 2 (explanation) x 2 (selection decision) ANOVAs were used to assess the predicted effects. The results of these analyses are summarized in Table 9.

The global self-esteem measure failed to demonstrate any significant main effects or interactions. In particular, the hypothesized outcome effect and the two hypothesized interactions did not even approach significance, indicating that global self-esteem was not influenced by the manipulations of this study.

Table 9
Analysis of Variance Summary for Self-Esteem and Self-Efficacy

                                         F Values for Dependent Variables
                                       Global        Affective      Self-Efficacy
Source                             df  Self-Esteem   Self-Esteem    Quantity   Quality
Test Type                           2  .13           2.40*          1.84       .70
Explanation                         1  .03           .06            .03        .10
Outcome                             1  1.58          4.06**         .43        1.64
Test Type x Explanation             2  .05           .64            2.20       .80
Test Type x Outcome                 2  .06           1.87           7.35**     6.14**
Explanation x Outcome               1  .11           1.37           .13        .13
Test Type x Explanation x Outcome   2  1.92          1.10           .66        1.48
Within cells                      245  (.34)         (.27)          (.87)      (.86)

Note: Values in parentheses represent mean square terms. *p < .10. **p < .05.

Affective self-esteem demonstrated the hypothesized main effect of selection decision outcome (F(1,245) = 4.06, p < .05), with rejected individuals experiencing lower affective self-esteem (M = 3.19, SD = .59) than hired individuals (M = 3.32, SD = .44). Neither of the hypothesized interactions was significant (ps > .05); however, a job relatedness main effect was marginally significant (F(2,245) = 2.40, p < .10). The means for the different selection test conditions were in the direction of the hypothesis for low self-esteem individuals. That is, self-esteem decreased as job relatedness increased (M_OI = 3.33, SD = .51; v. M_CA = 3.26, SD = .58; v. M_WS = 3.16, SD = .47).

The two self-efficacy measures were strongly correlated (r = .65), so a 3 (job relatedness) x 2 (explanation) x 2 (selection decision) MANOVA was used to initially assess the hypothesized effects. Contrary to Hypothesis 5a, the MANOVA failed to demonstrate a main effect of selection decision outcome (F(2,244) = 1.34, p > .10). Hypothesis 5b suggested a job relatedness by selection decision interaction, and this hypothesis received support (F(4,490) = 4.45, p < .05). The final hypothesis, which stated an interaction between explanation and selection decision (Hypothesis 5c), was not supported by the results of the MANOVA (F(2,244) = .50, n.s.). The only other significant effect observed was an interaction between job relatedness and explanation (F(4,490) = 2.52, p < .05). The significant interactions were examined more closely with separate 3 (job relatedness) x 2 (explanation) x 2 (selection decision) ANOVAs (see Table 9). Consistent with the results of the MANOVA, a job relatedness by outcome interaction was observed on both the quantity self-efficacy and quality self-efficacy measures (p < .05).
The means for these interactions are summarized in Table 10 and displayed visually in Figure 8 and Figure 9. If one compares the quantity self-efficacy means in Figure 8 with the hypothesized effect depicted in Figure 4, the pattern of results can be seen to be similar to the hypothesis. Tests of simple effects on the quantity self-efficacy measure demonstrated a significant job relatedness effect for rejected applicants (F(2,122) = 5.85, p < .05), with the work sample test leading to lower self-efficacy than the overt integrity test (p < .05). Means for the cognitive ability test fell between these two extremes and did not differ significantly from either. Among accepted applicants, the trend was toward higher self-efficacy for those given the work sample test than for those given the other tests (F(2,128) = 2.89, p < .10). Tests for simple selection decision effects demonstrated that the hired and rejected groups differed with the work sample test (t(73) = 3.07, p < .05) but not with the cognitive ability test (t(82) = .18, n.s.). Although the difference between the hired and rejected groups approached significance with the overt integrity test (t(95) = -1.96, p < .10), this effect was in the opposite direction to the prediction. It appears that applicants rejected with the overt integrity test may have demonstrated higher self-efficacy for performance quantity than those accepted with the overt integrity test.

The pattern of means for the quality self-efficacy measure is also very similar to Hypothesis 5b (cf. Figure 9 with Figure 4). Among hired applicants, a significant job relatedness simple effect (F(2,128) = 4.23, p < .05) was primarily a result of higher self-efficacy for quality in the work sample test group than in the cognitive ability test group. The overt integrity test group did not differ from either of the other two groups.

Table 10
Means and Standard Deviations for Test Type by Selection Decision Interaction on Self-Efficacy Measures

                                 Selection Test Type (Job Relatedness)
                 Selection    Overt Integrity    Cognitive Ability    Work Sample
                 Decision     (Low)              (Medium)             (High)
Self-Efficacy    Hire         -.04a              -.15ab               .29b
Quantity                      (.81)              (.85)                (.95)
                 Reject       .30c               -.11acd              -.45d
                              (.91)              (.97)                (1.13)
Self-Efficacy    Hire         .04ab              -.15a                .39b
Quality                       (.82)              (.96)                (.80)
                 Reject       .13a               -.01ac               -.40c
                              (.93)              (1.02)               (1.05)

Note. Numbers in parentheses represent standard deviations. In comparisons within columns and rows, means sharing a common letter are not significantly different (p < .10).

[Figure 8: test type by selection decision interaction on quantity self-efficacy. Figure 9: test type by selection decision interaction on quality self-efficacy.]

In the first work session, participants who completed the cognitive ability test coded more manuscripts than those who completed either the overt integrity or work sample tests. Therefore, contrary to Hypothesis 6a, the test type effect on performance quantity did not appear to be a function of job relatedness, and instead was more similar in form to the test difficulty effect discussed with regard to the manipulation check and the test motivation measures. That is, the cognitive ability test was perceived to be the most difficult test but was associated with the highest work performance quantity. In addition to the test type effects, an explanation by work session interaction was observed (F(1,124) = 4.25, p < .05). The means for this effect are summarized in Table 13 and appear to indicate that performance quantity was higher in the first work session among those provided an explanation than among those provided no explanation.
However, in the second work session, performance quantity appeared to be slightly lower among those in the explanation condition. Tests of simple effects failed to demonstrate significant effects for either the first (t(128) = 1.50, p > .05) or the second work session (t(128) = .61, n.s.). As such, the mean differences in the first work session cannot be taken as support for Hypothesis 6b.

Table 13
Means and Standard Deviations for Explanation by Session Interaction on Performance Quantity

                  Explanation for the Selection System
Work Session      No Explanation      Explanation
Session 1         5.88                6.30
                  (1.51)              (1.66)
Session 2         10.24               9.98
                  (2.57)              (2.26)

Note. Numbers in parentheses represent standard deviations.

The final significant effect on performance quantity was a strong work session effect (see Table 11). On average, participants coded 6.08 (SD = 1.59) manuscripts in the first session and 10.12 (SD = 2.41) manuscripts in the second session. However, when considering these mean differences, it is important to recall that the second session lasted 105 minutes while the first session lasted only 75 minutes. If the second session mean is corrected for the length of the session, the mean number of manuscripts coded in the second session per 75 minutes is 10.12 x (75/105) = 7.23. It appears, then, that participants may have been coding more quickly in the second session than in the first. This is likely a practice effect and is of little theoretical interest.

In terms of performance quality, Hypothesis 6a did not receive support, as the ANOVA on performance quality failed to demonstrate a test type effect (see Table 11). A main effect was observed for the explanation manipulation (F(1,124) = 4.34, p < .05); however, contrary to Hypothesis 6b, the means were in the direction of less accurate performance in the explanation condition. Participants who were not offered an explanation made fewer errors per manuscript (M = 2.01, SD = .74) than those offered an explanation (M = 2.31, SD = .83). Thus, the procedural manipulation of explanation, which had no effect on perceived fairness, had a negative effect on performance quality. The only other significant finding with regard to performance quality was a work session effect. Participants made more coding errors per manuscript in the second work session (M = 2.26, SD = .84) than in the first work session (M = 2.07, SD = .98). Rather than becoming more accurate with practice, participants appeared to become less accurate. While this may seem indicative of a quantity/quality tradeoff, in which performance quantity increased and performance quality decreased in the second session, there was no correlation between the performance quantity and quality measures to support a tradeoff hypothesis.

Job satisfaction. The final dependent measure in this study was a perceptual measure of job satisfaction collected at the end of the second work session. Satisfaction was hypothesized to be directly related to the job relatedness of the selection system (Hypothesis 7a). It was also hypothesized that satisfaction would be higher among those offered an explanation than among those offered no explanation (Hypothesis 7b). A 3 (job relatedness) x 2 (explanation) ANOVA was used to test these hypotheses, and the results are summarized in Table 14.
The hypothesized test type effect was marginally significant (F(2,125) = 2.68, p < .10), with the means in the direction of higher satisfaction in the work sample (M_WS = 3.53, SD = .57) and overt integrity test conditions (M_OI = 3.46, SD = .62) than in the cognitive ability test condition (M_CA = 3.24, SD = .60). Because the effect did not reach conventional significance, it is difficult to determine whether the trend among these means was a function of the job relatedness manipulation or of the test difficulty confound. The hypothesized explanation effect was significant (F(1,125) = 3.89, p < .05); however, the means were in the opposite direction to the hypothesis. Participants expressed greater satisfaction when they received no explanation (M = 3.52, SD = .59) than when they received an explanation (M = 3.31, SD = .60). While contrary to Hypothesis 7b, this negative effect is consistent with the finding for performance quality.

Table 14
Analysis of Variance Summary for Job Satisfaction

Source                    df    F Values for Satisfaction
Test Type                  2    2.68
Explanation                1    3.89*
Test Type x Explanation    2    1.03
Within cells             125    (.35)

Note: Values in parentheses represent mean square terms. *p < .05.

Supplementary Analyses of the Selection System Fairness Measure

Although the selection system fairness measure did not provide any of the dependent measures in this study, participants were asked to complete it in order to permit an initial examination of its psychometric properties. The 67 items were developed to represent 13 procedural and distributive scales. Measure properties were examined with exploratory factor analyses and varimax rotations, as well as through examination of the a priori scales via internal consistencies, corrected item-scale correlations, and scale intercorrelations.

A principal factors analysis was conducted on the correlation matrix of the 67 items of the selection system fairness measure. Examination of the unrotated eigenvalues suggested an 18-factor solution using an eigenvalue-greater-than-one criterion and an 8-factor solution using a scree criterion. As such, three solutions were rotated: the a priori 13-factor solution, an 18-factor solution, and an 8-factor solution. The varimax rotation failed to converge for the 18-factor solution, but solutions were obtained for 13 and 8 factors. The 13-factor solution accounted for 47 percent of the variance. The rotated factor matrix for the 13-factor solution is presented in Table 15. For ease of interpretation, items were clustered by their highest loading on the factors. As can be seen, the first factor was primarily comprised of the seven equity items. Two opportunity to perform items also loaded on this factor. The second factor appeared to be mainly an information or explanation factor, as six selection information items and two feedback items loaded most heavily on it. Additionally, two opportunity to perform items were also found loading on this factor. The third factor did not appear to represent a clear content domain, as items from the procedural domains of honesty, selection information, two-way communication, bias suppression, and consistency, as well as the distributive domain of special needs, all loaded on this factor. The fourth factor was clearly a job relatedness factor, while the fifth factor was a fakability factor.
Table 15
Rotated Factor Matrix for the 13-Factor Solution

Item  Content         FACTOR 1  FACTOR 2  FACTOR 3  FACTOR 4  FACTOR 5  FACTOR 6  FACTOR 7
F52   Equity          .756      .080      .084      .205      .052      .029      .035
F51   Equity          .721      .104      .091      .164      .018      .103      .094
F30   Equity          .643      .121      .161      .179      .015      .125      .041
F65   Equity          .625      .130      .059      .032      -.230     .030      .206
F9    Opp. Perf.      .601      .104      .003      .080      .086      -.085     .105
F25   Equity          .533      .142      .169      .276      .012      .122      -.014
F37   Opp. Perf.      .515      .156      .123      .140      .138      -.015     .192
F21   Equity          .512      .275      .172      .062      .131      .042      .263
F44   Equity          .492      .176      .176      .134      .039      .123      .132
F40   Sel. Info.      .095      .752      .107      .072      .118      -.073     -.024
F39   Sel. Info.      .076      .726      .142      .039      .019      .012      -.031
F54   Sel. Info.      .049      .607      -.027     .138      -.002     .073      -.044
F22   Sel. Info.      .139      .590      .184      .308      .041      .032      -.091
F41   Feedback        .133      .539      .074      .098      .049      -.098     .001
F63   Opp. Perf.      .235      .463      .006      .340      .149      .023      -.029
F64   Sel. Info.      .232      .447      .149      .174      .021      -.047     .066
F3    Feedback        .233      .434      .327      .139      -.000     -.016     .136
F23   Opp. Perf.      .154      .324      .163      .024      .068      .049      .049
F33   Sel. Info.      .064      .284      .251      .079      .070      .078      -.043
F7    Honesty         .144      .229      .557      .096      .115      -.003     .113
F2    Honesty         .121      .203      .531      .108      .166      .012      -.004
F10   Sel. Info.      .033      .116      .528      -.130     .080      .184      .094
F15   Special Needs   -.081     -.054     .526      .063      .117      .275      -.004
F34   Honesty         .100      .183      .500      .259      .061      .043      .107
F11   Communication   .184      .359      .465      .021      .058      .047      .115
F6    Bias Suppres.   .099      .015      .462      -.063     .139      .368      -.048
F4    Interpersonal   .161      .015      .427      .151      -.042     .187      .177
F5    Consistency     .068      .011      .416      .117      .037      .186      .042
F12   Communication   .142      .282      .409      .184      -.157     .122      .097
F13   Consistency     .139      .141      .391      .344      .000      .093      .123
F19   Job Relate.     .139      .128      .022      .762      .157      .031      .018
F53   Job Relate.     .125      .133      .110      .715      .200      -.094     -.034
F18   Job Relate.     .177      .061      .118      .611      .069      .022      .083
F8    Job Relate.     .116      .277      .131      .593      .017      .053      .075
F55   Job Relate.     .323      .170      .065      .464      .289      .006      -.017
F17   Opp. Perf.      .267      .219      -.107     .341      .177      -.000     -.116
F45   Fakability      .032      .177      .131      .120      .680      -.039     -.069
F32   Fakability      .086      .013      -.054     .110      .615      .071      -.204
F58   Fakability      -.076     -.044     .014      .069      .592      .114      -.057
F16   Fakability      .124      .130      .166      .096      .590      -.204     .018
F60   Fakability      -.044     -.044     .110      .117      .474      .196      -.002
F20   Bias Suppres.   -.062     .268      .302      .142      .390      .057      .183
F1    Job Relate.     .198      .051      .070      .253      .262      .060      .070
F67   Special Needs   -.036     -.030     .139      -.010     .033      .626      .174
F48   Special Needs   .114      -.033     .138      -.021     .041      .576      .196
F61   Consistency     .000      -.154     .172      .045      -.000     .408      .095
F29   Feedback        .254      .255      .198      .059      .058      .342      .168
F28   Feedback        .229      .028      .163      .023      .199      .301      .039
F46   Consistency     .218      .090      .203      .003      .065      .295      .187
F35   Special Needs   .148      .121      .065      .000      .198      .251      .177
F50   Difficulty      .114      .019      .064      .110      .027      .206      .654
F49   Difficulty      .131      -.016     -.041     .079      -.170     .242      .611
F36   Difficulty      .194      -.079     .143      -.182     -.164     .006      .587
F38   Difficulty      .066      -.056     .077      .023      -.026     .028      .515
F42   Honesty         .107      .279      .235      .032      .087      4         .046
F66   Honesty         .069      .167      .100      .130      .084      9         .027
F43   Communication   .167      .020      .240      .015      -.009     .076      .041
F59   Interpersonal   .046      .113      .135      .061      -.102     -.009     -.030
F47   Interpersonal   .074      .150      .084      -.071     .065      -.079     .054
F27   Consistency     .091      .126      .211      .248      .076      .324      -.057
F26   Special Needs   .122      .214      .012      .129      -.042     -.137     -.051
F14   Bias Suppres.   -.041     .078      .066      .171      -.041     .307      -.085
F24   Interpersonal   .266      .118      .109      .084      .125      -.010     .113
F31   Bias Suppres.   .106      .172      .206      .281      .305      .161      .213
F56   Communication   .066      .255      .009      .064      -.053     -.161     -.056
F62   Sel. Info.      .060      .249      .046      .036      -.001     -.133     .017
F57   Bias Suppres.   .000      -.013     .096      .044      .308      .264      .029
Table 15 (cont'd)

Item   FACTOR 8  FACTOR 9  FACTOR 10  FACTOR 11  FACTOR 12  FACTOR 13
F52    -.007     .042      .076       .082       -.187      -.026
F51    -.033     -.044     -.054      -.112      -.028      -.097
F30    .100      .044      .212       .280       .141       .117
F65    .063      -.020     .121       .035       -.089      .064
F9     .178      -.011     -.065      -.030      .217       .010
F25    -.013     .149      .302       -.136      .179       -.026
F37    .101      .157      .044       .125       .291       .204
F21    .002      .200      -.080      .191       -.037      .209
F44    .182      .098      .065       .407       .033       .137
F40    .126      .129      .004       -.074      .054       .103
F39    .019      .035      .061       .073       .036       -.034
F54    .000      .016      -.021      .021       -.011      .119
F22    .063      .039      .074       .055       -.010      .039
F41    -.009     .096      .253       .193       .081       -.097
F63    -.019     .126      -.029      .001       .277       -.001
F64    .288      .126      -.119      -.127      .028       .264
F3     .117      -.037     .275       .133       .012       -.038
F23    .047      .057      .011       -.068      .153       .251
F33    .237      .037      -.036      .009       -.198      -.176
F7     .128      -.027     -.048      -.020      .022       .008
F2     .085      .135      .062       -.104      -.079      .012
F10    .271      .072      .256       -.059      .005       .115
F15    -.141     .328      -.058      .055       .055       .094
F34    .401      .071      -.034      .121       .279       -.092
F11    .166      .106      .174       .196       .051       .148
F6     -.044     .009      .022       .051       .158       .090
F4     .038      .063      -.030      .293       -.082      -.053
F5     .172      .013      -.062      .120       -.154      -.046
F12    .015      .114      .194       .246       .143       .197
F13    .065      .076      -.166      .130       .069       .273
F19    -.047     .030      .013       .093       -.065      .017
F53    .070      .110      -.052      -.072      .113       -.022
F18    .158      -.129     -.030      .187       -.060      .150
F8     .098      -.031     .125       -.061      .069       -.054
F55    -.010     .042      .146       .065       .011       .014
F17    .031      .112      .041       -.090      .242       .057
F45    .057      .017      -.118      .149       -.042      .051
F32    .114      -.008     .074       -.051      .048       -.043
F58    -.045     .115      .021       -.049      .028       .022
F16    -.029     -.074     -.115      -.041      .038       .112
F60    .007      -.123     .160       .039       -.170      -.050
F20    .107      -.001     -.164      .213       .215       -.166
F1     .135      -.064     .145       .089       -.018      .011
F67    .122      -.042     -.088      -.043      -.047      .073
F48    -.000     -.073     -.092      .063       -.103      -.133
F61    .400      .149      -.122      .010       -.158      .042
F29    .129      -.023     .084       .143       .218       -.018
F28    .191      -.009     -.120      .214       .006       -.133
F46    .046      .211      .075       .056       -.206      -.076
F35    .098      .033      .060       .202       .134       -.025
F50    -.002     -.007     .014       .134       .026       -.081
F49    .049      .018      .052       -.222      -.026      .067
F36    -.023     .088      -.057      .053       -.143      .132
F38    .068      -.040     .022       .050       .043       .008
F42    .453      .114      .090       .198       .224       .170
F66    .451      .022      .136       -.021      .022       .060
F43    .403      .154      -.000      .267       .024       .145
F59    .086      .632      .030       .030       .022       .050
F47    .068      .607      .145       .018       .104       -.154
F27    .142      .348      .085       .329       -.033      -.013
F26    .049      .135      .608       -.033      .089       -.037
F14    .003      -.025     -.366      -.004      .201       -.004
F24    .176      .190      .359       .223       .039       .175
F31    .224      .054      -.071      .337       .064       -.077
F56    .070      .180      .051       .049       .398       .094
F62    .105      -.112     -.012      -.002      .026       .413
F57    .175      -.039     .073       .096       .063       .330

The sixth factor was again basically an uninterpretable mixture of special needs, feedback, and consistency items. The last clearly interpretable factor was the seventh, which was heavily loaded on by the four difficulty items. The remaining factors contained few items and were not readily interpretable. In summary, what emerged from the 13-factor solution were five fairly clear factors representing the following scales: equity, selection information, job relatedness, fakability, and difficulty. Quite interestingly, these five scales were also the only procedural and distributive domains manipulated in this study. The test type manipulation likely influenced perceptions of job relatedness, fakability, and difficulty; the explanation manipulation was designed to influence perceptions of selection information conveyance; and the outcome manipulation influenced equity.
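For reference, the exploratory factor analysis just described (principal factors on the 67-item correlation matrix, varimax rotation, eigenvalue and scree criteria) can be sketched as follows. The file name and `items` DataFrame are hypothetical, and the factor_analyzer package merely stands in for the software actually used; this is an illustrative sketch, not a reproduction of the original analysis.

```python
# Principal-axis factoring of the 67 fairness items with varimax rotation.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("fairness_items.csv")  # hypothetical 260 x 67 item matrix

# Unrotated eigenvalues, for the eigenvalue > 1 and scree criteria.
fa = FactorAnalyzer(rotation=None, method="principal")
fa.fit(items)
eigenvalues, _ = fa.get_eigenvalues()
print((eigenvalues > 1).sum(), "factors retained by the eigenvalue > 1 rule")

# Varimax-rotated 13-factor solution; loadings_ parallels Table 15.
fa13 = FactorAnalyzer(n_factors=13, rotation="varimax", method="principal")
fa13.fit(items)
loadings = pd.DataFrame(fa13.loadings_.round(3), index=items.columns)
print(loadings)
```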
Other procedural rules, especially those related to interpersonal treatment, were not manipulated, nor was the special needs rule. As such, variance in these scales likely represented random variance, or variance due to individual differences rather than to characteristics of the selection system.

Rotation of the eight-factor solution produced very similar results and accounted for 39.5 percent of the variance. In this solution, the first factor was a mixture of 22 items from the interpersonal domains as well as consistency, honesty, and special needs. The second through sixth factors were comprised of equity, selection information, job relatedness, fakability, and difficulty items, with loading patterns similar to those of the 13-factor solution. That is, the five clear and interpretable factors related to the manipulations of the study emerged in the 8-factor solution as they had in the 13-factor solution. Of the remaining two factors, the second from last was somewhat interpretable, as it was loaded with three special needs items and a bias suppression item. The final factor had only two items loading on it, a communication item and a selection information item.

In addition to the use of exploratory factor analysis to examine the properties of the selection system fairness measure, I examined internal consistency estimates, item-scale correlations, and scale intercorrelations for the 13 a priori factors. Table 16 summarizes the number of items for each scale as well as the means, standard deviations, coefficient alphas, and intercorrelations. The five subscales that consistently emerged from the factor analyses also demonstrated the best internal consistencies; alphas ranged from .71 to .85. The remaining scales demonstrated poor to barely reasonable internal consistencies, with alphas ranging from .49 to .70. Intercorrelations of the scales ranged from r = -.16 to r = .61 (smallest absolute r = .01), with the majority of the intercorrelations in the .20 to .50 range.

Clearly, what the results of both the factor analyses and the a priori scale statistics demonstrate is that the current sample and the hiring situation they experienced did not produce sufficient systematic variance on all scales to clearly examine the fairness measure's properties. Subsequent research will have to administer this measure to a sample that has experienced a variety of selection procedures and situations. In this way, meaningful variance may be obtained for all the scales of the fairness measure, and it should be possible to examine the factor structure and psychometric properties of the fairness scales more thoroughly. However, the current sample can be used to refine the five scales on which systematic variance was observed. Suggestions for such refinement follow.

Table 16
Internal Consistencies, Means, and Standard Deviations for the Selection System Fairness Measure

Scale               Items    M     SD     1     2     3     4     5     6     7     8     9    10    11    12    13
1. Job Related         6    3.11   .67  (.81)
2. Opp. Perf.          5    2.90   .65   .50  (.69)
3. Fakability          5    3.08   .76   .35   .24  (.76)
4. Consistency         5    3.94   .44   .36   .27   .18  (.60)
5. Feedback            4    3.27   .57   .40   .46   .19   .43  (.59)
6. Sel. Info.          8    3.12   .64   .40   .51   .18   .33   .56  (.78)
7. Honesty             5    3.69   .56   .42   .42   .24   .52   .55   .56  (.70)
8. Interpersonal       4    3.49   .49   .36   .30   .08   .43   .42   .33   .43  (.56)
9. Communication       4    3.27   .55   .40   .46   .06   .46   .55   .52   .57   .48  (.62)
10. Bias Suppres.      5    3.66   .52   .35   .33   .39   .48   .39   .34   .52   .25   .35  (.57)
11. Difficulty         4    3.75   .58   .06   .11  -.16   .25   .16   .01   .13   .17   .14   .12  (.71)
12. Equity             7    3.14   .73   .46   .61   .09   .40   .52   .39   .40   .44   .45   .26   .30  (.85)
13. Special Needs      5    3.87   .45   .03   .01   .14   .42   .16   .02   .23   .11   .11   .45   .27   .10  (.49)

Note: rs > .12 significant at p < .05. Numbers in parentheses are internal consistency estimates.
With regard to the job relatedness scale, the factor analysis displayed in Table 15 suggests that item F1 does not load as heavily on the job relatedness factor, and in fact does not load higher than .26 on any factor. This item also demonstrated the lowest item-scale correlation, with a corrected r = .39, while the remaining items ranged from r = .55 to r = .68. The alpha for the job relatedness scale with this item deleted increases slightly, to α = .82. Given this evidence, item F1 could be dropped from the measure. All of the fakability items loaded heavily on the same factor and all demonstrated high item-scale correlations, so this scale can be left intact. In the selection information scale, items F10 and F62 both loaded more heavily on factors other than the one defined by the rest of the selection information items, and both had corrected item-scale correlations below r = .30, so these two items could likely be dropped. The resulting alpha (.80) is slightly higher. All difficulty scale items loaded on the same factor and demonstrated high item-scale correlations, so this scale need not be modified. Finally, all the equity items loaded heavily on the same factor and all corrected item-scale correlations were above r = .55, so this scale could also be left intact. The equity scale could probably be shortened by one or two items to reduce the length of the selection system fairness measure without reducing the internal consistency of the scale by more than a couple of points. This may be a desirable step, as a number of participants complained about the length of the questionnaire and the redundancy of the items. This problem will be partially addressed when the manipulation check items are not included with the other 67 items, but any action toward reducing the length of the measure would likely be advantageous.

In summary, 5 of the 13 scales of the selection system fairness measure demonstrated fairly clear convergent and discriminant properties. The evidence for these scales was encouraging. The remaining scales did not emerge from the factor analyses and demonstrated somewhat poor internal consistencies. The problem is likely a function of the lack of systematic variance in objective selection system characteristics for the remaining 8 scales. Further investigation will have to include a sample with more diverse experiences.

CHAPTER 6

DISCUSSION

The main purpose of this study was to examine the influence of procedural and distributive justice factors in a selection situation, in terms of effects on perceived fairness and a number of other individually and organizationally relevant outcomes. Predictions were outlined in 22 formal hypotheses, and the results were organized as tests of those hypotheses. In order to clarify what was learned in this study, the discussion begins with a summary of the results. Issues associated with job relatedness, explanation of the selection process, and the interactions between procedural and distributive justice are then discussed. This dissertation concludes with a discussion of limitations and future research directions.
Summary of Results

The main variables in this study can be grouped into four sets of outcomes: perceived fairness, test motivation, self-concepts, and post-hire attitudes and behaviors. Although application/recommendation intentions can be seen as a distinct outcome that represents the ability to recruit applicants to an organization, discussion of this measure is included with fairness perceptions because of the similarity in hypotheses. For each of the sets of outcomes, results are summarized in terms of support, or lack thereof, for the hypotheses. As a means of facilitating this summary, the hypotheses and the relative support for each are summarized in Table 17.

Perceived fairness. Five hypotheses were generated for each of the perceived fairness measures (see Table 17). Main effects were predicted for hiring decision, selection test job relatedness, and explanation for the selection system. Interactions were also predicted between job relatedness and hiring decision and between explanation and hiring decision. These hypotheses were tested for both perceived procedural fairness and perceived outcome fairness. The first and second hypotheses were supported for both fairness measures. Perceived procedural and outcome fairness were greater among hired applicants than among rejected applicants and increased as the job relatedness of the selection procedure increased. Additionally, hiring decision had a greater impact on outcome fairness than on procedural fairness, and job relatedness had a greater impact on procedural fairness than on outcome fairness. This finding offers support for the perceptual discriminability of procedural and outcome fairness. Although participants perceived the explanation manipulation, the third hypothesis was not supported, as explanation had no effect on either procedural or outcome fairness perceptions. Additionally, the hypothesized interaction between outcome and explanation was not supported for either fairness perception. The interaction between job relatedness and hiring decision was significant at the p < .10 level for perceived outcome fairness, and the means were in the predicted pattern. The possible need for greater power when testing this interaction hypothesis at the p < .05 level is discussed later.

Table 17
Summary of Hypotheses and Results

                                    Hypothesized effect
Dependent                                                Outcome x     Outcome x
Variable              Outcome   Job Related   Explan.    Job Related   Explan.
Procedural Fairness   Sup.      Sup.          N.S.       N.S.          N.S.
Outcome Fairness      Sup.      Sup.          N.S.       Sup.          N.S.
Recommendations       N.S.      N.S.          N.S.       N.S.          Sup.
Test Motivation                 Sup.          N.S.
Concentration                   N.S.          N.S.
Belief in Tests                 Sup.          N.S.
Compar. Anxiety                 ??            N.S.
Test Ease                       ??            N.S.
External Attrib.                ??            N.S.
Need Achievement                N.S.          N.S.
Global Self-esteem    N.S.                               N.S.          N.S.
Affect Self-esteem    Sup.                               N.S.          N.S.
Self-efficacy Qnt.    N.S.                               Sup.          N.S.
Self-efficacy Qual.   N.S.                               Sup.          N.S.
Perform. Qnt.                   ??            ??
Perform. Qual.                  N.S.          ??
Job Satisfaction                ??            ??

Note. Sup. = hypothesis was supported; N.S. = hypothesis not supported; ?? = significant effects, but different than hypothesized; blank = no hypothesis generated.

The application/recommendation intentions measure was intended to capture the extent to which the selection system might dissuade individuals from applying for similar positions in the future, or from recommending application to their friends. The measure should relate to an organization's ability to recruit applicants. Somewhat surprising was the finding that application/recommendation intentions were only weakly correlated with fairness perceptions.
Additionally, the first four hypotheses for this measure, which mirrored the hypotheses for fairness perceptions, were not supported. However, the fifth hypothesis, which had the greatest importance from an applied perspective, was supported through a significant interaction between explanation and selection decision. The pattern of the means was consistent with the hypothesis in that explanation led to higher application/recommendation intentions than no explanation among rejected applicants, but had no effect on hired applicants.

Test motivation. Hypotheses for test motivation included main effects for job relatedness and explanation (see Table 17). The job relatedness hypothesis received some support, while the explanation hypothesis was not supported. Although test type effects were found on five of the seven dimensions of test motivation, only the dimensions related to test taking motivation and belief in tests demonstrated clear job relatedness effects. The external attributions dimension demonstrated an effect that may be attributable to job relatedness, while the effects for the comparative anxiety and test ease dimensions were clearly more consistent with differences in the perceived difficulty of the three tests. That is, on these final two dimensions, the moderately job related cognitive ability test was perceived to be more difficult than the overt integrity test and led to higher comparative anxiety than the other two tests.

Self-concepts. Self-concept was assessed in terms of self-esteem and self-efficacy, and three hypotheses were generated for each construct (see Table 17). Main effects were predicted for selection decision, with higher self-concepts expected among hired individuals than among rejected individuals. Interactions were also predicted between job relatedness and selection decision, as well as between explanation and selection decision. The selection decision main effect hypothesis was supported for the affective self-esteem measure, but not for the global self-esteem or self-efficacy measures. The hypothesized interaction between job relatedness and selection decision did not receive support with the self-esteem measures but was supported with both the quality and quantity self-efficacy measures. The results demonstrated that, as the job relatedness of the selection test increases, the self-efficacy of hired applicants increases, while the self-efficacy of rejected applicants decreases. The implications of this "mixed blessing" pattern of results will be discussed in detail later. No interactions were found between explanation and the selection decision.

Post-hire attitudes and behaviors. The final hypotheses were main effects of the two procedural manipulations on work performance and job satisfaction (see Table 17). Work performance was assessed along quality and quantity dimensions and was tabulated separately for the first and second sessions. Performance quantity was found to vary as a function of test type in the first work session but not the second work session. However, contrary to predictions, work performance did not increase with job relatedness but instead was highest in the medium job relatedness condition, which also happened to be the selection test with the highest perceived comparative anxiety and test difficulty. Performance quantity also demonstrated an interaction between explanation and work sessions.
The means appeared to indicate that performance quantity was higher in the first session among those applicants offered an explanation than among those offered no explanation, with no difference or a slight negative effect in the second session; however, simple effects tests failed to demonstrate any significant explanation effects. Performance quality demonstrated an explanation effect, although the effect was in the opposite direction to that predicted. Applicants who were given an explanation prior to hiring made more performance errors than those offered no explanation. Similarly, the only significant effect for job satisfaction was an explanation effect in the direction opposite to that hypothesized. Participants offered an explanation expressed lower job satisfaction than those offered no explanation. Given that these effects and an interaction on application/recommendation intentions were the only explanation effects found in this study, and all the post-hire explanation effects were in the opposite direction, the adequacy of the explanation offered is discussed following the discussion of job relatedness effects.

Job Relatedness

Job relatedness and the related concept of face validity have long been considered desirable features of selection procedures. Although prior research has examined participant reactions to highly job related tests (Dodd, 1977; Schmidt et al., 1977) and has examined the perceived job relatedness of different types of selection procedures (Smither & Pearlman, 1991, April), the current study represents the first attempt to manipulate job relatedness systematically in a selection context. In terms of perceived fairness, the results are consistent with the hypotheses: the greater the job relatedness of a selection procedure, the greater the perceived fairness. This effect is strongest in terms of perceived procedural fairness but is also prevalent in perceptions of outcome fairness.

Perhaps of even greater interest is the fact that job relatedness influences a number of other individual and organizational outcomes. Test taking motivation can have a direct impact on test performance (Arvey et al., 1990), as well as an indirect impact on the validity of the selection test (Schmit & Ryan, 1992, April). The current study demonstrates that test taking motivation is influenced by the job relatedness of the selection procedure, such that motivation is highest with a highly job related test. With the current interest in paper-and-pencil cognitive ability tests, these results are clearly important. Test developers may find it advantageous to search for alternative selection tests that appear more job related than the typical cognitive ability test. Alternately, it may be possible to modify the items of current cognitive ability tests to make them appear more job related (Smither & Pearlman, 1991, April) while not changing the constructs being measured (Hattrup, Schmitt, & Landis, 1992).

Overt integrity tests appear, at least in the current context, to be most problematic in terms of test motivation. This may be true of overt integrity tests in general, or it may be a function of the context in which the test is used. An overt integrity test may appear highly job related in some selection contexts. Additionally, the overt integrity test is somewhat different from the other two tests used in this study because it tends to assess typical rather than maximal performance.
One could argue that test motivation should be lower with a typical as opposed to a maximal performance test. A second dimension of Arvey et al.'s (1990) test attitudes survey that was influenced by job relatedness was belief in tests. Applicants hold stronger beliefs that the specific selection test, as well as tests in general, are valid and useful when the tests are more highly job related. This has practical implications for litigation of selection discrimination cases, which will likely soon be decided by jury trials, as required by the 1991 Civil Rights Act. Statistically naive jurors may be swayed more strongly by the perceived job relatedness, validity, and usefulness of a test than by its actual statistical properties. Self-efficacy was also influenced by the job relatedness of the selection test, with effects that were quite different for rejected versus hired applicants. Implications of this finding are discussed later. Finally, job relatedness, or at least differences in selection test types, can influence post-hiring work behavior and possibly job satisfaction.

Unfortunately, a potential confounding of the job relatedness manipulation with test difficulty variation across the different test types limits some of the conclusions that can be drawn with regard to job relatedness in the current study. The perceived difficulty of the cognitive ability test exceeded that of the overt integrity test, and although the difference was not significant, the perceived difficulty of the work sample test was more similar to that of the overt integrity test than to that of the cognitive ability test. While the mean differences were much smaller for the perceived difficulty measure than for the perceived job relatedness measure, differences between the overt integrity and cognitive ability tests may be partly attributable to test difficulty as well as to job relatedness. Kluger and Rothstein (1991, April) argued that the cognitive demands of a test are one of the main influences on applicants' reactions, with more negative reactions resulting from more demanding tests. An effort was made during pilot testing to equate all three tests on perceived difficulty by reducing the length of the cognitive ability test from 50 to 40 items while keeping the time limit the same. This was not enough of a concession, as differences in perceived difficulty were found; however, it is important to remember that these differences were small, albeit significant. As such, differences in perceived difficulty may have accounted for some of the variance on some of the measures, but certainly cannot completely account for the job relatedness effects.

It may be interesting to examine some of the observed effects of test type on organizational outcomes with an interpretation based on test difficulty. Such a venture is speculative and should be interpreted as an effort to generate hypotheses rather than as an attempt to test hypotheses with the current data, because many other possible explanations exist. That is, there may be other dimensions besides job relatedness and test difficulty on which the three tests varied. Results for two test attitude scales (Arvey et al., 1990), test taking motivation and belief in tests, were consistent with the job relatedness hypotheses. However, greater comparative anxiety was found with the cognitive ability test than with the overt integrity and work sample tests.
As stated earlier, this scale represents a somewhat heterogeneous combination of how anxious applicants felt, how well they felt they performed, and whether performance was seen as a reflection of ability. It would appear that with more difficult tests, applicants feel more anxious, feel they performed less well, and feel that their performance is not a good reflection of their ability. A final issue to consider with regard to the findings from the test attitude scales is that some of the unusual results may be due to inadequacies in the measure itself. A number of the scales of the TAS did not demonstrate reasonable internal consistencies in Arvey et al.'s (1990) research, and the theoretical content of some of the scales is difficult to interpret.

Post-hire attitudes and behaviors also appeared to be influenced by test type in a way that may be indicative of differences in test difficulty. Participants completed a greater quantity of work in the first session when they had been selected with a cognitive ability test than with an overt integrity or work sample test. Before ascribing too much importance to this finding, it should be remembered that the effects of test type on work quantity were short-lived and did not persist beyond the first work session. In any event, it may be possible that the difficulty of the selection process influences how hard an individual will work after being hired. The more an applicant is asked to do in the selection process, the more they may actually do once hired. Additionally, job satisfaction demonstrated means in the same direction as this test difficulty effect, although the results were only marginally significant. It may be that a demanding selection process establishes an atmosphere of high-quantity work that is not particularly satisfying. Of course, these final extensions are purely speculative and would require extensive theoretical development as well as empirical testing.

In future research that uses an experimental paradigm to manipulate job relatedness, it may be worthwhile to manipulate test difficulty as an independent dimension and thereby allow for a cleaner assessment of job relatedness and test difficulty effects. It would also be possible to examine these issues using a survey methodology and the selection system fairness measure developed in this study, as long as a sample and situation were selected in which the dimensions of job relatedness and selection process difficulty were reasonably independent.

Explanation

Offering an explanation or justification for a decision has been shown to be related to procedural fairness perceptions in a variety of situations, including scenario-based recruiting situations (Bies & Shapiro, 1988). A logical extension of this research to the selection domain is in terms of offering information about the selection procedures and their validity, as well as information on scoring and decision making. In the current study, participants were offered an explanation for the particular selection test administered and information on how a hiring decision would be made. This explanation took the form of a one-page written description and a three-sentence verbal reinforcement. A manipulation check demonstrated that greater information was conveyed in the explanation condition than in the no explanation condition.
However, explanation had no effect on perceived fairness, test motivation, or self-concepts; a predicted effect on application/recommendation intentions; and a negative effect on performance quality and job satisfaction. The predicted interaction between the explanation manipulation and the selection decision on application/recommendation intentions indicates that explanation may be one means of "appeasing" rejected applicants. Those who were rejected but offered an explanation were more likely to recommend the project or apply for similar projects than those who were rejected and offered no explanation. This finding is consistent with prior organizational justice research on justification (e.g., Greenberg, 1990a) and is important from an applied perspective.

Given that explanation did work as predicted for application/recommendation intentions, two questions arise: a) Why did explanation not have an effect on many of the variables that were influenced by the job relatedness of the selection test? and b) Why was the effect of explanation negative for post-hire work behaviors and attitudes? In addressing both of these issues, attention should be directed toward the total amount of information provided to applicants, the way in which the explanation was conveyed, and the type of applicants that served as participants in this experiment.

The procedure for this study included providing substantial information on the work task to applicants prior to testing, so that applicants would be able to form reasonable impressions regarding the job relatedness of the selection tests. As such, all applicants received considerable information prior to completing the selection test. While only those in the explanation condition were provided any information on the selection test itself, all applicants may have felt they received reasonable information, and consequently their perceptions of fairness were not influenced by the explanation manipulation.

In terms of the way in which the explanation was conveyed, presenting information on a single sheet of paper may have minimized the impact of this information in comparison to the other information conveyed about the nature of the work task. During telephone interviews I conducted with recent job applicants, the incidents they described in terms of selection information were either extremely negative incidents in which not enough information was provided to successfully complete the selection process, or extremely positive incidents in which much information was provided on the company, the selection process, and in some cases even the questions that would be asked during a selection interview. The explanation manipulation in the current study may have been too minimal to influence fairness perceptions.

A second problem with the manipulation of explanation may have been that it was not believable. In particular, the explanations for the overt integrity test and even the cognitive ability test may have been seen as inaccurate or blatantly false. This may have been especially problematic in the current study, given that the participants were undergraduate students who likely had previous experience with deception in psychology experiments. To these participants, a single written sheet of information may signify a manipulation and thus may have been discredited.
Informal discussions with participants during debriefing did not indicate that suspicion was a problem, but debriefings were conducted in a large group format in which individual participation or expression may have been inhibited. If participants who received the explanation manipulation felt they were deceived, this might have led to the negative effects of explanation on performance quality and job satisfaction. A question arises as to why this negative effect was detected only in hired participants, and only on post-hire measures. The explanation may have been recognized as a deception only after individuals had a chance to experience the coding task firsthand. It may have seemed plausible that an overt integrity test was being used to select "responsible" individuals up until the point when hired participants actually began the coding task. At that point the explanation may have been reinterpreted as a deception, with negative consequences.

All of these possible interpretations of the peculiar results with regard to the explanation manipulation are highly speculative. A final interpretation is that in the selection domain, explanation or justification for a selection decision simply does not influence perceptions of fairness. Under this interpretation, the findings for the post-hire measures are spurious and not likely to be replicated in subsequent research. As this is a harsh and dismal interpretation, I would be inclined to suggest that further research be conducted on explanation using a more substantial manipulation. As in other procedural justice research on justification (e.g., Greenberg, 1990a), the explanation manipulation could involve an interactive discussion of why a selection test is being used and how it will be scored. Alternatively, justification could be offered after the selection decision, with a written or verbal description of how each individual performed in the selection process and why they did not receive a job offer. In this way, the explanation provides justification for the decision rather than for the procedure that led to the decision. This form of justification may be more consistent with prior justice research on justification (Bies & Shapiro, 1988; Greenberg, 1990a).

Interactions Between Procedural and Distributive Justice

Two of the most interesting sets of hypotheses in this study involved the interactions between the procedural manipulations and the selection decision on perceived fairness and on self-concepts. The observed interaction on perceived outcome fairness is considered first, and then the discussion addresses the interactions on self-efficacy.

Perceived outcome fairness. In terms of perceived fairness, prior research has been inconsistent. Organizational justice research by Greenberg (1987a) and Leung and Li (1990) has demonstrated an interaction between procedural justice and outcome favorability on measures of perceived fairness. However, research by Kluger and Rothstein (1991, April) failed to find a similar interaction in a simulated hiring situation, calling into question the applicability of the justice findings to the selection domain. In the current study, an interaction was found between the job relatedness of the selection procedure and the selection decision on perceived outcome fairness, such that job relatedness had a significant positive effect on the outcome fairness perceptions of rejected applicants but no effect for accepted applicants. This finding replicates the results of Greenberg (1987a) in two important ways.
First, the form of the interaction is the same, indicating that, contrary to Kluger and Rothstein's (1991, April) results, procedural justice findings are applicable to the selection domain. Second, the interaction was found on the outcome fairness measure and not on the procedural fairness measure, which is also consistent with Greenberg's findings. Both of these issues highlight the utility of adopting an organizational justice perspective as a means of generating hypotheses with regard to perceived selection system fairness. In addition, at least three aspects of this interaction warrant further discussion: a) the magnitude of the interaction and the power to detect significance, b) resolving the discrepancy with Kluger and Rothstein's findings, and c) practical implications of studying perceived fairness.

A liberal p < .10 level of significance was adopted when examining interactions in this study because of the low power associated with testing these hypotheses. In particular, analysis of variance is most powerful when testing crossover interactions and is far less powerful for interactions involving slopes of the same sign. That is, the interaction hypothesis was not that job relatedness would have a positive effect for rejected individuals and a negative effect for hired individuals; rather, it was predicted that job relatedness would have less of an effect (albeit still positive) for hired as compared to rejected individuals. In the analysis section, the power to detect the interaction between job relatedness and selection decision on perceived outcome fairness was presented. It may also be useful to examine the sample size that would be required to detect a similar effect with .80 power at a p < .05 significance level. The effect size of the interaction between job relatedness and selection decision on outcome fairness was eta-squared = .019, or f = .137. Cohen (1988) describes f = .10 as a small effect and f = .25 as a medium effect, so it can be seen that the interaction represented a fairly small effect. As indicated previously, the power to detect this effect size at the p < .10 level was .61, while the power at the p < .05 level was .48. If the typical p < .05 level is adopted and the desired power is .80, then it would take 528 participants to detect the effect size obtained in the current study. This calculation suggests that future research directed toward examining the interaction between procedural features of the selection system and the hiring decision must involve fairly large samples. Alternatively, it may be possible to develop realistic situations in which the effect size associated with this interaction is larger. Finally, given the effect size in this study, one might be concerned about its practical significance; this issue is addressed below.

When examining the discrepancy between the findings of the current study and those of Kluger and Rothstein (1991, April), it may be worthwhile to consider again the issue of power. The effect size of an interaction between procedural factors and the hiring decision depends largely on the magnitude of the procedural effect; if the procedural effect is small, the effect size of the interaction will be substantially reduced. In the Kluger and Rothstein study, the procedural manipulation was cognitive ability versus biodata selection tests.
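The required-sample-size figure above can be approximated directly from the noncentral F distribution. The following sketch assumes Cohen's (1988) conventions (noncentrality lambda = f-squared times N, with f = sqrt(eta-squared / (1 - eta-squared))), treats the design as 3 x 2 for this interaction, and uses standard scipy routines; it illustrates the calculation rather than reproducing the original power analysis, and small discrepancies from 528 reflect rounding of f:

    from scipy import stats

    def anova_power(n_total, f=0.137, df_effect=2, n_cells=6, alpha=0.05):
        """Power of a fixed-effects ANOVA F test via the noncentral F."""
        df_error = n_total - n_cells          # a 3 x 2 design has 6 cells
        nc = f ** 2 * n_total                 # noncentrality, Cohen's convention
        crit = stats.f.ppf(1 - alpha, df_effect, df_error)
        return 1 - stats.ncf.cdf(crit, df_effect, df_error, nc)

    n = 12
    while anova_power(n) < 0.80:
        n += 6                                # keep the cells balanced
    print(n, round(anova_power(n), 3))        # lands near the 528 reported above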
Although differences were found between these two tests on a number of perceptual measures, it is likely that the magnitude of the procedural effect was insufficient to produce a significant interaction. In addition, the study involved a simulated selection environment in which no real job was on the line, and as such, the outcome effect may also have been reduced. Given the sample size of Kluger and Rothstein's study (n = 151) and my best estimate of the size of this effect, it is likely that they had insufficient power to detect an interaction between test type and hiring decision.

In addition to the theoretical importance of the interaction between job relatedness and hiring decision on outcome fairness, this finding raises some practical concerns. As was mentioned in the introduction to the present study and in the hypotheses section, organizations are likely primarily concerned with the reactions that hired applicants have to the selection process. The findings with regard to outcome fairness would suggest that for hired applicants, the job relatedness of the selection system does not really matter. However, before we conclude that the study of perceived fairness is purely academic, or important only from an individual rather than an organizational perspective, it is important to recall that no interaction was found between job relatedness and hiring decision on the procedural fairness measure. That is, job relatedness influenced perceptions of procedural fairness just as strongly for rejected applicants as for accepted applicants. As considerable prior research has demonstrated a relationship between procedural fairness and a variety of organizational outcomes (e.g., Konovsky & Cropanzano, 1991), procedural fairness effects can certainly be seen as important.

Self-efficacy. The second set of interaction hypotheses of considerable interest were those predicting interactions between the procedural manipulations and the selection decision on the self-esteem and self-efficacy measures. Procedural justice was predicted to have an adverse effect on the self-concepts of those who were rejected. The significant interactions between job relatedness and selection decision on the two self-efficacy measures supported this hypothesis. Combining the findings of the two interactions, what was demonstrated is essentially that as the job relatedness of the selection process increases, the self-efficacy of hired applicants also increases, but the self-efficacy of rejected applicants decreases.

In terms of hired applicants, the selection process may almost serve the function of an organizational development intervention. The more job related or procedurally fair the selection system is, the higher the self-efficacy of those applicants who are hired. Prior research has consistently demonstrated a link between self-efficacy and work performance (Gist & Mitchell, 1992), such that by increasing the perceived fairness of the selection process, an organization may be increasing productivity. While the results of the current study demonstrated the effects of the selection system on self-efficacy, a link was not observed between self-efficacy and performance. The correlations between self-efficacy and performance were consistently nonsignificant for both the quantity (rs = .10 and .10) and quality (rs = -.05 and -.04) dimensions.
These results suggest that participants may not have had an accurate understanding of what the job entailed at the time they completed the self-efficacy measures. Future research examining this issue would likely demonstrate a stronger relationship between post-selection self-efficacy and actual work performance if applicants were applying for jobs in which they had a stronger a priori understanding of the job content. It is also possible that the motivational enhancement demonstrated through the self-efficacy measures by individuals in the work sample test condition did not transfer past the training session to performance on the job. It would have been interesting to reassess self-efficacy either following the first work session or preceding the second work session to see whether the job relatedness effect on efficacy was maintained following performance of the task.

To the same extent that higher job relatedness increases self-efficacy among hired workers, job relatedness is also associated with a decrease in self-efficacy among rejected applicants. This is something of a Catch-22 in developing selection systems that appear procedurally fair. While the attitudes of rejected applicants toward the selection process and hiring decision are positively influenced by job relatedness, their self-efficacy is reduced. This is an interesting finding that deserves subsequent research on both the processes underlying the effect and the relationship of this effect to other outcomes relevant to rejected applicants, such as continued motivation to search for a job.

Social psychology research on self-concept (McFarland & Ross, 1982) and attribution theory (Weiner, 1985) was used to generate the hypothesized interaction between procedural factors and hiring decision on the self-concept measures. In particular, it was suggested that as the selection process became more job related or more procedurally just, applicants would be less able to make external attributions regarding their performance in the selection process, and the feedback inherent in the hiring decision would become internalized. While the results for self-efficacy suggest that this process took place, it would be useful to actually assess attributions associated with the selection process and see whether they are more internal with a highly job related selection process than with one that is less job related. Subsequent research could also examine whether this interaction is moderated by locus of control or initial self-esteem, as would be predicted from some of the self-concept (Baumgardner et al., 1989) and attribution literature (Weiner, 1985), as well as from prior theories of applicants' responses to selection (Iles & Robertson, 1989).

Another need for subsequent research is to examine the impact of lowered self-efficacy on future job search behavior. If individuals believe they could not have performed the job very well (i.e., low self-efficacy), they may be dissuaded from applying for future jobs of that nature. In addition, if the negative impact on self-concept is less specific than self-efficacy toward that particular job, motivation to continue the job search process in general may be reduced. Although a similar interaction was not found with general or affective self-esteem, future research could examine the impact of rejection on self-esteem toward the job search process (Ellis & Taylor, 1983).
In addition, research tracing the job search process of a single individual may be able to document the effects of repeated rejection on applicants' motivation. At this point, it is not known how detrimental one job rejection may be, although we can probably all recall instances when we, or people we know, were deflated after a single job rejection.

A final issue to consider regarding the negative impact of job relatedness on rejected applicants' self-concept is whether this effect can be minimized by a post-rejection explanation that highlights external factors that led to the rejection. During a recent job search, I was told by one potential employer that if I did not get the job, it would not be because I was not qualified; rather, it would be a lack of match between my interests and their needs. As was discussed previously regarding the effects of explanation, it would likely be worthwhile for research to examine the utility of providing applicants with informative and constructive feedback regarding a hiring decision. This means of increasing the procedural fairness of the hiring process may also serve to decrease the negative effects of rejection on self-concept.

Limitations

As with any research project, it is worthwhile to take some of the wind out of the sails and discuss the limitations of the study. Three problematic issues warrant discussion at this point: a) the confounding of a test difficulty effect with the job relatedness manipulation, b) the problems with the explanation manipulation, and c) a problem with participants' motivation. While the first two issues produced problems when interpreting results, the final issue may have decreased the magnitude of a number of the effects and may have diminished the external validity of the study.

In the discussion of the job relatedness effects, I noted the confounding of test difficulty with the job relatedness manipulation. The test difficulty effect was considerably smaller than the job relatedness effect and had a different form, such that any differences between the work sample test and the other two tests could be interpreted as a job relatedness effect. However, difficulties in interpretation arise with effects that were not linearly related to the ordering of test job relatedness, such as with some of the test attitude survey measures or with the performance quantity measure. Effects on these measures appear to be a function of test difficulty because they are similar in form to the test difficulty effect; however, there may be other differences among the tests to which the effects can be attributed. In addition, some of the other job relatedness effects, such as the interactions between job relatedness and selection decision on self-efficacy, may have been affected by factors other than job relatedness, such that the means are not a perfect match to the hypotheses.

One means of examining the possibility that other differences among the tests were manipulated in addition to job relatedness is to examine test type effects on the ten procedural dimensions of the selection system fairness measure. As not all of the dimensions demonstrated adequate internal consistencies or discriminant validity, these analyses should be interpreted cautiously.
A 3 (test type: work sample vs. cognitive ability vs. overt integrity) x 2 (explanation vs. no explanation) x 2 (hired vs. rejected) MANOVA on the 10 procedural dimensions demonstrated significant effects for test type (F(20,480) = 10.11, p < .05), explanation (F(10,239) = 3.58, p < .05), and hiring decision (F(10,239) = 5.07, p < .05). Of greatest interest here are the univariate results with regard to test type, for which significant effects (p < .05) were found on job relatedness, opportunity to perform, fakability, feedback, selection information, honesty, and bias suppression. The means for the job relatedness effect are comparable to those for the job relatedness manipulation check, in that the work sample test was seen as considerably more job related than the other two tests. For all the other procedural dimensions on which significant test type effects were found, the means indicate that the cognitive ability and work sample tests did not differ substantially, but the overt integrity test was rated much lower. Given the intercorrelations of these dimensions, it is difficult to determine where the basis for these effects lies. However, these results do indicate that it is difficult to manipulate a single procedural dimension without also manipulating unintended procedural aspects. This is particularly the case when realistic and commonly utilized tests are adopted. A "cleaner" approach to manipulating job relatedness could involve a single test format with test items of varying job relatedness. Of course, this approach would maximize internal validity at the expense of generalizability.
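A MANOVA of the kind described above can be specified compactly with statsmodels. The sketch below uses synthetic data and hypothetical scale names for the ten procedural dimensions, so it illustrates the model setup rather than reproducing the reported analysis:

    import numpy as np
    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    rng = np.random.default_rng(0)
    n = 264  # synthetic sample; a stand-in, not the study's data

    # Hypothetical labels for the ten procedural dimensions
    dims = ['job_rel', 'opp_perform', 'fakability', 'feedback',
            'sel_info', 'honesty', 'bias_supp', 'two_way',
            'consistency', 'propriety']
    df = pd.DataFrame(rng.normal(size=(n, len(dims))), columns=dims)
    df['test_type']   = rng.choice(['work_sample', 'cognitive', 'integrity'], n)
    df['explanation'] = rng.choice(['yes', 'no'], n)
    df['decision']    = rng.choice(['hired', 'rejected'], n)

    # All ten dimensions on the left; fully crossed factors on the right
    formula = ' + '.join(dims) + ' ~ test_type * explanation * decision'
    print(MANOVA.from_formula(formula, data=df).mv_test())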
A second limitation of the current study is the inability to draw conclusions with regard to the explanation manipulation. A number of possible reasons why this manipulation did not influence the dependent measures in the predicted manner have been proposed, but these suggestions are largely speculative. At best, I can conclude that my particular manipulation of explanation did not influence perceived fairness but did demonstrate an interesting interaction on recommendation intentions; at worst, I could suggest that explanation in the selection context backfires and leads to lower satisfaction and work quality. It would have been useful to conduct individual debriefing sessions with participants to see whether they perceived something wrong with the explanation.

A final limitation of this study is a problem with some participants' motivation. After running a number of sessions, it became obvious that some participants were completing the selection process only in order to obtain course credit. As mentioned in the method section, six individuals refused to participate in the coding task after being selected, and three individuals did not return for the second coding session after completing the first. Although three of the six individuals refused to participate in the coding task because of schedule conflicts, the other three refused because they did not want the money and had participated in the selection process only to obtain course credit. Two of these people even told me that they had actively attempted to "fail" the selection test so that they would not be selected. The other informal feedback I received regarding a motivation problem was that a number of participants audibly groaned when I announced that they were in the selected group. This is not likely something that happens in typical selection situations. As it is not possible to determine the extent of this motivation problem, a question to be addressed is how it may have influenced the results of the study.

It seems that lack of motivation would tend to minimize or limit the magnitude of my findings. If a person was not interested in being selected but was in fact hired, they would likely rate the outcome fairness as low. Similarly, if low-motivation participants were not selected, they would likely interpret the outcome fairness favorably. These trends run in the opposite direction to the hypotheses and the observed results, which indicates that, at least for outcome fairness, the results may have tended to underestimate the true effects. Similarly, with regard to the other effects, it seems that the motivation problem may have limited the magnitude of the findings rather than confounded any of them.

The solution to this problem is straightforward: subsequent research using a temporary hiring and work situation, such as the one used in this study, should employ other means of recruiting participants. The reason that course credit was offered in this study was so that there would be some incentive for rejected applicants to stay and complete the dependent measures after they had been rejected. Subsequent research could more closely simulate an actual hiring situation by recruiting participants with posted notices or advertisements in a campus newspaper, and then paying the rejected participants to complete the final dependent measures. In this way, only those participants who were actually interested in being hired would apply for the job, and motivation problems would be minimized.

Future Research Directions

Throughout the discussion, suggestions for future research have been made. Rather than reiterate all of these suggestions, only the most salient or important issues are reconsidered here, and additional research directions are suggested. In addition, alternative research methodologies are discussed.

One research need that was mentioned earlier is an investigation of different forms of selection information. In the current study, selection information was presented primarily in writing, and the content focused mainly on the validity and suitability of the selection test for the journal coding task. Other forms of information could include greater detail on how tests are scored and how test scores are used to make selection decisions, as well as information on how individual participants performed and why they were, or were not, selected. This final type of information, which may be more aptly described as a manipulation of feedback informativeness, is similar to the information provided in some procedural justice studies of offering justification for a decision (Bies & Shapiro, 1988). In addition to presenting selection information in a written format, different presentation formats could be explored, such as individually oriented verbal presentation or verbal group presentation with discussion.

As well as continued investigation into the effects of offering selection information, future research should consider some of the other procedural rules presented in the conceptual model of perceived fairness. For instance, the importance of two-way communication and of consistency or standardization of administration could both be examined in a study of structured versus unstructured interviews. During telephone interviews with recent job applicants, I became aware of the diversity in reactions to standardized interviews. One individual described an interview as "Very structured, on a prepared form so each person was asked the same questions. My answers were rated on a standardized form.
In all, it was a very standardized process." Another person described what would appear to be a similar interview in the following manner: "It was like talking to a wall. He would read a question and then wait for an answer, giving no feedback or building on any of the questions." Research on the procedural issues surrounding the structured interview would advance our theoretical understanding of perceived fairness while offering practical advice to those designing selection interviews.

In addition to studying procedural justice in the selection process, future research should consider the alternative distributive justice rules. An interesting study might attempt to manipulate factors associated with different distributive rules and see whether, under some circumstances, people will rely on an alternative distributive rule, such as special needs, when making evaluations of distributive fairness. This research would be particularly timely given the changes introduced by the 1991 Civil Rights Act and the recent Americans with Disabilities Act.

In terms of other outcomes associated with perceptions of procedural and distributive fairness, it would be useful to study the impact these perceptions have on job application and acceptance decisions. While it may be difficult to study application decisions, job acceptance decisions could be studied in an organizational setting by assessing individuals' perceptions of procedural justice immediately following the selection process and then examining whether individuals who meet the selection criteria accept a job offer. It would also be interesting to study the impact of the selection process on other post-hire behaviors such as organizational citizenship behavior (OCB). OCB would be particularly interesting to study, as it is likely more directly under an individual's motivational control than other forms of work performance.

Future research could adopt any of a number of methodologies for investigating perceived fairness in selection. The current study demonstrates the utility of developing a highly controlled hiring and work situation. This approach is seen as more desirable than presenting participants with written descriptions of hiring practices (Stone & Herringshaw, 1991, April) or using computer-simulated hiring procedures in which no job is actually being offered (Kluger & Rothstein, 1991, April). Given that work was actually performed and compensation was offered for performance, the current paradigm offers many of the features of a typical hiring situation. If more conventional recruiting procedures were used with the current paradigm, the external generalizability would be even greater. Research could also be conducted in organizational settings by collecting selection system fairness data from groups of applicants both prior to and following the implementation of a new selection procedure. While random assignment would be sacrificed in this situation, the increment in external validity may make this approach a useful complement to the procedure used in the current study. A final methodology for studying these issues would be to trace a group of job seekers through the job search process and have them complete survey questionnaires following participation in selection procedures. This approach would likely be useful for obtaining diversity in hiring systems on many of the procedural dimensions assessed by the selection system fairness measure.
As such, this procedure would likely be a useful next step in assessing the psychometric properties of the selection system fairness measure and refining its scales.

The selection system fairness measure developed in this dissertation will likely provide a useful methodological contribution to future research. Although further research is needed to assess and refine this measure, the current results are encouraging in that good internal consistencies and factor structures were found for those scales on which systematic variance existed. If one compares the current fairness measure with the test attitude survey (TAS) developed by Arvey et al. (1990), several advantages of the current measure can be found. The selection system fairness measure has a strong theoretical basis in the organizational justice literature: scale domains were developed theoretically and refined empirically, and an attempt was made to use prior scale items when they were available. The TAS has general ties with some motivation research, but its scales were primarily empirically derived. An example of the empirical rather than theoretical nature of the TAS scales can be seen in the comparative anxiety dimension. Items in this dimension cover impressions of comparative performance, general performance, anxiety, and dislike for the test; as such, it is not readily apparent what higher scores on this dimension represent. Additionally, initial research indicates that the selection system fairness measure will likely have reasonable psychometric properties, whereas the factor analytic results presented for the TAS were far from convincing, and internal consistency estimates were quite low for a number of its scales (e.g., alpha = .56 and .58; Arvey et al., 1990). Although the purposes of the selection system fairness measure and the TAS may be different, it is clear that the selection system fairness measure has a number of advantages over the TAS.

General Conclusions

In this dissertation, perceived fairness was described as an important domain of study for both practical and theoretical reasons. It was demonstrated that past research has been both atheoretical and fragmented, and in an attempt to address these shortcomings, a conceptual model of perceived selection system fairness was developed. Organizational justice theory was used to provide a theoretical framework for examining issues of perceived fairness and to generate a number of hypotheses. Issues of procedural and distributive justice were both described as important to overall fairness reactions, and the interaction between these forms of justice was cited as an issue of both practical and theoretical importance. Links were proposed between perceived fairness and a number of individual and organizational outcomes.

In an attempt to examine a number of the issues proposed in the conceptual model, a study was conducted in which procedural justice was manipulated along the dimensions of job relatedness and selection information explanation, and distributive justice was manipulated in terms of a hiring decision. Main effects of the job relatedness of the selection procedure were found for perceived procedural and outcome fairness, as well as for test motivation. Explanation had no effect on perceived fairness and, contrary to expectations, had a negative effect on job satisfaction and work performance quality. The hiring decision had a main effect on perceived fairness and self-esteem.
Theoretically interesting interactions were observed between job relatedness and the hiring decision on measures of outcome fairness and self-efficacy. Finally, an unexpected pattern of results on a number of test attitude survey dimensions and on work performance quantity may have been the result of differences in the perceived difficulty of the three tests used to manipulate job relatedness.

A final methodological advance provided by this research was the development and initial assessment of a measure of perceived selection system fairness. This measure was initially based on the conceptual model presented in the introduction and was further developed with the results of a number of critical incident interviews with recent job applicants. Initial examination of the psychometric properties of this measure was encouraging, although more research is needed before all the scales can be refined. An additional outcome of the critical incident interviews is a possible refinement of the conceptual model. As mentioned in the method section, 10 procedural rules emerged from the interviews rather than the 13 initially proposed; refinements to the conceptual model could incorporate these changes.

Discussion of the results highlighted the practical and theoretical importance of this study. In addition to the substantive findings, this research demonstrated the applicability of organizational justice theories as a framework for generating testable hypotheses. In addition, a research paradigm was developed in which a useful balance between internal validity and external generalizability was struck. Future research issues that were discussed included extensions of the current findings and additional issues suggested by the model of perceived fairness. I believe what this dissertation has done is to chart a path for future research and then take the initial steps along that path.

LIST OF REFERENCES

Adams, J.S. (1965). Inequity in social exchange. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 2, pp. 267-299). New York: Academic Press.

Alderfer, C.P. & McCord, C.G. (1970). Personal and situational factors in the recruitment interview. Journal of Applied Psychology, 54, 377-385.

Alexander, S. & Ruderman, M. (1987). The role of procedural and distributive justice in organizational behavior. Social Justice Research, 1, 177-198.

Anastasi, A. (1988). Psychological testing (6th ed.). New York: Macmillan.

Arvey, R.D., Gordon, M.E., Massengill, D.P., & Mussio, S.J. (1975). Differential dropout rates of minority and majority job candidates due to "time-lags" between selection procedures. Personnel Psychology, 28, 175-180.

Arvey, R.D. & Sackett, P.R. (in press). Fairness in selection: Current developments and perspectives. In N. Schmitt & W. Borman (Eds.), Personnel selection. San Francisco: Jossey-Bass.

Arvey, R.D., Strickland, W., Drauden, G., & Martin, C. (1990). Motivational components of test taking. Personnel Psychology, 43, 695-716.

Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84, 191-215.

Bandura, A. (1982). Self-efficacy mechanism in human agency. American Psychologist, 37, 122-147.

Baumgardner, A.H., Kaufman, C.M., & Levy, P.E. (1989). Regulating affect interpersonally: When low self-esteem leads to greater enhancement. Journal of Personality and Social Psychology, 56, 907-921.

Bierhoff, H.W., Cohen, R.L., & Greenberg, J. (1986). Justice in social relations. New York: Plenum.
Bies, R.J. & Moag, J.S. (1986). Interactional justice: Communication criteria of fairness. Research on Negotiation in Organizations, 1, 43-55.

Bies, R.J. & Shapiro, D.L. (1987). Interactional fairness judgments: The influence of causal accounts. Social Justice Research, 1, 199-218.

Bies, R.J. & Shapiro, D.L. (1988). Voice and justification: Their influence on procedural fairness judgments. Academy of Management Journal, 31, 676-685.

Campbell, J.P. & Pritchard, R.D. (1976). Motivation theory in industrial and organizational psychology. In M. Dunnette (Ed.), Handbook of industrial and organizational psychology. Chicago, IL: Rand McNally.

Campion, M.A., Pursell, E.D., & Brown, B.K. (1988). Structured interviewing: Raising the psychometric properties of the employment interview. Personnel Psychology, 41, 25-42.

Cascio, W.F. & Phillips, N.F. (1979). Performance testing: A rose among thorns? Personnel Psychology, 32, 751-766.

Cleary, T.A. (1968). Test bias: Prediction of grades of Negro and white students in integrated colleges. Journal of Educational Measurement, 5, 115-124.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Erlbaum.

Cohen, R.L. (1987). Distributive justice: Theory and research. Social Justice Research, 1, 19-40.

Crant, J.M. & Bateman, T.S. (1990). An experimental test of the impact of drug-testing programs on potential job applicants' attitudes and intentions. Journal of Applied Psychology, 75, 127-131.

Cropanzano, R. & Folger, R. (1989). Referent cognitions and task decision autonomy: Beyond equity theory. Journal of Applied Psychology, 74, 293-299.

Crosby, F., Burris, L., Censor, C., & MacKethan, E.R. (1986). Two rotten apples spoil the justice barrel. In H.W. Bierhoff, R.L. Cohen, & J. Greenberg (Eds.), Justice in social relations (pp. 267-281). New York: Plenum.

Deutsch, M. (1986). Cooperation, conflict, and justice. In H.W. Bierhoff, R.L. Cohen, & J. Greenberg (Eds.), Justice in social relations (pp. 3-18). New York: Plenum.

Dipboye, R.L. (1977). A critical review of Korman's self-consistency theory of work motivation and occupational choice. Organizational Behavior and Human Performance, 18, 108-126.

Dipboye, R.L. & de Pontbriand, R. (1981). Correlates of employee reactions to performance appraisals and appraisal systems. Journal of Applied Psychology, 66, 248-251.

Dodd, W.E. (1977). Attitudes toward assessment center programs. In J.L. Moses & W.C. Byham (Eds.), Applying the assessment center method (pp. 161-183). New York: Pergamon.

Downs, S., Farr, R.M., & Colbeck, L. (1978). Self-appraisal: A convergence of selection and guidance. Journal of Occupational Psychology, 51, 271-278.

Ellis, R.A. & Taylor, M.S. (1983). Role of self-esteem within the job search process. Journal of Applied Psychology, 68, 632-640.

Farr, J.L., O'Leary, B.S., & Bartlett, C.J. (1973). Effect of work sample test upon self-selection and turnover of job applicants. Journal of Applied Psychology, 58, 283-285.

Feldman, J.M. & Lynch, J.G., Jr. (1988). Self-generated validity and other effects of measurement on belief, attitude, intention, and behavior. Journal of Applied Psychology, 73, 421-435.

Flanagan, J.C. (1954). The critical incident technique. Psychological Bulletin, 51, 327-358.
Folger, R. (1977). Distributive and procedural justice: Combined impact of "voice" and improvement on experienced inequity. Journal of Personality and Social Psychology, 35, 108-119.

Folger, R. (1986). Rethinking equity theory: A referent cognitions model. In H.W. Bierhoff, R.L. Cohen, & J. Greenberg (Eds.), Justice in social relations (pp. 145-162). New York: Plenum.

Folger, R. (1987). Distributive and procedural justice in the workplace. Social Justice Research, 1, 143-159.

Folger, R. & Greenberg, J. (1985). Procedural justice: An interpretive analysis of personnel systems. Research in Personnel and Human Resources Management, 3, 141-183.

Folger, R. & Konovsky, M.A. (1989). Effects of procedural and distributive justice on reactions to pay raise decisions. Academy of Management Journal, 32, 115-130.

Fryxell, G.E. & Gordon, M.E. (1989). Workplace justice and job satisfaction as predictors of satisfaction with union and management. Academy of Management Journal, 32, 851-866.

Gist, M.E. (1989). The influence of training method on self-efficacy and idea generation among managers. Personnel Psychology, 42, 787-805.

Gist, M.E. & Mitchell, T.R. (1992). Self-efficacy: A theoretical analysis of its determinants and malleability. Academy of Management Review, 17, 183-211.

Gomez-Mejia, L.R. & Balkin, D.B. (1987). Dimensions and characteristics of personnel manager perceptions of effective drug-testing programs. Personnel Psychology, 40, 745-763.

Greenberg, J. (1982). Approaching equity and avoiding inequity in groups and organizations. In J. Greenberg & R.L. Cohen (Eds.), Equity and justice in social behavior (pp. 389-435). New York: Academic Press.

Greenberg, J. (1986a). Determinants of perceived fairness of performance evaluations. Journal of Applied Psychology, 71, 340-342.

Greenberg, J. (1986b). The distributive justice of organizational performance evaluations. In H.W. Bierhoff, R.L. Cohen, & J. Greenberg (Eds.), Justice in social relations (pp. 337-351). New York: Plenum.

Greenberg, J. (1987a). Reactions to procedural injustice in payment distributions: Do the means justify the ends? Journal of Applied Psychology, 72, 55-61.

Greenberg, J. (1987b). A taxonomy of organizational justice theories. Academy of Management Review, 12, 9-22.

Greenberg, J. (1988). Equity and workplace status: A field experiment. Journal of Applied Psychology, 73, 606-613.

Greenberg, J. (1989). Cognitive reevaluation of outcomes in response to underpayment inequity. Academy of Management Journal, 32, 174-184.

Greenberg, J. (1990a). Employee theft as a reaction to underpayment inequity: The hidden cost of pay cuts. Journal of Applied Psychology, 75, 561-568.

Greenberg, J. (1990b). Organizational justice: Yesterday, today, and tomorrow. Journal of Management, 16, 399-432.

Greenberg, J. (1990c). Looking fair vs. being fair: Managing impressions of organizational justice. Research in Organizational Behavior, 12, 111-157.

Greenberg, J. & Ornstein, S. (1983). High status job title as compensation for underpayment: A test of equity theory. Journal of Applied Psychology, 68, 285-297.

Greenberg, J. & Tyler, T.R. (1987). Why procedural justice in organizations? Social Justice Research, 1, 127-142.

Harris, M.M. (1989). Reconsidering the employment interview: A review of recent literature and suggestions for future research. Personnel Psychology, 42, 691-726.

Harris, M.M. & Fink, L.S. (1987). A field study of applicant reactions to employment opportunities: Does the recruiter make a difference? Personnel Psychology, 40, 765-784.
Hartigan, J.A. & Wigdor, A.K. (1989). Fairness in employment testing: Validity generalization, minority issues, and the General Aptitude Test Battery. Washington, DC: National Academy Press.

Heilman, M.E., Simon, M.C., & Repper, D.P. (1987). Intentionally favored, unintentionally harmed? Impact of sex-based preferential selection on self-perceptions and self-evaluations. Journal of Applied Psychology, 72, 62-68.

Herriot, P. (1989a). Interactions with clients in personnel selection. In P. Herriot (Ed.), Assessment and selection in organizations (pp. 219-228). Chichester, England: Wiley.

Herriot, P. (1989b). Selection as a social process. In M. Smith & I. Robertson (Eds.), Advances in selection and assessment (pp. 171-187). Chichester, England: Wiley.

Huffcutt, A. (1990). Intelligence is not a panacea in personnel selection. The Industrial-Organizational Psychologist, 27(3), 66-67.

Hunter, J.E. (1989). The Wonderlic Personnel Test as a predictor of training success and job performance. Northfield, IL: Wonderlic.

Hunter, J.E. & Hunter, R.F. (1984). Validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72-98.

Hunter, J.E., Schmidt, F.L., & Rauschenberger, J.M. (1977). Fairness of psychological tests: Implications of four definitions for selection utility and minority hiring. Journal of Applied Psychology, 62, 245-260.

Huseman, R.C., Hatfield, J.D., & Miles, E.W. (1987). A new perspective on equity theory: The equity sensitivity construct. Academy of Management Review, 12, 222-234.

Iles, P.A. & Robertson, I.T. (1989). The impact of personnel selection procedures on candidates. In P. Herriot (Ed.), Assessment and selection in organizations (pp. 257-271). Chichester, England: Wiley.

Ironson, G.H., Smith, P.C., Brannick, M.T., Gibson, W.M., & Paul, K.B. (1989). Construction of a job in general scale: A comparison of global, composite, and specific measures. Journal of Applied Psychology, 74, 193-200.

Jacobson, M.B. & Koch, W. (1977). Women as leaders: Performance evaluation as a function of method of leader selection. Organizational Behavior and Human Performance, 20, 149-157.

Jones, E.E., Rhodewalt, F., Berglas, S., & Skelton, J.A. (1981). Effects of strategic self-presentation on subsequent self-esteem. Journal of Personality and Social Psychology, 41, 407-421.

Kahneman, D. & Tversky, A. (1982). Availability and the simulation heuristic. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 201-208). New York: Oxford University Press.

Kleiman, L.S. & Faley, R.H. (1988). Voluntary affirmative action and preferential treatment: Legal and research implications. Personnel Psychology, 41, 481-496.

Kluger, A.N. & Rothstein, H.R. (1991, April). The influence of selection test type on applicant reactions to employment testing. In R.R. Reilly (Chair), Perceived validity of selection procedures: Implications for organizations. Symposium conducted at the 6th annual conference of the Society for Industrial and Organizational Psychology, St. Louis, MO.

Konovsky, M.A. & Cropanzano, R. (1991). The perceived fairness of employee drug testing as a predictor of employee attitudes and job performance. Journal of Applied Psychology, 76, 698-707.

Landy, F.J., Barnes, J.L., & Murphy, K.R. (1978). Correlates of perceived fairness and accuracy of performance evaluation. Journal of Applied Psychology, 63, 751-754.
Landy, F.J., Barnes-Farrell, J., & Cleveland, J.N. (1980). Perceived fairness and accuracy of performance evaluation: A follow-up. Journal of Applied Psychology, 65, 355-356.

Leung, K. & Li, W. (1990). Psychological mechanisms of process-control effects. Journal of Applied Psychology, 75, 613-620.

Leventhal, G.S. (1980). What should be done with equity theory? New approaches to the study of fairness in social relationships. In K.J. Gergen, M.S. Greenberg, & R.H. Willis (Eds.), Social exchange: Advances in theory and research (pp. 27-55). New York: Plenum.

Liden, R.C. & Parsons, C.K. (1986). A field study of job applicant interview perceptions, alternative opportunities, and demographic characteristics. Personnel Psychology, 39, 109-122.

Lind, E.A. & Tyler, T. (1988). The social psychology of procedural justice. New York: Plenum.

London House (1984). Personnel Selection Inventory - PSI-1. Park Ridge, IL: London House.

London House (1991). Office Skills Assessment Battery - Skills. Park Ridge, IL: London House.

Lounsbury, J.W., Bobrow, W., & Jensen, J.B. (1989). Attitudes toward employment testing: Scale development, correlates, and "known-group" validation. Professional Psychology: Research and Practice, 20, 340-349.

Major, B. & Deaux, K. (1982). Individual differences in justice behavior. In J. Greenberg & R.L. Cohen (Eds.), Equity and justice in social behavior (pp. 43-76). New York: Academic Press.

Martin, C.L. & Nagao, D.H. (1989). Some effects of computerized interviewing on job applicant responses. Journal of Applied Psychology, 74, 72-80.

McFarland, C. & Ross, M. (1982). Impact of causal attributions on affective reactions to success and failure. Journal of Personality and Social Psychology, 43, 937-946.

Mikula, G. (1986). The experience of injustice: Toward a better understanding of its phenomenology. In H.W. Bierhoff, R.L. Cohen, & J. Greenberg (Eds.), Justice in social relations (pp. 103-123). New York: Plenum.

Moorman, R.H. (in press). The relationship between organizational justice and organizational citizenship behaviors: Do fairness perceptions influence employee citizenship? Journal of Applied Psychology.

Murphy, K.R., Thornton, G.C., III, & Prue, K. (1991). Influence of job characteristics on the acceptability of employee drug testing. Journal of Applied Psychology, 76, 447-453.

Murphy, K.R., Thornton, G.C., III, & Reynolds, D.H. (1990). College students' attitudes toward employee drug testing programs. Personnel Psychology, 43, 615-631.

Nacoste, R.B. (1990). Sources of stigma: Analyzing the psychology of affirmative action. Law & Policy, 12, 175-195.

Nisbett, R. & Ross, L. (1980). Human inference: Strategies and shortcomings of social judgment. Englewood Cliffs, NJ: Prentice-Hall.

Noe, R.A. & Steffy, B.D. (1987). The influence of individual characteristics and assessment center evaluation on career exploration behavior and job involvement. Journal of Vocational Behavior, 30, 187-202.

Normand, J., Salyards, S.D., & Mahoney, J.J. (1990). An evaluation of preemployment drug testing. Journal of Applied Psychology, 75, 629-639.

Olian, J.D. (1984). Genetic screening for employment purposes. Personnel Psychology, 37, 423-438.

Petty, R.E. & Cacioppo, J.T. (1981). Attitudes and persuasion: Classic and contemporary approaches. Dubuque, IA: Wm. C. Brown.

Powell, G.N. (1984). Effects of job attributes and recruiting practices on applicant decisions: A comparison. Personnel Psychology, 37, 721-732.
Premack, S.L. & Wanous, J.P. (1985). A meta-analysis of realistic job preview experiments. Journal of Applied Psychology, 70, 706-719.

Robertson, I.T. & Smith, M. (1989). Personnel selection methods. In M. Smith & I. Robertson (Eds.), Advances in selection and assessment (pp. 89-112). Chichester, England: Wiley.

Ross, M. & Sicoly, F. (1979). Egocentric bias in availability and attribution. Journal of Personality and Social Psychology, 37, 322-336.

Rynes, S.L. (1991). Recruitment, job choice, and post-hire consequences: A call for new research directions. In M.D. Dunnette (Ed.), Handbook of industrial and organizational psychology (2nd ed.). Palo Alto: Consulting Psychologists Press.

Rynes, S.L. (in press). Who's selecting whom? Effects of selection practices on applicant attitudes and behaviors. In N. Schmitt & W. Borman (Eds.), Personnel selection. San Francisco: Jossey-Bass.

Rynes, S.L. & Barber, A.E. (1990). Applicant attraction strategies: An organizational perspective. Academy of Management Review, 15, 286-310.

Rynes, S.L., Heneman, H.G., III, & Schwab, D.P. (1980). Individual reactions to organizational recruiting: A review. Personnel Psychology, 33, 529-542.

Sackett, P.R., Burris, L.R., & Callahan, C. (1989). Integrity testing for personnel selection: An update. Personnel Psychology, 42, 491-529.

Schmidt, F.L., Greenthal, A.L., Hunter, J.E., Berner, J.G., & Seaton, F.W. (1977). Job sample vs. paper-and-pencil trades technical tests: Adverse impact and examinee attitudes. Personnel Psychology, 30, 187-197.

Schmidt, F.L., Urry, V.W., & Gugel, J.F. (1978). Computer assisted tailored testing: Examinee reactions and evaluations. Educational and Psychological Measurement, 38, 265-273.

Schmit, M.J. & Ryan, A.M. (1992, April). Test-taking motivation: A moderator of criterion validity. Paper presented at the 7th annual conference of the Society for Industrial and Organizational Psychology, Montreal, Canada.

Schmitt, N. (1989). Fairness in employee selection. In M. Smith & I. Robertson (Eds.), Advances in selection and assessment (pp. 131-153). Chichester, England: Wiley.

Schmitt, N. & Coyle, B.W. (1976). Applicant decisions in the employment interview. Journal of Applied Psychology, 61, 184-192.

Schmitt, N., Ford, J.K., & Stults, D.M. (1986). Changes in self-perceived ability as a function of performance in an assessment centre. Journal of Occupational Psychology, 59, 327-335.

Schmitt, N. & Gilliland, S.W. (1991). Beyond differential prediction: Fairness in selection. Paper prepared for the conference on Human Rights and Employment: Interdisciplinary Perspectives, McGill University, Montreal.

Schmitt, N., Gooding, R.Z., Noe, R.A., & Kirsch, M. (1984). Meta-analyses of validity studies published between 1964 and 1982 and the investigation of study characteristics. Personnel Psychology, 37, 407-422.

Schmitt, N. & Noe, R.A. (1986). Personnel selection and equal employment opportunity. In C.L. Cooper & I. Robertson (Eds.), International Review of Industrial and Organizational Psychology, 1986 (pp. 71-115). Chichester, England: Wiley.

Schmitt, N. & Robertson, I. (1990). Personnel selection. Annual Review of Psychology, 41, 289-319.

Schneider, B. & Schmitt, N. (1986). Staffing organizations (2nd ed.). Glenview, IL: Scott, Foresman and Co.

Schuler, H. (in press). Social validity of selection situations: A concept and some empirical results. In H. Schuler, J.L. Farr, & M. Smith (Eds.), Personnel selection and assessment: Individual and organizational perspectives. Hillsdale, NJ: Erlbaum.
Schwab, D.P., Rynes, S.L., & Aldag, R.J. (1987). Theories and research on job search and choice. Research in Personnel and Human Resources Management, 5, 129-166.

Schwinger, T. (1986). The need principle of distributive justice. In H.W. Bierhoff, R.L. Cohen, & J. Greenberg (Eds.), Justice in social relations (pp. 211-225). New York: Plenum.

Seymour, R.T. (1988). Why plaintiffs' counsel challenge tests, and how they can successfully challenge the theory of "validity generalization." Journal of Vocational Behavior, 33, 331-364.

Sheppard, B.H. & Lewicki, R.J. (1987). Toward general principles of managerial fairness. Social Justice Research, 1, 161-176.

Smither, J.W. & Pearlman, K. (1991, April). Perceptions of the job-relatedness of selection procedures among college recruits and recruiting/employment managers. In R.R. Reilly (Chair), Perceived validity of selection procedures: Implications for organizations. Symposium conducted at the 6th annual conference of the Society for Industrial and Organizational Psychology, St. Louis, MO.

Society for Industrial and Organizational Psychology (1987). Principles for the validation and use of personnel selection procedures (3rd ed.). College Park, MD: Author.

Steel, R.P. & Ovalle, N.K. (1984). A review and meta-analysis of research on the relationship between behavioral intentions and employee turnover. Journal of Applied Psychology, 69, 673-686.

Steffy, B.D. & Ledvinka, J. (1989). The long-range impact of five definitions of "fair" employee selection on black employment and employee productivity. Organizational Behavior and Human Decision Processes, 44, 297-324.

Stone, D.L. & Herringshaw, C. (1991, April). Effects of the purpose of the test, perceived relevance, and use of test results on reactions to honesty testing. Paper presented at the 6th annual conference of the Society for Industrial and Organizational Psychology, St. Louis, MO.

Stone, D.L. & Kotch, D.A. (1989). Individuals' attitudes toward organizational drug testing policies and practices. Journal of Applied Psychology, 74, 518-521.

Taylor, M.S. & Bergmann, T.J. (1987). Organizational recruitment activities and applicants' reactions at different stages of the recruitment process. Personnel Psychology, 40, 261-285.

Teel, K.S. & DuBois, H. (1983, March). Participants' reactions to assessment centers. Personnel Administrator, 85-91.

Tharenou, P. (1979). Employee self-esteem: A review of the literature. Journal of Vocational Behavior, 15, 316-346.

Thibaut, J. & Walker, L. (1975). Procedural justice: A psychological analysis. Hillsdale, NJ: Erlbaum.

Tyler, T.R. (1987). Procedural justice research. Social Justice Research, 1, 41-65.

Tyler, T.R. (1989). The psychology of procedural justice: A test of the group-value model. Journal of Personality and Social Psychology, 57, 830-838.

Tyler, T.R. & Bies, R.J. (1990). Beyond formal procedures: The interpersonal context of procedural justice. In J.S. Carroll (Ed.), Applied social psychology and organizational settings (pp. 77-98). Hillsdale, NJ: Erlbaum.

Vecchio, R.P. (1982). Predicting worker performance in inequitable settings. Academy of Management Review, 7, 103-110.

Wanous, J.P. (1980). Organizational entry: Recruitment, selection, and socialization of newcomers. Reading, MA: Addison-Wesley.

Weiner, B. (1985). An attribution theory of achievement motivation and emotion. Psychological Review, 92, 548-573.

Wood, R. & Bandura, A. (1989). Social cognitive theory of organizational management. Academy of Management Review, 14, 361-384.
APPENDICES

APPENDIX A

Journal Coding Materials

REVIEW INFORMATION

Student Number:
Manuscript Number:
Blind Review (0=no; 1=yes):
Action Editor (1=Howell; 2=Lord; 3=Schmitt; 4=Murphy):
Editor's Decision (1=reject; 2=reject & resubmit; 3=accept-PR):

Editorial Board Reviewer:
Editorial Board Reviewer (use numeric code):
01 = Anderson    08 = Delel      15 = Guzzo      22 = Levine     29 = Saari
02 = Arvey       09 = Dipboye    16 = Heilman    23 = London     30 = Schmidt
03 = Billings    10 = Drasgow    17 = Jackson    24 = Millsap    31 = Schneider
04 = Bobko       11 = Fisher     18 = Jago       25 = Montague   32 = Stone
05 = Boudreau    12 = Freedman   19 = Jeanneret  26 = Murphy     33 = Taylor
06 = Campion     13 = Gordon     20 = Klimoski   27 = Pulakos    34 = Widaman
07 = Dalton      14 = Greenberg  21 = Larson     28 = Rynes      35 = Yukl

Time for review in days:
Reviewer Ratings (nine numbers) 1: 2: 3: 4: 5: 6: 7: 8: 9:
Length of review (to the nearest half page):

Ad Hoc Reviewer Number 1:
Time for review in days:
Reviewer Ratings (nine numbers) 1: 2: 3: 4: 5: 6: 7: 8: 9:
Length of review (to the nearest half page):

Ad Hoc Reviewer Number 2 (if present):
Time for review in days:
Reviewer Ratings (nine numbers) 1: 2: 3: 4: 5: 6: 7: 8: 9:
Length of review (to the nearest half page):

AUTHOR AND PAPER INFORMATION

First Author's Name:
Author's Gender (0=male; 1=female; 9=unknown):
Institutional Affiliation (1=academic; 2=other):
Length of Paper (including title page, figures, and tables): __ (pages)
Length of Introduction (to nearest half page):
Number of References:
Multiple Studies (0=no; 1=yes):

Code the Following Information For the Primary Study:
Research Setting (1=laboratory; 2=field; 3=method; 4=review): __
Research Design (1=experimental; 2=survey; 3=correlational; 4=other): __
Hypotheses Presented (0=no; 1=yes):
Number of Subjects:
Type of Subjects (0=college students; 1=other):
Content of the Paper (use numeric code): __
1 = Leadership
2 = Motivation
3 = Job attitudes/satisfaction
4 = Training or learning
5 = Polygraph or eyewitness accuracy
6 = Job performance or performance ratings
7 = Stress
8 = Statistics
9 = Personnel selection
10 = Utility
11 = Turnover or absenteeism
12 = Negotiation, labor, union research
13 = Organizational psychology (groups, climate, commitment, justice, power)
14 = Gender/family issues, life satisfaction
15 = Organizational entry, recruiting
16 = Job evaluation, compensation
17 = Job analysis
18 = Other

DO NOT COMPLETE:
Final acceptance decision (1=reject; 2=accept): ___
Years since Ph.D.:
Primary Type(s) of Data Analysis:
Correlations and/or regressions (0=no; 1=yes)
Analysis of Variance (0=no; 1=yes)
Factor Analysis (0=no; 1=yes)
Lisrel, CFA, Path Analysis (0=no; 1=yes)
Other (0=no; 1=yes)

CODING INSTRUCTIONS

Blind review: Indicated on either the front of the folder or the first page of the manuscript by the word BLIND.

Action editor: Look at who wrote the decision letter to the author on Journal of Applied Psychology letterhead.

Editor's decision: Can be coded from front of folder, but should be verified by examining the decision letter. R = reject; R&R = reject and resubmit; APR or A = accept pending revision.

Ed. Brd. Reviewer: Look at front of folder and find first name that matches one of the 35 names listed, usually identified as Reviewer A.

Time for review: Look at front of folder and count the number of days between when the manuscript was sent to the reviewer and when the review was received. Assume each month has 30 days.

Reviewer ratings: Code from the rating sheets for each reviewer.

Length of review: The review is titled "Comments to the author."
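The "time for review" entries are the one place in this coding scheme where the coder must do arithmetic, and the 30-day-month convention matters whenever a review spans a month or year boundary. The short sketch below is illustrative only and is not part of the original coding materials (the function name and the month/day/year tuple format are my own assumptions); it simply encodes the convention stated above.

    # Illustrative sketch: "time for review in days" under the coding
    # rule that every month is assumed to have 30 days.
    def review_days(sent, received):
        """Days between two (month, day, year) dates with 30-day months."""
        def to_days(month, day, year):
            # Twelve 30-day months imply a 360-day year.
            return year * 360 + (month - 1) * 30 + day
        return to_days(*received) - to_days(*sent)

    # A manuscript sent to a reviewer on 12/6/90 whose review came back
    # on 1/13/91 is coded as (30 - 6) + 13 = 37 days.
    print(review_days((12, 6, 90), (1, 13, 91)))  # prints 37

The same convention applies to the review-turnaround items in the work sample test reproduced later in this appendix.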
Page length to the nearest half page, but assume a 1/2-page minimum.

Ad hoc reviewer 1: Code time for review, reviewer ratings, and length of review as was described for the Ed. Brd. reviewer.

Ad hoc reviewer 2: There may only be 2 reviewers listed on the front of the folder, but if there is a third, complete the same information as for Ad hoc reviewer 1.

All of the following information can be coded from the actual manuscript.

First author's name: Indicate first name and last name of the primary author.

Author's gender: From the first name, if you can determine gender.

Institutional affil.: Academic refers to universities, colleges, and technical institutes.

Length of intro: Count the number of pages up to, but not including, the method section (do not include the title or abstract). Indicate N/A (not applicable) if there is not a method section. Method section is usually labelled, but is sometimes not. It contains information on the number of subjects, the measures used, etc.

Number of references: Count references listed at the end of the article.

Multiple studies: Will be listed as study or experiment 1, 2, etc.

For the following questions, code information from the primary study. Undergrads should code information from the first study:

Research setting: Look in the method section:
1 = laboratory or classroom setting in which data is collected
2 = field (e.g., organization) setting
3 = method indicates a statistical paper that did not involve data collection, so there is no setting
4 = review indicates a review article or meta-analysis

Research design: Again, look in the method section:
1 = experimental, which involves the manipulation of one or more variables (i.e., the researcher actually changes the organization or influences the subjects in some way, such as providing different training for different groups of people). Can be a laboratory or field setting. For example, an experimental study of the effects of caffeine dosage on driving performance would give people varying doses of caffeine and look at differences in driving performance.
2 = survey or self-report format used to collect all of the data, and no manipulations occurred. A survey study of the caffeine-driving phenomenon might ask people with a questionnaire how much coffee they drink and how many accidents or tickets they have received.
3 = correlational, similar to survey in that no manipulation occurred, but some of the data was not collected with surveys. May have been collected from company records, from personnel tests, etc. A correlational study of caffeine and driving may correlate performance on a driving test with caffeine level in the blood.
4 = other is indicated with statistical or review papers.

Hypotheses presented: Hypotheses will be in the introduction and will often appear immediately prior to the method section. They may be formally separated from the other text or may be imbedded within a paragraph.

Number of subjects: Should be a subjects section in the method. Be sure to code the final number of subjects, as sometimes more than one number is listed (e.g., if subjects were dropped due to missing data). Add up the total number of subjects if more than one group is involved. Can also be jobs or other units being analyzed.

Type of subjects: Simply determine whether or not subjects were college students or MBAs.

Content of the paper: By reading the title and abstract of the manuscript, identify which one of the 17 categories best describes the article. If it is a methodological paper,
try to determine the content of the methodological advances (e.g., turnover or utility). What follows is a brief description of each content domain:

1 = Leadership: includes path-goal theory, least preferred coworker, vertical dyadic linkage theory, Vroom-Yetton theory, and any studies on leader characteristics.

2 = Motivation: includes expectancy theory, goal-setting, self-efficacy theory, intrinsic motivation, equity theory.

3 = Job attitudes/satisfaction: in addition to various types of job satisfaction, look for job diagnostic survey or job characteristics model. Should not indicate this category if satisfaction is only one variable, but the main purpose of the study is something else (e.g., leadership).

4 = Training or learning: effects of training interventions, transfer of training, managerial development, training needs assessment.

5 = Polygraph or eyewitness accuracy: lie detection, eyewitness accuracy, lineup identification, jury decision making.

6 = Job performance or performance ratings: performance domain, performance appraisal, performance evaluation, rater or ratee characteristics.

7 = Stress: job stress, coping, burnout, health issues, physiological reactions to aspects of work or life.

8 = Statistics: purely statistical articles that are not related to another domain such as turnover, personnel selection, or utility.

9 = Personnel selection: includes different types of predictors, such as interviews, cognitive ability tests, biodata/biographical data, personality tests, assessment centers, and specific ability tests.

10 = Utility: includes articles that try to assess the practical or economic benefits of selection, training, or some other organizational intervention. Are often methodological papers.

11 = Turnover or absenteeism: any articles that are primarily focused on turnover or absenteeism. Should not indicate this category if absenteeism/turnover is only a dependent variable, but the main purpose of the study is something else (e.g., motivation or recruitment).

12 = Negotiation, labor, union research: research related to negotiation processes, labor relations, and union commitment and satisfaction.

13 = Organizational psychology: this is a broad category, but look for articles with a primary focus on one of the following topics: groups, climate, culture, organizational commitment, organizational justice, power, organizational influence, role ambiguity, role conflict.

14 = Gender/family issues, life satisfaction: articles often deal with the roles men and women assume in organizations or the perceptions that subjects have about the "proper" role of men and women.

15 = Organizational entry, recruiting: includes job search, recruiting interviews, recruiting sources, realistic job previews, socialization, and person-organization fit.

16 = Job evaluation, compensation: also includes pay satisfaction, comparable worth, and incentive programs.

17 = Job analysis: also includes task analysis, task inventory, task ratings or judgments, job tasks, position analysis questionnaire (PAQ), and dictionary of occupational titles (DOT).

18 = Other: anything that does not fit into one of the above 17 categories.

APPENDIX B

Recruiting Script

I would like to tell you about a project I am doing that I would like to recruit your help with. I am doing a study on how manuscripts are selected to appear as articles in a major psychology journal. I need your help compiling information for this study. I have a work task that involves coding journal manuscripts on a variety of characteristics.
You should find this task interesting. It will require about 5 hours of your time and you will be paid $25 for your help. Your help on this project will be required over two sessions, so you will have to come on two days. The first session will be approximately 3 hours and the second will be 2 hours and will take place about two or three days after the first. Finally, I need people that will be able to perform the task well. To determine those who can best complete this task, I will give a short selection test in the first session. In this way it will be much like applying for a job. That is, taking the selection test does not guarantee you will be selected to perform the task. Your instructor has graciously consented to offer you course credit for taking the selection test. Let me explain how this works. If you sign up and take the selection test, as well as answer a few additional questions, you will receive course research credit. You will receive this credit regardless of whether or not you are selected. Your instructor will explain how much the credit can impact your grade. Those that are selected will be asked to perform the journal coding task for 4 hours and will receive $25 for doing so. Are there any questions?

Initial Experiment Script

Welcome to the Journal Coding Project! Before we begin, let me tell you a little bit about this project. Some of my colleagues and I are interested in studying the process by which journal articles are reviewed and how decisions are made to accept or reject these articles. Practically, research of this nature is of interest to academic people who spend a sizable portion of their time conducting research and publishing the results in journals. In addition, we hope that this research will tell us something about judgment and decision processes in general. When you people go out and apply for jobs after college, organizations will be making judgments about your potential as an employee and making a decision of whether or not to hire you. General research on judgment and decision making can help us understand this process, and may eventually lead to better decisions. Your part in this study is to help us code information from different manuscripts and from reviewers' and editors' decisions. Let me start by offering a little explanation of the manuscript review process. The journal that we are concerned with in this project is the Journal of Applied Psychology. When someone wants to publish an article in this journal, they send copies of the manuscript to the journal editor. The editor then sends the manuscript out to 2 or 3 reviewers who read it, evaluate it, and make comments on it. When the reviews are sent back to the editor, he or she makes a decision on whether to accept or reject the paper, or whether to give the author a chance to revise the paper and resubmit it another time. Very few papers are accepted the first time; most are either rejected outright or rejected with the chance to revise and resubmit the paper. Your coding task will involve looking through a file that contains a copy of the paper, a copy of the reviewers' evaluations and comments on the manuscript, a letter from the editor to the author explaining the decision, and possibly a number of other letters or papers, which are not of relevance to your task. You will have a coding sheet and instructions, and you will have to record information on the coding sheet that is contained in the file.
For example, you will have to code who the editor was, how long the reviewers took to complete their reviews, what ratings the reviewers gave the manuscripts, and what the editor's decision was. You will also have to code the name of the author and a number of characteristics of the manuscript, such as length, number of references, type of study, and study content. Instructions and training will be provided on how to code each of these information items. You will probably find that it takes you between 10 and 15 minutes to code a single manuscript, although you should get faster with practice. This task will require attention to detail and an ability to find information and code it. It is very important that you are accurate when you do the coding. In total, we are going to have over 1000 manuscripts coded by students like yourselves. We cannot go through and check the accuracy of all the coding you do, because there are just too many manuscripts. Therefore, I would like to emphasize again that you should try to be as accurate as possible when doing this coding. Another aspect of this task that I would like to highlight is the fact that all these manuscripts contain important and confidential information. You must make sure that you do not misplace any of the materials contained in the files, as they are the confidential property of the Journal of Applied Psychology. Although we need many people for this task, we do want to make sure we have people who can perform the task well. Therefore, just as would be the case if you were applying for a job, we are going to use an employment test to select people. We will be selecting about half of you to perform the task, and our decision will be based on your performance on a test. Those who are selected to perform the task will code manuscripts today after the selection test, and then again in a couple of days. Those that perform the coding task will be paid $25 for their work. Payment will come at the end of the second session. Before I give you this test, I would like to ask you to fill out a brief application form. This form summarizes the task you will be performing and the nature of the selection process. We ask that you print your name and student number on the form, and then sign the bottom indicating your willingness to complete the selection test. On the back of the form, there are a few questions asking how well you think you would be able to perform the task and how well you think you will do on the selection test. Your responses to these questions will not be used to make the selection decision and will only be looked at following your completion of the coding task. We are only interested in whether people can make a reasonable prediction of how well they may do based on the brief description we have given you about the task.

Script That Follows Testing

Thank you for completing the employment test. We will now score these tests and make our decisions of who will be selected. While we do the scoring, we would like you to complete a short questionnaire that assesses some of your reactions to taking the test. This questionnaire is unrelated to our decisions of who will be hired and is unrelated to the coding task. It is of interest to another researcher who is interested in finding out how people react to paper-and-pencil employment tests. I would appreciate you taking the time to complete this questionnaire honestly. In addition to helping my fellow researcher out,
it will give you something to do while we score the employment tests.

APPENDIX C

Application Form

You are applying for the task of coding journal manuscripts and editorial information. The task will require that you read a number of different information items contained in a file, determine where the information that needs coding can be found, and then code that information. All of this will have to be done at a high level of accuracy, with much attention to detail. Instructions and training will be provided prior to performing the coding. In return for completing approximately five (5) hours of coding, you will receive $25.00 compensation, which will be provided at the end of the second coding session. As a means of selecting appropriate people to perform this task, we are asking you to complete a brief employment test. Depending on your performance on the test, we will be selecting approximately one half of the people that take the test to perform the coding task. Your responses and performance on this test will be kept strictly confidential and will only be examined by the individuals running this project. Just as with any job, your participation in the selection process and in the coding project is voluntary, and you may choose not to participate or to quit participating at any time. If you have any questions at this time, please ask the project coordinator.

I have read and understand the above description. I am willing to participate in the selection process by completing the employment test. I understand that completion of the employment test does not guarantee that I will be selected to perform the task. I also understand that I can discontinue my participation at any time, without penalty.

Name: (please print)
Student Number:
Signature:
Date:

PLEASE TURN OVER AND ANSWER QUESTIONS

How Well Do You Expect to Do?

Please indicate the extent to which you agree with each of the statements listed below. Use the following five-point scale and then fill in the number corresponding to your answer in the space to the left of the statement.

1 = Strongly Disagree
2 = Disagree
3 = Neither Agree nor Disagree
4 = Agree
5 = Strongly Agree

Please provide a rating of agreement for each of the statements. Your responses to these questions will in no way influence the decision to hire you to perform the coding task, and will not be examined until after you have completed the coding task.

1. I expect to perform well on the employment test.
2. I expect that I will be selected.
3. I don't expect to do well on the employment test.
4. I expect that I will do well enough on the employment test to be selected.
5. I don't like my chances of being selected.
6. I think that I would be able to perform the coding task about as accurately as the average person doing the task.
7. I would expect to be among the top performers in terms of the speed at which I could perform the coding task.
8. I do not think that I could perform the coding task accurately.
9. I think that I could perform the coding task at least as fast as average.
10. I expect that I would be one of the most accurate coders.
11. I do not think that I would be very fast at performing the coding task.

APPENDIX D

Selection Tests

Includes:
1. The entire work sample test.
2. Example items from the Wonderlic cognitive ability test.
3.
Example items from the Personnel Selection Inventory.

WORK SAMPLE TEST

JOURNAL CODING TEST
Instruction and Question Booklet

DO NOT TURN THE PAGE UNTIL INSTRUCTED TO DO SO BY THE TEST ADMINISTRATOR

INSTRUCTIONS: This test assesses your ability to locate information from an assortment of documents and a manuscript, and to code this information quickly and accurately. The packet of materials contained in this coding test booklet includes the following:

1. Two pages of instructions titled "Coding Instructions" which tell you what to code and where to find the needed information.
2. A Manuscript Information Sheet which contains information needed to code Items 1 - 6.
3. A set of pages labelled "Reviews" which contains information you will need to code Items 7 - 10.
4. A "Manuscript" itself which will be the source of info for Items 11 - 20.

You will have 12 minutes to complete this test. Please work quickly and accurately, as you will be scored on the number of correct information items that you are able to code. Do not worry if you are unable to finish coding the manuscript, because this test is designed in such a way that few people should finish. Finally, do not make any marks in this instruction and question booklet. Answer all of the questions in the answer booklet. If you have any questions at this time, signal the test administrator. Please wait until the test administrator instructs you to begin before you turn this page.

DO NOT WRITE IN THIS BOOKLET. MAKE ALL RESPONSES ON THE ANSWER SHEETS.

14. Length of Paper: Count the number of pages including title page, abstract, figures, and tables (to the nearest page).

15. Length of Introduction: Count the number of pages up to the section of the paper titled "Method". Do not include the title page or abstract. Count to the nearest half page.

16. Number of references: Count the number of references listed at the end of the article. "References" will be clearly identified at the top of the page at which the list of references begins.

17. Hypotheses Presented: Hypotheses will be in the introduction of the paper and will often appear immediately prior to the METHOD section. They may be formally separated from the other text or may be imbedded within a paragraph. Key words to look for that indicate hypotheses are "Hypothesize" and "Predict".

18. Number of subjects: There should be a subjects section in the method. Be sure to code the final number of subjects, as sometimes more than one number is listed (e.g., if subjects were dropped due to missing data). Add up the total number of subjects if more than one group is involved.

19. Type of subjects: Simply determine whether subjects were college students (including MBAs).

20. Content of the paper: By reading the title and abstract of the manuscript, identify which one of the 6 categories described below best describes the content of the article.

1 = Motivation: includes expectancy theory, goal-setting, self-efficacy theory, intrinsic motivation, equity theory.

2 = Job performance or performance ratings: performance domain, performance appraisal, performance evaluation, rater or ratee characteristics.

3 = Personnel selection: includes different types of predictors, such as interviews, cognitive ability tests, biodata/biographical data, personality tests, assessment centers, and specific ability tests.

4 = Organizational psychology: this is a broad category, but look for articles with a primary focus on one of the following topics: groups, climate, culture, organizational commitment,
organizational justice, power, organizational influence, role ambiguity, role conflict.

5 = Job analysis: also includes task analysis, task inventory, task ratings or judgments, job tasks, position analysis questionnaire (PAQ), and dictionary of occupational titles (DOT).

6 = Other: anything that does not fit into one of the above 5 categories.

CODING INSTRUCTIONS
For Recording Data About The Manuscript on the Answer Sheet

Answer sheet information items, with instructions for coding each:

Manuscript Information Sheet:

1. Manuscript Number: Code manuscript number (indicated by Ms #) from the upper left corner of the Manuscript Information Sheet.

2. Action editor: Code from upper right corner of the Manuscript Information Sheet.

3. Ed. Brd. Reviewer: Look at Manuscript Information Sheet for reviewer A and match the name with one of the 20 names listed on the Answer Sheet.

4. Blind Review: Look at Manuscript Information Sheet for the word "BLIND". If the word BLIND does not appear anywhere on the page, code it "No"; if it appears, code it "Yes".

5. Time for review A: Look at Manuscript Information Sheet and count the number of days between when the manuscript was sent to Reviewer A and when the review was received. These dates are located next to Sent and Recd on the lower right of the Manuscript Information Sheet. Assume each month has 30 days.

6. Time for review B: Look at Manuscript Information Sheet and count the number of days between when the manuscript was sent to Reviewer B and when the review was received. Assume each month has 30 days.

Reviews:

7. Length of review A: Look at the review from Reviewer A and determine the length of the written review to the nearest half page.

8. Reviewer A ratings: Code from the rating sheets the ratings Reviewer A gave the paper on each of the nine rating scales.

9. Length of review B: Look at the review from Reviewer B and determine the length of the written review to the nearest half page.

10. Reviewer B ratings: Code from the rating sheets the ratings Reviewer B gave the paper on each of the nine rating scales.

Manuscript Information:

11. First author's name: Indicate first and last name of the first author listed.

12. Author's gender: Code from the author's first name, if you can determine gender.

13. Institutional affiliation: Academic refers to universities, colleges, and technical institutes.

MANUSCRIPT INFORMATION SHEET

Editor: Howell                                        Ms # 90415
Title: Quality and quantity goals in a complex decision task: Strategies and outcomes.
Authors: Stephen W. Gilliland and Ron S. Landis
Recd. 12/2/90    Ack. 12/6/90

Reviewer A: Dr. Lisa M. Saari                Sent: 12/6/90
            Boeing Company
            Seattle, WA 98105-5428           Recd.: 1/13/91

Reviewer B: Dr. Alan Turner                  Sent: 12/6/90
            Department of Management
            School of Business               Recd.: 1/18/91
            Auburn University
            Auburn, AL 36849

REVIEWS

COMMENTS TO THE AUTHORS

Date: 12-6-90
Manuscript: #90-415, "Quality and quantity goals in a complex decision task: Strategies and outcomes" by Gilliland & Landis
Reviewer: A

1. I very much like the central idea behind the research reported here. I think that the process-oriented techniques in the decision making literature hold great promise for understanding motivated task behavior. Furthermore, there is a need for more research on two of the issues addressed here: strategy selection in goal setting, and the relation between decision process and outcomes.
(These positive comments about the central ideas of this study outweigh my feelings against yet another lab study on goal setting -- but just barely.)

2. The section on the definition of task complexity needs work. Are you accepting Wood's definition of task complexity? In order to be categorized as "complex", must a task be high on all three dimensions, or can it be complex if it is high on only one dimension? Further, how do the next two task dimensions you discuss (the requirement of strategy selection and multiple task performance dimensions) fit with the definition of task complexity? Are these additional dimensions of complexity, do these characteristics follow from Wood's definition, or what? These issues are more complex than your brief discussion suggests. If a task is complex in that it requires acts which are difficult for most people to perform (e.g., brain surgery), it does not follow that strategy selection is necessary for task performance, nor does it follow that there are multiple dimensions of performance. I think you can make life easier for yourself by not trying to define task complexity in the abstract. Rather, you should set the parameters of the type of task you want to study, and then relate it to existing discussions of complexity. You will have no trouble arguing that many real world tasks share key characteristics with your laboratory task (e.g., strategy selection affects task performance, task information is complex, both quality and quantity are important, etc.).

3. I suppose I accept your conclusion that research is needed which examines the effects of quality and quantity goals, manipulated separately. However, I have trouble imagining any results which would not be highly dependent on the task being studied, as the ways in which quality and quantity relate must be quite complex and specific to the task. In addition, as I got to this point in the manuscript, I became concerned that you had too many things going on in this study, and that it would be hard to pull it all together.

4. The first paragraph under "Strategies and Complexity" (p. 6) is really confusing. First, do you accept the assertion in the first sentence, that the only way goals affect performance is through the activation of strategies? I am not familiar with the Wood and Locke (1990) paper, but this seems like a somewhat controversial assertion, and is also at odds with most of Locke's earlier writing on the processes through which goals affect performance. The next sentence may help to explain this assertion, as it seems to elevate "strategies" to be all-encompassing, in that increasing effort is an example of a strategy. Does this mean that all task behaviors are strategies? The rest of the paragraph further confuses me. Are "new plans" the same as "task specific plans", and are both of these the same as "stored and developed task specific strategies"? Overall, this is a very confusing paragraph, mostly because you are trying to do too much here, and are not very clear on your arguments.

5. Given that you have seemed to define "strategy" to include almost all task-related behavior, the next few paragraphs reviewing approaches to strategy selection could cover almost anything. If one strategy is to increase effort, any approach which measures effort is a strategy assessment approach. You need to tell the reader more about what you mean by "task specific strategies", so that we can make sense of the techniques you mention. If the task is an information processing task,
does strategy mean the ways in which task-relevant information is acquired and combined? Or is this too narrow a definition of strategy?

6. The "recent advance" you briefly describe in the middle of page 7 sounds a lot like one which has only been around for thirty years. Most readers of JAP will know this technique by its usual name (policy capturing), so if you mean something different, you need to say so.

7. Your contrast of policy capturing and process tracing is basically correct, but you miss a key point. Information search techniques (you use the term process tracing, but that includes protocol analysis) focus on the acquisition of information, while policy capturing tells us something about how information is combined to make judgments. The choice of techniques should be driven by the type of information required to answer one's research questions. This point is made in the Ford et al. paper, and studies are referenced which expand on the differences between these two ways to measure somewhat different aspects of the decision process. Overall, I get the feeling that you have not examined much of the literature on policy capturing and process tracing.

8. The first paragraph under "The Current Study" is the first place where you have mentioned all of the elements of your study, and it introduces the variable of task difficulty. My earlier concerns that you are trying to do too much in this study are now confirmed. You have too many research questions and too many independent variables.

9. Perhaps as a result of having too much going on, the development of the hypotheses (pp. 9-10) is inadequate. The first prediction concerns the effects of quantity and quality goals. The nature of the expected interaction is not clear, because your description of expected effects is incomplete and confusing. You need to describe the predicted effects on quality and quantity separately. For example, the third sentence of the last paragraph on page 9 should address what effects on quantity are expected under difficult quantity goals and easy and difficult quality goals. The assertion that "either performance quantity or quality or both would be hindered" under difficult quantity and difficult quality goals is impossible to translate into testable predictions. What specific pattern of findings is being predicted? Either draw the predicted interaction, or describe it so clearly that the reader could draw it.

10. After describing the form of the interaction predicted, you need to give a clearer rationale for the predictions. "Based on previous research" is not enough. Earlier, you asserted that previous quality/quantity research is inadequate, because quality and quantity goals have not been manipulated separately. You need to explain how these predictions are based on this research.

11. The role of task difficulty needs much more explanation. Given that your description of the expected interaction for quality and quantity goals is not complete, it is hard to say that the further interaction with task difficulty is clear. More importantly, there is no rationale for the effects of task difficulty. Why would effects be stronger under easier goals? Further, as I mentioned earlier, it is not clear why this variable is in this study in the first place. You have attempted to set up the rest of the study, by arguing that quality and quantity goals have not been manipulated separately, and that strategy can be measured through process tracing. But why stick task difficulty into the study?
(If this were a proposal to do the research, I would strongly advise against adding this variable, but now that you have, you need to give some rationale for its inclusion, and provide a better rationale for the predictions.)

12. The paragraph which gives the predictions for information search strategies (top of page 10) is totally inadequate. To begin with, why these aspects of search behavior? Why are amount of information, search latency, and search pattern relevant strategies for this decision task? These indices of decision process (and others) have been used in decision research because they are markers for decision strategies, such as compensatory versus noncompensatory strategies. Are they indicators of effort in this study? I can buy the argument that a goal of high quality might get translated to more information search and more time taken, but why would the pattern of search be affected? Overall, you have not made a persuasive case that the strategies affected by goal difficulty are the strategies measured by this technique.

13. The task would seem to be one which is totally novel to the subjects (at least for the psychology students). Were subjects prescreened on knowledge of stocks and investments? Why this task with this population of subjects?

14. The amount of information available seems to be too small. Previous information search research would suggest that a four by five matrix is not large enough to allow for variation in the amount and pattern of search. While you did find some effects, there was probably a ceiling effect which reduced the size of the effects and may have made it more difficult to find the interactions you predicted. Research on information search generally tries to put subjects into a situation of moderate information overload, so that the effects of a manipulation have a better chance of being seen. I think that this would be particularly important when examining a quality/quantity trade-off.

15. The way in which you have operationalized task difficulty makes it similar to the concept of "similarity of alternatives". While you are correct that the process tracing literature has focused on a different operationalization of task difficulty, similarity of alternatives does have a long history in the decision making literature. See Payne (1982, Psychological Bulletin) for a brief summary. One prediction coming from this literature would be that similar alternatives (your difficult condition) would lead to more intradimensional search, at least in later trials.

16. On page 15 you report that quality and quantity are uncorrelated, suggesting that there was not a trade-off between quality and quantity. I presume that the correlation you report is across all subjects in all conditions. I'm not sure that this is a good index of a trade-off, because the 2x2 design (assuming that quality and quantity are affected by the manipulation) produces a lack of relationship across all subjects. Don't you need to examine the correlation within each cell, and for the easy versus difficult task? Furthermore, there might only be a negative correlation between quality and quantity within the high/high condition for the difficult task, because a trade-off between these two aspects of performance may only be induced when there is pressure to try to do both and when the task makes it hard to do both. Later (page 21) you come back to the trade-off issue and report there that there is a negative correlation in the difficult task (which is consistent with my arguments),
but you still don't separate the quality and quantity goal conditions.

17. It was a good decision to report strategy results for early, middle, and late trials, as previous research has generally found trial effects, particularly for novel tasks.

18. The findings for the pattern of search are rather strange. Specifically, it is not clear why quality goals do not affect pattern in the middle and last trials, even though the interaction is significant (page 19, Table 2 and Figure 4). Are you sure that you conducted the post-hoc tests correctly?

19. Much more should be said with regard to the relationship between search behavior and outcomes. In Table 1 and on page 24 you report that amount of search and performance quantity are strongly correlated. This is important information; if these measures of strategy and performance were not related, then the study could be criticized for not measuring the right dimensions of strategy. Likewise, Table 1 indicates that latency of search is related to performance quality and quantity. However, pattern of search is not related to either measure of performance. What relationships were expected between strategies and performance, and what do these results mean?

20. Given that there are some relationships between strategies and performance, I think you should do more to examine the moderating effects of search strategies on the goal-outcome relationship. I'm not sure how to get around the higher-order interactions, but couldn't you put the search variables into the ANOVA as a covariate and examine the effects of goals on performance? Alternatively, could you examine the correlations (as you do at the top of page 24) within some of the cells?

21. In summary, there are some interesting ideas and results here. You have demonstrated links between goals and search strategies, and search strategies and performance. I still think you tried to do too much in one study (something you can't change now), and you haven't developed your arguments well enough (something you can change).

[Reviewer A's rating sheet followed: nine 5-point scales covering suitability, overall quality, conceptualization, adequacy of measures, design, analysis and interpretation, clarity of presentation, contribution, and potential for improvement through revision. The photoreduced form is not legible in this copy.]

COMMENTS TO THE AUTHORS

Date: 12-6-90
Manuscript: #90-415, "Quality and Quantity Goals in a Complex Decision Task: Strategies and Outcomes" by Gilliland & Landis
Reviewer: B

Page 4, last sentence - You state here that "...the multi-dimensional nature of complex tasks commonly requires individuals to attend to both quantity and quality of performance". This is not necessarily the case only with complex tasks, as this statement implies. Simple arithmetic or simple assembly tasks require both quality (answer or assemble it correctly) and quantity.

Page 11 - Did you examine feedback effects on time spent on search strategies (e.g., did subjects spend more search-strategy time after receiving negative feedback than after receiving positive feedback?).

Page 12, first full sentence - Was the pilot sample of 40, which was used to establish the difficulty level of the goals for the subjects in your experiment, similar to your subjects? In other words, was the pilot group composed of the same proportion of psychology and business students as in your experiment? The business students in your study may have had an easier time with the task, given its business emphasis -- did you assess this?

Page 9, The Current Study, first sentence - Since you depict your study here and elsewhere (e.g., page 25, Conclusions, first paragraph, last sentence) as specifically involving a complex task, you need more explanation/evidence to support this statement. Some additional words incorporated here and possibly on page 8 would help.

Page 14, Manipulation Checks - How was the 4 (midpoint) of your goal difficulty manipulation check scale defined?

Page 26, first paragraph, last sentence - You state here that "...we believe our operationalization of task difficulty warrants further consideration in the decision making literature." Explain your reasons for this statement, particularly given the fact that it appears your operationalization of the two task difficulty conditions was not clearly perceived as all that different between the easy and difficult task conditions. Specifically, on page 14, you report that on a 7-point scale, the easy goal condition subjects rated goal difficulty on average 3.19 and the difficult goal condition average was 4.37, both around the midpoint of the scale.

Page 26, two sentences from the end - Your finding of quality goals being sacrificed for quantity goals is an interesting one, and one that anecdotal information suggests is an issue in organizations today. Further research in this area would be interesting.

[Reviewer B's rating sheet followed, using the same nine 5-point scales as Reviewer A's. The photoreduced form is not legible in this copy.]

MANUSCRIPT

[The sample manuscript, "Quality and Quantity Goals in a Complex Decision Task: Strategies and Outcomes" by Stephen W. Gilliland and Ron S. Landis, was reproduced here in photoreduced form; the reduced pages are not legible in this copy.]
833...... 8......» ...... an... 98.8 13 .8... 3 «>83 53 39...... ..u... 268...... 3388.. 89.. .... 3.8». 3... ... .858. 3.... ...8 8.5... ...... ..8.. .35... 8 8 ......8; 39:. 88...... 21.8.3...5939553..§o..9§..8.8323§;§....8388§.€8 83§.§3§§§8.§.825§6 3.9.8.3888... 59§8§3§§§§§33.8ea%€8.§. 8.88.583... 81.332233838298333388388388. >358: ou§< ...5. >50 9.. a. 8.58. 9. 8b.. 9. 5.. ......»- 13 4858...... 82...... .9580 3.138595 3.3980588535533938...» 53 95.8 1..» .8039. .9558. 82.8. 05 a. 580...»... 53 88-3.... 53.8.... < 58...... 883388.. .... ...... ...... .....E... ..5 »8 05 803.8 8938...... 95 2.0.8.6.... .553 88-3.... 53.8.5 ..5 .... 89.5 9...... 1.533.... .....8.....&8............8£..§.....8.893.835.8353883. - 33 3.85... .18 5 ...-on .55.... .110» 3.1.... 803.9. b.58- 8 9.838... 53.3.... < .9559: 5.1.... 8558...“... 9.. .3 1932.8 .8339. .53 3.5.... ..5 b.1.... 803.3 B8582... 5.385888838ubugu38uoo58389590583 ...... 8...... .8888 ...... .833... ...... 88.... ...: 8.1.... 82.8.. ....8. .... .883... ..z ...... ......833838.858..283§.3§8§§8893..8~5§.. :85... 658% $55.... 858.88.. 8 881 82... a u... .13 3.5.... 088....» u< gnu-g ...: 838.. 2.33.. ... .. 3.8... 83.8.. 5.9.8... .. 88...... 53 3088.33. hug 9. 9....»- 10- ..o 38....» 9.. 980m 85:30.5. .3 3.5.... ...5 n. . 8.82.0 ...... 8.1.6 1 6 2 .33 4.» 3% gang . . .2580 2.. ... 8.5.5. 3.3.98. .....p .68.. d .... ...83 an..." .8 3.3333335... .35.». ...-.89.. ...: 2...... ...... ... 822......» =3... ... .8583. ...; 832...... .8223: .88.. ... ... 5...: ... ... .. .235. .362 .8 .5233.» 32.3... 3.558.... .3 ...: ......a. .80 .28: .. d .55... a ..2 .. ...:m ..2 a. .3...» ..< .m .38.. ...... 82.2.. .2 ......o 803...... 35%? .68.. .. .o ...an ... .< ... .38.. $9813.33 ...... 03.3.. a ..o 35.6.8.2. :0 38.5. a... .»czna :8» 5.35:. a»: .0 33.5 .83: ...— .> .82.: .... .-.»... .9. gun—Sana; iguana .8833 5332 «.8832. u:- 5538n .5355 3.858. 9:25 .38... .33: A ..s. 5:28 ... ..2 .... 5...: 3. ..m .ségsgum ..z ....sfi... J. a 63.. .2383 .53.... 8.2;: H2 63...... .3: .3333... is... 3.3... ... c.0350... ... 8.5.5.. ...o; ...: 3...... 835%.... .88.. .2 Jam dove.“ an 3.33.3”. 5323335 680353 a»... 2.8.83... ca 3.8.... ngu an... ... 2.253... .... 3555...... ...: 53...... ...... .33 .63: .o ... 53.. ... u .13.... an...“ «N £223? ......a. .8» ... .8qu 9.. 8 5...... 9....» 3.3.5.... :3. v5 guano—26o Bush» .380 .33: .0 .5688 d .... 5.0580 :0 ... Son-m .92.: .3”. Q3335 i=2... ...: 55...... no.8... < 8553...... .5 .25. .8» ... 38.... .38.. ...... .8. ... .. .32.... .85—eon .. . ......so .... 5.5 .5382 228 ... .83—.... on 3.6..» .3... 8383: 9.3.6 338.. o... 9 3.38.8 .a .5352 38...... ...... ... as...» ...: assign; 9.. .32.... ......» 55.... ...: can... ...... Q .55....88 8:88 :8... 3:2... :58». a... .355". £8» 39...... 9628 9 Bush: $22. 2.. :8» 5.1..» 352.3 058.83 9 3.8. 398...... 9 353: bu§£b=3a 2.2.3 o... 38.8 3:2.» 3.388 28.... J»... .3380 .8388 a 9 3.38....— »53» .3» uasxo 5.3 .8385 3.38 »§§9=. ..o .38.... a 3.3385. h...... .2. 633.88 ... ......3. 32.5... .325... 9 3 8...... 9.. 2.. =82... .8..ch 33382.8. 23 ....- 32. 9.3838 .35... .539 862% :2: on 333 £588 2......— .o>u8u8n§§8 .35 .363 .9332.— 3 >3. 9.an .8» >52... .5328. 82.. .3 35. ... be. a. 833. .338» :50... .5 45.28 n.a.. 82: 5:... 538 3339......3353533iasgéa 3.333.383.5850... 9...»... ... a; .888 .328. 9.... ...... 5...... 858.. 5...; .. .858. .35.... 2.... .338. ... 8889:... 39.. £398 ...2. >3. 8.388 .682: 5.3» 2. 69a acme...— .28» 3...... .33 .26 3958.. eon—88... :8» 3.3.. :85... 68.9.8 .8» >553 >30 £3. 
[Figure 1. Recommendation Accuracy, with separate panels for the Easy Task and the Difficult Task.]

[Figure 2. Number of Elements Accessed, Latency of Search, and Pattern of Search, plotted across task repetitions.]

COGNITIVE ABILITY TEST

NAME ............................. Date ............. (Please Print)

READ THIS PAGE CAREFULLY. DO EXACTLY AS YOU ARE TOLD. DO NOT TURN OVER THIS PAGE UNTIL YOU ARE INSTRUCTED TO DO SO. PROBLEMS MUST BE WORKED WITHOUT THE AID OF A CALCULATOR OR OTHER PROBLEM-SOLVING DEVICE.

This is a test of problem solving ability. It contains various types of questions. Below is a sample question correctly filled in:

PLACE ANSWERS HERE

REAP is the opposite of 1 obtain, 2 cheer, 3 continue, 4 exist, 5 sow .................................................

The correct answer is "sow". (It is helpful to underline the correct word.) The correct word is numbered 5. Then write the figure 5 in the brackets at the end of the line. Answer the next sample question yourself.

Paper sells for 23 cents per pad. What will 4 pads cost? ............................................................

The correct answer is 92¢. There is nothing to underline so just place "92¢" in the brackets. Here is another example:

MINER MINOR - Do these words 1 have similar meanings, 2 have contradictory meanings, 3 mean neither the same nor opposite?

The correct answer is "mean neither same nor opposite", which is number 3, so all you have to do is place a figure "3" in the brackets at the end of the line. When the answer to a question is a letter or a number, put the letter or number in the brackets. All letters should be printed.

This test contains 50 questions. It is unlikely that you will finish all of them, but do your best. After the examiner tells you to begin, you will be given exactly 12 minutes to work as many as you can. Do not go so fast that you make mistakes, since you must try to get as many right as possible. The questions become increasingly difficult, so do not skip about. Do not spend too much time on any one problem. The examiner will not answer any questions after the test begins. Now, lay down your pencil and wait for the examiner to tell you to begin!
PERSONNEL SELECTION INVENTORY

[Sample Personnel Selection Inventory items and answer spaces.]

APPENDIX E

Explanation Manipulations

Work Sample Test

The employment test we are asking you to complete as part of the selection process has been chosen for at least three reasons. First, we need people that can perform the journal coding task quickly and accurately. Given that we will be coding so many manuscripts and that we are unable to check the accuracy of all your coding, it is particularly important that we select people who can perform the task accurately. The Journal Coding Test that you will complete has been developed solely for selecting people to perform this task. The types of questions that you will be asked to complete in the Journal Coding Test are the same types of information that you will be coding if you are selected to perform the coding task. Essentially, what we have done is to select a sample of coding items from the coding task and incorporate them into a test format.

Second, prior testing with the Journal Coding Test has indicated that people who do well on this test also tend to perform better in the actual coding task. That is, the Journal Coding Test is a good predictor of how well a person can perform the coding task. Finally, prior research with work related tests similar to this has consistently demonstrated that work related tests are good predictors of job performance.

We will be using your performance on the Journal Coding Test to decide whether or not to hire you to perform the task. More specifically, we have a particular score on the test which you must pass to be selected. This score has been developed based on prior testing and is related to an established minimally acceptable level of coding performance. We have found that people who reach this cutoff perform the task significantly better than those who do not. Therefore, it is in our best interests to use this test and this cutoff to select people to perform the coding task.

Verbal explanation. As was explained in the information sheet provided prior to the test, the Journal Coding Test has been developed specifically for selecting people to perform the coding task. The questions in the test were taken directly from the coding task, and further, the test has been demonstrated to be a good predictor of performance in the coding task. Regardless of how you did on the test, we hope that you can at least understand why we used this selection test to decide who to hire.
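The selection rule described in each of these explanations is a simple fixed cutoff. A minimal sketch of that rule in Python, with a hypothetical cutoff value (the actual passing score is not reported here, and, as the debriefing in Appendix F notes, the decision participants actually experienced was random):

    # Hypothetical illustration of the cutoff rule described above.
    # CUTOFF is an assumed value; the dissertation does not report the
    # actual passing score used in the cover story.
    CUTOFF = 30

    def hiring_decision(test_score):
        """Return True (hire) when the applicant's score meets the cutoff."""
        return test_score >= CUTOFF

    print(hiring_decision(34))  # True
    print(hiring_decision(27))  # False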
Cognitive Ability Test

The employment test we are asking you to complete as part of the selection process has been chosen for at least three reasons. First, we need people who can perform the journal coding task quickly and accurately. Given that we will be coding so many manuscripts and that we are unable to check the accuracy of all your coding, it is particularly important that we select people who can perform the task accurately. The General Personnel Test that you will complete has been chosen to select people to perform this task. The General Personnel Test assesses your ability to recognize information and problem solve, which are similar to the abilities needed to perform the coding and categorizing you will do on the job.

Second, prior testing with the General Personnel Test has indicated that people who do well on this test also tend to perform better in the actual coding task. That is, the General Personnel Test is a good predictor of how well a person can perform the coding task. Finally, the General Personnel Test is a nationally recognized test that is used to select people for a variety of jobs. Considerable prior research with this test has consistently demonstrated that it is a good predictor of job performance in jobs requiring clerical skills. The type of coding that you will do if you are selected requires the problem solving abilities that are assessed by the General Personnel Test.

We will be using your performance on the General Personnel Test to decide whether or not to hire you to perform the task. More specifically, we have a particular score on the test which you must pass to be selected. This score has been developed based on prior testing and is related to an established minimally acceptable level of coding performance. We have found that people who reach this cutoff perform the task significantly better than those who do not. Therefore, it is in our best interests to use this test and this cutoff to select people to perform the coding task.

Verbal explanation. As was explained in the information sheet provided prior to the test, the General Personnel Test is a nationally recognized test that is useful for selecting people to perform clerical tasks. The abilities assessed by the test are similar to those needed to perform the actual coding task, and further, this test has been demonstrated to be a good predictor of performance in the coding task. Regardless of how you do on the test, we hope that you can at least understand why we used this selection test to decide who to hire.
Honesty Test

The employment test we are asking you to complete as part of the selection process has been chosen for at least three reasons. First, we need people that can perform the journal coding task quickly and accurately. However, given the confidential and sensitive nature of the materials that you will be handling, we also need people that we can trust to maintain this confidentiality and handle the materials in a responsible manner. These manuscripts and editorial materials are the property of the Journal of Applied Psychology. Many of the editorial materials are not shared with the authors and are sensitive given that a sizable portion of the manuscripts are rejected for publication. The only way that we have access to these materials is by assuring the Journal staff of the confidentiality with which these materials will be handled. The Personnel Selection Inventory that you will complete has been chosen to select people to perform this task. The types of questions that you will be asked to complete in the Personnel Selection Inventory are designed to assess your ability to work responsibly and maintain confidentiality.

Second, prior testing with the Personnel Selection Inventory has indicated that people who do well on this test tend to be better suited to perform the actual coding task. That is, the Personnel Selection Inventory is a good predictor of whether or not a person can perform the coding task responsibly. Finally, the Personnel Selection Inventory is a nationally recognized test that is used to select people for a variety of jobs. Considerable prior research with this test has consistently demonstrated that it is a good predictor of responsibility on the job.

We will be using your performance on the Personnel Selection Inventory to decide whether or not to hire you to perform the task. More specifically, we have a particular score on the test which you must pass to be selected. This score has been developed based on prior testing and is related to an established minimally acceptable level of job responsibility. We have found that people who reach this cutoff are more suited to perform this task than those who do not. Therefore, it is in our best interests to use this test and this cutoff to select people to perform the coding task.

Verbal explanation. As was explained in the information sheet provided prior to the test, the Personnel Selection Inventory is a nationally recognized test for selecting people that can handle a job with responsibility. In pilot testing this test was demonstrated to be a good predictor, and further, this test has been used to select people for a variety of jobs that require varying degrees of responsibility. Regardless of how you did on the test, we hope that you can at least understand why we used this selection test to decide who to hire.
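Each explanation above rests on a criterion-related validity claim: that scores on the selection test correlate with performance on the coding task. A minimal sketch of how such a claim is checked, with hypothetical arrays (the numbers and the scipy-based approach are illustrative, not the dissertation's analysis):

    from scipy.stats import pearsonr

    test_scores = [22, 31, 27, 35, 19, 29, 24, 33]      # hypothetical scores
    coding_quality = [54, 70, 61, 78, 50, 66, 58, 74]   # hypothetical criterion

    # The validity coefficient is the Pearson correlation between the
    # predictor (test score) and the criterion (coding performance).
    r, p = pearsonr(test_scores, coding_quality)
    print(f"validity coefficient r = {r:.2f}, p = {p:.3f}")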
APPENDIX F

Feedback and Debriefing

I would like to take a little time to talk to you about a study in which many of you participated. If those of you that participated recall, I had you complete an employment selection test to determine whether or not you would be hired to participate in a journal coding project. Approximately half of you were selected to perform the task and the other half were not. Those that performed the task coded journal manuscripts and editorial comments. It was explained that the results of this coding would be used for a study of the journal review process. Indeed, this was the purpose of coding the manuscripts. We are currently working on organizing the data and performing the analysis for this study.

However, I must apologize and tell you that the means by which you were selected were not as I originally explained. You were given one of three different selection tests: a journal coding test, which represented an actual sample of the work you would do as a coder; a clerical aptitude test, which contained a number of coding and number checking subsections; or an integrity test, which assessed how likely you were to be responsible or honest on the task. These tests were chosen to represent different degrees of job relatedness with regard to the coding task. That is, the journal coding test was highly job related and the integrity test was low in terms of job relatedness.

Another aspect of the study that was not as it seemed was the way in which we made the hiring decision. We did not make the hiring decision based on your test scores; instead, we made the decision on a random basis, with half of you randomly being chosen to perform the coding task. In fact, we never scored your employment test and have no idea about how well you may have performed on such a test. Let me emphasize again, the hiring decision was made on a random basis and did not reflect your ability in any way.

You may be wondering now why we were not honest with you to begin with. Let me tell you about what I was studying in this research, and then it should become clear why I could not tell you right from the start how you were being hired. One issue in industrial psychology that has not received much attention is the impact that selection procedures have on job applicants. I emphasize in my line of research that we need to know more about how applicants react to different selection procedures. In order to study this in a systematic way, I decided to develop a hiring simulation in which people take different selection tests for a job and then either get hired or not hired. I then had you fill out a number of questionnaire items regarding your reactions to the selection process. I am interested in how both the type of test you received and the nature of the hiring decision (either hire or reject) affected your reactions. I wanted to make the hiring decision on a random basis because I did not want any individual differences to provide an alternate explanation for my findings. I think you can see why I could not tell you any of this earlier, because your reactions would likely have been quite different if you did not think the test was being used to select you or if you knew that different people were receiving different tests.

I believe research of this nature is necessary if we really want to understand how applicants react to hiring procedures. Later research that I conduct will likely use a survey method to find out people's reactions in an actual hiring situation. I hope you can see the benefits of this type of research in that these issues may one day help shape the types of selection procedures you experience when applying for jobs.

Does anyone have any questions about why I did what I did and why I did not explain this to you earlier? Does anyone have any concerns or comments about this research? If anyone would like to discuss any of this privately with me, my office is 22 Baker and my phone is 355-2171. I thank you again for your participation and apologize for not being honest with you initially.
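As the debriefing explains, hiring was determined by random assignment rather than by test score, so that individual differences could not offer an alternate explanation for the findings. A minimal sketch of that assignment procedure (the function and variable names are illustrative only, not the study's actual code):

    import random

    def assign_hiring(participant_ids, seed=None):
        """Randomly 'hire' half of the participants, independent of any
        test performance, as described in the debriefing."""
        rng = random.Random(seed)
        ids = list(participant_ids)
        rng.shuffle(ids)
        hired = set(ids[: len(ids) // 2])
        return {pid: pid in hired for pid in ids}

    decisions = assign_hiring(range(260), seed=42)
    print(sum(decisions.values()))  # half of the participants 'hired'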
PLEASE NOTE: Copyrighted materials in this document have not been filmed at the request of the author. They are available for consultation, however, in the author's university library. Pages 274-286. University Microfilms International.

APPENDIX G

Dependent Measures

Test Motivation

Please indicate the extent to which you agree with each of the statements listed below. Use the following five-point scale and then fill in the number corresponding to your answer on the computer scanning sheet. For example, if you "Strongly Agree" with a statement, then fill in the "5" on the scanning sheet in the answer space that corresponds to the statement number. Or, if you "Neither Agree nor Disagree," write a "3" on the scanning sheet.

1 = Strongly Disagree
2 = Disagree
3 = Neither Agree nor Disagree
4 = Agree
5 = Strongly Agree

Please begin by writing your student ID number on the scanning sheet and fill in the corresponding circles. Please provide a rating of agreement for each of the statements.

1. Doing well on this test was important to me.
2. I wanted to do well on this test.
3. I tried my best on this test.
4. I tried to do the very best I could on this test.
5. While taking this test, I concentrated and tried to do well.
6. I want to be among the top scorers on this test.
7. I pushed myself to work hard on this test.
8. I was extremely motivated to do well on this test.
9. I just didn't care how I did on this test.
10. I didn't put much effort into this test.
11. It was hard to keep my mind on this test.
12. I found myself losing interest and not paying attention to the test.
13. During the test session, I was bored.
14. I get distracted when taking tests of this type.
15. This test was a good reflection of how a person could do on the job task.
16. Tests are a good way of selecting people into jobs.
17. This kind of test or tests should be eliminated.
18. I don't believe that tests are valid.
19. I probably didn't do as well as most of the other people who took this test.
20. I am not good at taking tests.
21. During the testing, I often thought about how poorly I was doing.
22. I usually get very anxious about taking tests.
23. I usually do pretty well on tests.
24. I expect to be among the people who score really well on this test.
25. My test scores don't usually reflect my true abilities.
26. I very much dislike taking tests of this type.
27. During the test, I found myself thinking about the consequences of failing.
28. During the test, I got so nervous I couldn't do as well as I should have.
29. This test was too easy for me.
30. I found this test too simple.
31. I found this test interesting and challenging.
32. I felt frustrated because many of the test questions were too difficult.
33. I became fatigued and tired during the testing.
34. The questions on this test were ambiguous and unclear.
35. I have not been feeling well lately and this affected my performance on the test.
36. While taking the test, I was preoccupied with how much time I had left.
37. I felt a lot of time pressure when taking this test.
38. Once I undertake a task, I usually push myself to my limits.
39. I try to do well in everything I undertake.
40. In general, I like to work just hard enough to get by.

BACKGROUND INFORMATION

The following questions are purely for research purposes. Additionally, your name will never be associated with any responses you make. Your student number is only being collected so that we can match up your responses on different questionnaire forms and answer sheets. Please complete the following questions.

Student ID number:
Age:
Sex:
Major:

PLEASE TURN THE PAGE AND ANSWER ALL THE QUESTIONS IN THE SPACES PROVIDED.
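Scales like the test motivation measure above mix positively and negatively worded items, so negatively worded items must be reverse-keyed before averaging. A minimal scoring sketch follows; the set of reverse-keyed items is my reading of the item wording (e.g., items 9-14), not a keying reported in the dissertation:

    # The reverse-keyed set is an assumption based on item wording; the
    # dissertation does not list the keying here.
    REVERSE_KEYED = {9, 10, 11, 12, 13, 14}

    def score_scale(responses, reverse_keyed=REVERSE_KEYED):
        """responses: dict mapping item number to a 1-5 rating.
        Returns the mean item score after reverse-keying."""
        keyed = [6 - r if item in reverse_keyed else r
                 for item, r in responses.items()]
        return sum(keyed) / len(keyed)

    example = {1: 5, 2: 4, 9: 2, 10: 1}   # hypothetical responses
    print(score_scale(example))            # 4.5 on the 1-5 scale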
HOW DO I FEEL RIGHT NOW?

Please read each of the following descriptive terms and evaluate the extent to which it represents how you feel right now. Use the following five-point scale to indicate the response that best represents your personal reaction in the space to the left of the item. Work quickly and do not dwell on any single item.

1 = Not at all
2 = A little bit
3 = Somewhat
4 = Quite a bit
5 = Extremely

1. boastful
2. inadequate
3. effective
4. shame
5. egotistic
6. incompetent
7. pride
8. smart
9. self-centered
10. worthless
11. resourceful
12. conceited
13. stupid
14. efficient
15. competent
16. confident

WHAT KIND OF PERSON AM I?

Please indicate your level of agreement with the following items. Use the following five-point scale and fill in the number to the left of the statement that most clearly represents your personal reaction. Please be frank and honest.

1 = Strongly Disagree
2 = Disagree
3 = Neither Agree nor Disagree
4 = Agree
5 = Strongly Agree

1. I feel that I'm a person of worth, at least on an equal basis with others.
2. I feel that I have a number of good qualities.
3. All in all, I am inclined to feel that I am a failure.
4. I am able to do things as well as most other people.
5. I feel I do not have much to be proud of.
6. I take a positive attitude toward myself.
7. On the whole, I am satisfied with myself.
8. I wish I could have more respect for myself.
9. I certainly feel useless at times.
10. At times I think I am no good at all.

HOW WELL COULD I PERFORM THE JOURNAL CODING TASK?

Based on what you know about the coding task you applied to do, how well do you think you could perform the task? Specifically, consider the following statements and indicate your responses using a two part answer. First indicate (yes or no) whether you think you are capable of performing at an indicated level. Second, if you think you are capable of performing at an indicated level, indicate your confidence in that judgement. When making the confidence ratings, please use the following scale and mark your responses on this sheet:

1 = Not at all confident
2 = A little bit confident
3 = Somewhat confident
4 = A good bit confident
5 = Extremely confident

1. In terms of the speed at which I could perform the coding task, I believe I am capable of performing at or above the level of...

Yes/No   If yes, then confidence (1 - 5)
...a below average performer
...an average performer
...an above average performer
...the very fastest performer

2. In terms of the accuracy with which I could perform the coding task, I believe I am capable of performing at or above the level of...

Yes/No   If yes, then confidence (1 - 5)
...a below average performer
...an average performer
...an above average performer
...the most accurate performer
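The two-part format above (a yes/no capability judgment at each performance level plus a confidence rating) is conventionally scored as self-efficacy magnitude (how many levels are endorsed) and strength (summed confidence across endorsed levels), in the Bandura tradition. The dissertation does not spell out its scoring here, so the following sketch is an assumption:

    def score_self_efficacy(levels):
        """levels: list of (capable, confidence) tuples ordered from the
        easiest to the hardest performance level, where capable is a bool
        and confidence is a 1-5 rating (ignored when capable is False)."""
        magnitude = sum(1 for capable, _ in levels if capable)
        strength = sum(conf for capable, conf in levels if capable)
        return magnitude, strength

    # Hypothetical respondent who endorses the first three levels only.
    speed = [(True, 5), (True, 4), (True, 2), (False, 0)]
    print(score_self_efficacy(speed))  # (3, 11)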
Fairness Perceptions

Please indicate the extent to which you agree with each of the statements listed below. Use the following five-point scale and then fill in the number corresponding to your answer on the computer scanning sheet.

1 = Strongly Disagree
2 = Disagree
3 = Neither Agree nor Disagree
4 = Agree
5 = Strongly Agree

Please begin by writing your student ID number on the scanning sheet and fill in the corresponding circles. Please provide a rating of agreement for each of the statements.

1. I think that people who perform well on the selection procedures are more likely to perform well on the job.
2. The people hiring were straightforward and sincere about the job and what it entailed.
3. I am satisfied with the amount of feedback I received during the selection process.
4. The selection process was like an interrogation - the people were cold and rigid.
5. The selection process was standardized and systematically administered.
6. There did not appear to be any bias or discrimination on the basis of sex or anything.
7. The people hiring should have been more honest when telling me about the position and my chances of being hired.
8. I can see a connection between the selection procedures and performance on the job.
9. The selection process did not capture the extent to which I am a hard worker.
10. Instructions for the test were clearly explained to me (either verbally or in writing).
11. I am satisfied with the communication that occurred during the selection process.
12. Lack of interactive or two-way communication was a problem during the selection process.
13. There appeared to be no consistency in the methods used to evaluate me.
14. I feel the selection process cut down on favoritism that can sometimes be a problem when people are selected for jobs.
15. I think that my hiring decision was affected by special treatment offered to some people.
16. I think some people would distort their responses on this test to try to make themselves look better.
17. The selection process got right down to what I could and could not do.
18. The questions asked of me during the selection process were neither relevant nor important for the job.
19. The type of questions asked during the selection process were directly related to the job.
20. Some of the questions asked during the selection process were intrusive of my privacy.
21. I did not receive the evaluation I feel I deserved.
22. I was given a reasonable explanation for why the specific selection procedures were used to hire people.
23. I had control over the factors that influenced my performance during the selection process.
24. During the selection process, I feel I was treated more like a number than a human being.
25. Given my past experience looking for a job, I feel I received an appropriate evaluation.
26. I think that an effort was made to consider individual needs when making hiring decisions.
27. Everyone was screened fairly and given the same opportunity.
28. I received information on the hiring decision (accept/reject) in a timely manner.
29. I am satisfied with how I was informed of the hiring decision (accept/reject).
30. Given my ability and experience, I was not evaluated correctly by this selection process.
31. I was asked questions that I feel were inappropriate or discriminatory.
32. I don't think faking answers is possible during the selection process.
33. The people hiring let me know about the selection process in advance.
34. I feel the people hiring lied about the selection process and the way they chose people for the job.
35. The people hiring did not make enough of an effort to hire disadvantaged people or people that are often discriminated against.
36. The selection process was difficult, much more than I thought it would be.
37. During the selection process, I never got the chance to prove myself.
38. Many of the questions asked of me during the selection process were difficult to answer.
39. I received an adequate explanation of how the selection tests would be scored.
40. I was offered an explanation of the types of factors that affected the hiring decision (accept/reject).
41. I was provided informative feedback on my performance.
42. I was treated honestly and openly during the selection process.
43. I was not offered sufficient opportunity to ask questions.
44. Compared to the ability of others taking these selection procedures, I don't think I received an appropriate evaluation.
45. It would be easy for people to be dishonest when answering questions and make themselves look good.
46. During the selection process, the people hiring asked standard questions that were the same for everyone.
47. During the selection process, the people made the difference - they were friendly and made me feel at ease.
48. In the hiring decisions, the people hiring seemed to favor those individuals who had special economic needs.
49. Overall, the selection process was pretty easy.
50. I thought that the selection process was too difficult to select applicants for this job.
51. The results of the selection process were consistent with how I view myself.
52. The outcome of the selection process was not a good reflection of my job capabilities.
53. The selection process was directly relevant to the job because it involved the same things that are required on the job.
54. I was told how test scores would be used to make a hiring decision (accept/reject).
55. I don't think that the selection procedures used can predict whether or not I will be successful on the job.
56. They asked me about their selection process and how it could be improved.
57. Personal motives or biases appeared to influence the selection process.
58. It was obvious how you should respond to some of the questions if you wanted the job.
59. I was treated with warmth, sincerity, and thoughtfulness during the selection process.
60. I thought you could beat the test if you were smart and gave the answers they were looking for.
61. As far as I know, the selection tests were administered the same way to all applicants.
62. I did not know what to expect during the selection process.
63. I was given adequate opportunity to demonstrate my skills and abilities.
64. It was made clear what was expected of me from the onset of the selection process.
65. I did not do as well as I expected to during the selection process.
66. People were candid and frank with me during the selection process.
67. People with greater needs were given special consideration for employment.
68. The types of questions asked during the selection test were the same as would be found in the journal coding task.
69. The test was not directly applicable to the journal coding task.
70. I was told how scores on the selection test would be used to hire people for the journal coding task.
71. I was given a reasonable explanation for why the selection test was used to select people for the journal coding task.
72. The selection test asked questions that were specifically related to the journal coding task.
73. The selection test was representative of the journal coding task.
74. I was told how the selection test is related to performance on the journal coding task.
75. The selection test was directly relevant to the journal coding task because it involved the same things that the coding task involves.
76. I received no explanation of why I was given the selection test.
77. The selection test was only indirectly related to the journal coding task.
78. I received information describing how the selection test would result in the selection of the most capable people for the journal coding task.
79. The questions asked during the selection process were not at all similar to the journal coding task.

Please Continue on the Next Page

OVERALL SELECTION PROCEDURE FAIRNESS

The following questions refer to your overall evaluation of the fairness of the procedures used to select people for the journal coding task. Without considering whether or not you received the job, we would like your reaction to the selection process. Please use the same five-point scale indicating your agreement or disagreement with each item.
80. Whether or not I got the job, I feel the selection process was fair.
81. Whether or not I got the job, the procedures used to select people for this job are fair.
82. Whether or not I got the job, I am satisfied with the selection process.
83. Overall, I feel dissatisfied with the way people were selected for the journal coding task.

OVERALL HIRING OUTCOME FAIRNESS

Now we would like you to only consider the outcome of the selection process. Without considering the procedures used to select you, please indicate your reaction to the decision to hire or not hire you for the journal coding task.

84. Overall, I feel the results of the selection process were unfair.
85. I feel the hiring decision (accept/reject) was fair.
86. Overall, I am satisfied with the hiring decision.
87. I am dissatisfied with the company's decision about whether or not to hire me.

OVERALL REACTIONS TOWARD THE PROJECT

The following questions refer to your overall reaction toward the project in general. Please answer the following questions with the same five-point scale.

88. If I hear about other projects like this, I would be interested in applying for them.
89. If I knew in advance what this project would entail, I would not have applied for it.
90. I would recommend this project to my friends.
91. I think my friends would be interested in applying for this project.

Satisfaction With The Coding Task

Please indicate the extent to which each of the descriptive terms listed below is representative of your experience with this coding task. Use the following five-point scale and fill in the number on the computer scanning sheet that best represents your personal reaction.

1 = Not at all
2 = A little bit
3 = Somewhat
4 = Quite a bit
5 = Extremely

Please begin by writing your student ID number on the scanning sheet and fill in the corresponding circles.

1. Pleasant
2. Bad
3. Ideal
4. Waste of time
5. Good
6. Undesirable
7. Worthwhile
8. Worse than most
9. Acceptable
10. Superior
11. Better than most
12. Disagreeable
13. Makes me content
14. Inadequate
15. Excellent
16. Rotten
17. Enjoyable
18. Poor

APPENDIX H

Results of Factor Analysis on Perceived Fairness and Application/Recommendation Intentions Measures

Prerotation Eigenvalues

FACTOR   EIGENVALUE   PCT OF VAR   CUM PCT
1        5.30473      44.2         44.2
2        2.32863      19.4         63.6
3         .85158       7.1         70.7
4         .69603       5.8         76.5
5         .58024       4.8         81.3
6         .50629       4.2         85.6
7         .40869       3.4         89.0
8         .32649       2.7         91.7
9         .29589       2.5         94.2
10        .27256       2.3         96.4
11        .23047       1.9         98.3
12        .19841       1.7         100.0

VARIABLE   COMMUNALITY
OPF1       .62841
OPF2       .58684
OPF3       .64364
OPF4       .33915
ODF1       .62859
ODF2       .68973
ODF3       .61600
ODF4       .37368
REC1       .52248
REC2       .35825
REC3       .68114
REC4       .60463

Rotated Factor Matrix

           FACTOR 1   FACTOR 2   FACTOR 3
OPF2       .75913     .18105     .26587
OPF3       .75500     .16546     .35541
OPF1       .74044     .16721     .34006
OPF4       .47706     .06385     .33692
REC3       .14811     .89139     .01935
REC4       .09458     .77866     .01399
REC1       .04636     .75731     .08764
REC2       .14036     .55015     .20334
ODF3       .32225     .13146     .78269
ODF2       .52348     .03631     .69045
ODF1       .52114     .05321     .63644
ODF4       .25750     .13154     .56890
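A solution of the kind tabled above (three retained factors, an orthogonal rotation, item communalities, and prerotation eigenvalues) can be reproduced with modern tools. A minimal sketch, assuming a varimax rotation and using the factor_analyzer package; the data frame, its column names, and the package choice are illustrative stand-ins rather than the dissertation's original analysis:

    import pandas as pd
    from factor_analyzer import FactorAnalyzer

    def three_factor_solution(items: pd.DataFrame):
        """items: respondents x 12 columns (OPF1-OPF4, ODF1-ODF4, REC1-REC4)."""
        fa = FactorAnalyzer(n_factors=3, rotation="varimax")
        fa.fit(items)
        eigenvalues, _ = fa.get_eigenvalues()   # prerotation eigenvalues
        communalities = fa.get_communalities()  # one value per item
        loadings = pd.DataFrame(
            fa.loadings_,                        # rotated factor matrix
            index=items.columns,
            columns=["Factor 1", "Factor 2", "Factor 3"],
        )
        return eigenvalues, communalities, loadings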