Michigan State University

This is to certify that the thesis entitled APPLICANT REACTIONS TO NOVEL SELECTION TOOLS presented by Darin Wiechmann has been accepted towards fulfillment of the requirements for the M.A. degree in Psychology.

MSU is an Affirmative Action/Equal Opportunity Institution.

APPLICANT REACTIONS TO NOVEL SELECTION TOOLS

By

Darin Wiechmann

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

MASTER OF ARTS

Department of Industrial/Organizational Psychology

2000

ABSTRACT

APPLICANT REACTIONS TO NOVEL SELECTION TOOLS

By Darin Wiechmann

Organizations are increasingly using novel tools (e.g., multimedia, web-based, computer adaptive testing) in designing selection systems. While there is great interest in using these tools, there is little research to indicate how applicants react to them. As applicant reactions have been shown to affect a number of variables important to organizations (e.g., test fairness, job acceptance, productivity, recommendation intentions), it is important to understand what determines these reactions. A framework was developed to highlight a number of antecedents to, types of, and consequences of reactions to novel testing. The current study used a 2 (mode of presentation: paper-and-pencil vs. computerized) X 2 (perceived technical level of the job: high technical job vs. low technical job) X 2 (selection decision: rejected vs. selected) between-subjects design. While test-takers' post-test perceptions did not significantly differ as a result of mode of administration, factors such as computer anxiety and experience were shown to be important in determining success in the selection process. Results also show significant relationships between post-feedback reactions and test-takers' intentions. The discussion highlights implications for implementing novel selection tools in an applicant setting.

TABLE OF CONTENTS

LIST OF TABLES ..................................................................................................... vi
LIST OF FIGURES .................................................................................................... vii
INTRODUCTION ...................................................................................................... 1
Benefits of Novel Technologies ..................................................................... 3
Costs of Novel Technologies ......................................................................... 6
Reactions to Novel Technologies .................................................................. 7
Limitations of Previous Research .................................................................. 9
Reactions Framework .................................................................................... 11
Individual Differences .................................................................................... 15
Mode of Administration ................................................................................. 20
Perceived Technological Level of the Job (Job Type) ................................... 24
Decision Outcome .......................................................................................... 27
Behavioral Intentions ..................................................................................... 30
METHOD .................................................................................................................. 31
Participants ..................................................................................................... 31
Design ............................................................................................................ 32
Pilot Studies ................................................................................................... 32
Procedure ....................................................................................................... 33
Measures ........................................................................................................ 37
RESULTS .................................................................................................................. 42
Analysis Plan .................................................................................................. 42
Manipulation Check ....................................................................................... 43
Motivation Check ........................................................................................... 44
Descriptive Statistics ...................................................................................... 44
Fairness Perceptions ....................................................................................... 51
Additional Analyses ....................................................................................... 62
DISCUSSION ............................................................................................................ 76
Mode of Administration ................................................................................. 77
Perceived Technology Level of the Job (Job Type) ....................................... 79
Selection Outcome ......................................................................................... 80
Individual Differences and Attitudes ............................................................. 82
Individual Differences and Performance ....................................................... 85
Attitudes and Intentions ................................................................................. 86
Limitations ..................................................................................................... 86
Future Directions ........................................................................................... 88
Suggestions for Implementation .................................................................... 90
REFERENCES .......................................................................................................... 92
APPENDIX A: ADMINISTRATOR INSTRUCTIONS FOR PILOT STUDY ........ 110
APPENDIX B: INFORMED CONSENT FOR PILOT STUDY ............................... 115
APPENDIX C: DEBRIEFING PILOT STUDY ........................................................ 116
APPENDIX D: TECHNOLOGY SURVEY .............................................................. 117
APPENDIX E: TEST ADMINISTRATOR INSTRUCTIONS FOR MAIN STUDY .................................................................................................................................... 123
APPENDIX F: INFORMED CONSENT FOR MAIN STUDY ................................ 131
APPENDIX G: PRE-TEST QUESTIONNAIRE ....................................................... 132
APPENDIX H: POST-TEST QUESTIONNAIRE ..................................................... 138
APPENDIX I: POST-DECISION QUESTIONNAIRE - SELECT CONDITION .... 141
APPENDIX J: POST-DECISION QUESTIONNAIRE - REJECT CONDITION .... 147
APPENDIX K: DEBRIEFING FORM FOR MAIN STUDY ................................... 153
APPENDIX L: COMPLETE LIST OF MEASURES AND SOURCES ................... 154

LIST OF TABLES

Table 1. Technology Ratings Across Pilot Job Titles ........................................................... 34
Table 2. Descriptive Statistics and Intercorrelations of the Pre-Test Individual Difference Measures ................................................................................................................................ 45
Table 3. Descriptive Statistics and Intercorrelations of the Post-Test, Pre-Feedback Measures ................................................................................................................................ 46
Table 4. Descriptive Statistics and Intercorrelations of the Post-Feedback Measures .......... 47
Table 5. General Individual Difference Measures’ Correlations with Post-Test Reactions .. 52
Table 6. Computer-Related Individual Difference Measures’ Correlations with Post-Test Reactions ................................................................................................................................ 53
Table 7. Significance Results of Post-Test Reaction Measures by Mode of Administration ....................................................................................................................... 55
Table 8. Moderated Regression of Perceived Technology of Job and Mode of Administration ....................................................................................................................... 56
Table 9. Significance Results of Post-Feedback Reactions by Selection Outcome .............. 57
Table 10. Moderated Regression of Selection Outcome and Mode of Administration ......... 59
Table 11. Intercorrelations of the Intention Measures and Post-Feedback Reactions ........... 63
Table 12. Moderated Regression of Selection Outcome and Process Fairness ..................... 65
Table 13. Moderated Regression of Outcome Fairness and Process Fairness ....................... 68
Table 14. Regression Analyses of Selection Outcome and Test Performance for the Paper-and-Pencil Condition ................................................................................................... 71
Table 15. Regression Analyses of Selection Outcome and Test Performance for the Computer Condition .............................................................................................................. 73

LIST OF FIGURES

Figure 1. Applicant Reactions Framework ............................................................................ 13
Figure 2. Selection Decision and Mode of Administration Interaction ................................ 61
Figure 3. Process Fairness and Selection Decision Interaction ............................................. 67
Figure 4. Process Fairness and Outcome Fairness Interaction .............................................. 67

INTRODUCTION

“A century of progress: Science discovers, technology applies, man conforms” (Volti, as cited in Spacapan & Oskamp, 1990). Although it is now almost 70 years old, the theme of the 1933 Chicago World’s Fair portrays one way of describing the current role of technology in personnel selection. With the advent of cheap personal computers, networking capabilities, new video equipment, software, and the like, companies are looking for new ways to apply these technologies. As far back as WWII, researchers began using the new technologies that were available to them. Gibson’s (as cited in McHenry & Schmitt, 1994) and Carter’s (cited in Dubois, 1970) motion picture tests were some of the first tests designed in a technologically new medium. While these tools did not gain popularity early on, technology eventually paved a cost-effective way for companies to design new selection tools. Leading companies such as AT&T, Procter & Gamble, IBM, and Allstate (cited in McHenry & Schmitt, 1994) and Texas Instruments (Texas Instruments, 1999) are following the lead of these early researchers by implementing such tools as multimedia, computer adaptive, web-based, and video testing. The attractiveness of using these technologies over paper-and-pencil tests may come from the increased standardization, cost effectiveness, positive image of the organization, and realistic job preview they are able to provide.

Before implementing these types of technologies, researchers and practitioners have been concerned with their ability to predict performance at a level comparable to traditional paper-and-pencil versions. With good justification, issues of psychometric equivalence have received a great deal of discussion in much of the literature to date (e.g., Bartram & Bayliss, 1984; Hofer & Green, 1985; Moreland, 1985; Skinner & Pakula, 1986; Burke & Normand, 1987; Mazzeo & Harvey, 1988; Mead & Drasgow, 1993). Research examining the equivalence of different versions has found that while they are often highly correlated, differences may occur due to the type of test used (e.g., Bartram & Bayliss, 1984; Greaud & Green, 1986; Mazzeo & Harvey, 1988; Henly, Klebe, McBride, & Cudek, 1989; Mead & Drasgow, 1993).

In addition to psychometric issues, researchers (e.g., Angle, Ellinwood, Hay, Johnson, & Hay, 1977; Lucas, Mullins, Luna, & McInroy, 1977; Skinner & Allen, 1983; Slack & Slack, 1977; Slack & Van Cura, 1968) have also been concerned with reactions to automated testing. Much of this research, however, has been conducted in clinical settings. To date, far less research has examined reaction issues in the area of personnel selection (Burke, Normand, & Raju, 1987; Daum, 1994; Landis, Davison, & Maraist, 1998). Gilliland (1993) proposed a number of important variables that applicants’ perceptions may affect, such as self-efficacy, future job-search intentions, job acceptance decisions, litigation intentions, performance, and job satisfaction.
Researchers have found support for a number of these variables, such as self-efficacy (e.g., Gilliland, 1994; Horvath, Ryan, & Stierwalt, in press), recommendation intentions (Ployhart & Ryan, 1997), organizational attractiveness (Bauer, Maertz, Dolen, & Campion, 1998), job satisfaction (Farmer, Beehr, & Love, 1998), and test performance (Chan, Schmitt, DeShon, Clause, & Delbridge, 1997).

The goal of the current paper is to better understand applicant reactions to novel testing procedures. While computers are not novel to most people today, the state of computer technology is allowing researchers to design unique selection tools. Many people may have used a computer for basic word processing, but very few have probably taken a computer adaptive test (CAT) or a multimedia test, both of which are being used by some organizations. Thus, even though technology and novelty are not synonymous, they are closely related at this point in personnel selection.

In the following sections, I will discuss the various issues surrounding the use of novel technologies. First, I will review the benefits of novel technologies that have been put forth in the literature. As with any new technology, it is also important to understand the potential costs of using them; while novel technologies present a number of advantages over traditional paper-and-pencil tests, they also have drawbacks of which both practitioners and researchers need to be aware. After discussing these issues, I will review the research that has examined people’s reactions to novel technologies and then discuss some of the limitations in the existing literature on novel testing and how the current paper addresses those limitations. I will then present a framework that attempts to clarify the antecedents and consequences of applicant reactions to novel testing. Finally, I will present a study to assess components of this framework.

Benefits of Novel Technologies

Since the initial use of computers in psychological testing, many researchers have pointed out the potential benefits of using new technologies (e.g., Bartram & Bayliss, 1984; Erdman, Klein, & Greist, 1985; Kleinmuntz & McLean, 1968; Smith, 1963; Tomkins & Messick, 1963). Kleinmuntz and McLean (1968) pointed out that the three major advantages of the computer in psychiatric interviewing are flexibility, objectivity, and speed. Researchers using computers would benefit from these factors in the administration or scoring of tests, or both. Thus, the early attraction of using computers was to take the fallible human out of the process.

Another avenue for using computerized testing was to reduce the subgroup differences commonly found in the literature. Research in personnel selection has consistently found subgroup differences for cognitive ability tests (Hunter & Hunter, 1984; Jensen, 1980), but also for other types of tests such as situational judgment tests (Chan & Schmitt, 1997) and biodata instruments (Schmitt & Pulakos, 1998). Cognitive ability tests consistently show that Caucasian test-takers score about one standard deviation higher than African-American test-takers. With legal issues always a concern to organizations, subgroup differences on a selection test will likely result in adverse impact, which may be legally challenged. Researchers have looked to different media as potential avenues to reduce subgroup differences and have found some promising results (Johnson & Mihal, 1973; Chan & Schmitt, 1997).
Johnson and Mihal (1973) theorized that computer testing might be able to reduce the subgroup differences found with paper-and-pencil cognitive ability tests. They gave black and white students a general cognitive ability test with quantitative and verbal sub-tests. They found that while white students did not differ between versions, black students did better on the computer version. In addition, there were subgroup differences on the paper-and-pencil version of the verbal sub-test, but not the computerized version. This research shows that subgroups may perform differently when tested on the computer versus with a paper-and-pencil test. Chan and Schmitt (1997) provide further evidence for changing the relative standing of Afi'ican-Americans and Caucasians on a situational judgment test. They found that subgroup differences in test performance and face validity perceptions were reduced when participants were given a video-based version of a situational judgment test compared to a paper-and-pencil version. They also found that this interaction of race and method could be explained by differences in reading comprehension and face validity reactions. The conclusion was that the video-based version may be more concrete and realistic to applicants, which in turn reduces the adverse impact that is normally attributed to racial bias in less concrete and realistic tests. Researchers have also explored the potential of novel tests to measure new constructs that may not have been possible in the past (e.g., Fleishman, 1988). Fleishman (1988) believes that not only will we be able to better measure such existing constructs as perceptual speed, short-term memory, and spatial visualization, but we will also have the capability to measure constructs not possible before such as divided attention, concentration, and workload conditions. Other advantages that companies may gain from using novel technologies may be a more favorable impression of the organization (McHenry & Schmitt, 1994), a more realistic job preview (McHenry & Schmitt, 1994; Shetland & Alliger, 1999), and better selection and recruiting efforts (Stanton, 1999). With new technologies burgeoning at an incredible rate, more organizations will be able to witness these advantages first-hand and will look for other ways to extend their capabilities. Before implementing novel technologies in an organization, it is also important to understand the potential negatives that surround them. The next section will review the potential costs of using these new technologies. QM of Novel Technolog'eLs While these technologies are attractive alternatives to traditional tests, many researchers (e.g., Bartram & Bayliss, 1984; Skinner & Pakula, 1986) have also been concerned with the potential costs of using them. Erdman, Klein, and Greist (1985) discuss that many professionals are apprehensive about using computerized interviews for their patients because they are impersonal and inhumane. With the proliferation of web-based selection systems, Stanton (1999) highlights the increased problem of test security. Still another concern of new technologies is their tremendous start-up and updating costs (McHenry & Schmitt, 1994). Thus, organizations interested in using new technologies must be aware of these and other potential costs. A great deal of research has been conducted to determine the equivalence of traditional paper-and-pencil tests to their more high-tech versions. 
Mead and Drasgow’s (1993) meta-analysis of paper-and-pencil versus computerized cognitive ability tests provided evidence for their equivalence. They found a .91 overall correlation between the two versions, but found differences between timed power and speeded tests. While timed power tests showed a .97 correlation between versions, speeded tests only showed a .72 correlation. Their meta-analysis points out that researchers should not just assume computer versions are equivalent and need to be aware of the nature of their tests. Other researchers have also compared versions of other commonly used measures in organizations such as personality measures (King & Miles, 1995), attitude surveys (McFarland, Ryan, & Paul, 1998), and interviews (Martin & Nagao, 1989). While there is evidence that different versions correlate strongly, there is also evidence of differences in the way people respond to computerized measures (Evan & Miller, 1969; Lautenschlager & Flaherty, 1990) that may have important effects on test equivalence. For example, most research conducted using computers in an interview setting has shown that patients provide more honest answers to personal, intrusive questions as compared to traditional face-to-face interviews (Evan & Miller, 1969; Lucas et a1., 1977; Martin & Nagao, 1989; O’Brien & Dugdale, 1978). Other research has found that people do not respond differently to computers (Lautenschlager & Flaherty, 1990; Skinner & Allen, 1983). As computerized testing is becoming more commonplace, the issues of test equivalence will be increasingly important to resolve. In addition to the issues of test equivalence, researchers have also been concerned with people’s reactions to these technologies. The next section discusses research that has examined the types of reactions people have to novel technologies. Reactions to Novel Technolog'g Since the early work with computerized testing, researchers have paid close attention to people’s reactions to the new assessment tools. Not only were computers being used in psychological testing, they were also becoming more and more a part of everyone’s lives. Thus, it was important to realize how people react to technology in general. As Spacapan and Oskamp (1990) point out, “In our culture, it appears that most people welcome the promise of new technology as an asset in their lives.” A number of research reviews on computerized testing have highlighted the importance of understanding reactions to these types of tests (Bartram & Bayliss, 1984; Burke & Normand, 1987; McHenry & Schmitt, 1994). These reviews discuss the various factors that are somewhat unique to computerized testing such as type of computer networks, examinees’ computer experience, human factors, interpersonal treatment by the machine, and impression management. These types of issues prompted the American Psychological Association (APA) to publish a set of guidelines for computer-based tests and interpretations that addressed concerns raised by practitioners and researchers (APA, 1986). The Standards highlighted a number of issues important to designing, administering, and interpreting results from computer-based tests. While not radically different from APA’s other guidelines, they do highlight the unique nature of these types of tests. 
Research on reactions to computerized testing has found that test-takers exposed to novel testing situations consistently responded favorably to them (Barbera, Ryan, Desmarais, & Dyer, 1995; Frank, 1993; Schmidt, Urry, & Gugel, 1978; Slack & Van Cura, 1968). In many cases, test-takers even preferred the high-tech version compared to its paper-and-pencil counterpart (Angle et al., 1977; Arvey, Strickland, Drauden, & Martin, 1990; Lucas, 1977; Ogilvie, Trusk, & Blue, 1999). Research supporting test- takers’ preference for novel tests spans from the first use of computers in the 19603 to their wide use today. Thus, it seems test-takers’ preferences are due to factors other than the mere introduction of computers into the society at large. Despite the positive reactions that novel tests normally elicit, research has shown that novel tests may also bring out a fear and anxiety in some people, especially with computers. The term computerphobia highlights not only a buzzword in the workplace (e.g., Faerstein, 1986), but also an area of concern to researchers (e.g., Ametz, 1997; Gardner, Render, Ruth, & Ross, 1985; Gilroy & Desai, 1986; Henderson, Deane, & Ward, 1995; Hill, Smith, & Mann, 1987; Igbaria & Chakrabarti, 1990; Rosen, Sears, & Wei], 1993). Hedl, O’Neal, and Hansen (1973) found that people taking a computerized intelligence test experienced more anxiety and had more negative attitudes than compared to two traditional examiner-presented conditions. Students’ reactions after the experiment hinted at the fact that lack of clarity in the instructions, unfamiliarity with the equipment, and the type of interaction with the system may have been the cause of their negative reactions. Torkzadeh and Angulo (1992) provide a review of the various factors that may contribute to computer anxiety. Gender, trait anxiety, computer knowledge, locus of control, and math anxiety are some of the potential correlates of interest. Thus, test-takers’ reactions to novel testing are dependent not only on factors of the system, but also on the examinees’ experiences and attitudes. Limitations of Previous Research Despite a considerable amount of research on reactions to novel testing, there are a number of limitations in the literature that need to be addressed. First, only a few researchers have attempted to develop a comprehensive framework of reactions to novel testing (Burke et al., 1987; Daum, 1994; Landis et al., 1998). Burke et a1. (1987) attempted to develop a model to explain attitudes towards computer testing, but only looked at a small number of individual differences (i.e., computer experience, education level, word processing experience) and their relation to general attitudes towards computer testing. Landis et al., (1998) attempted to examine different types of reactions to novel tests based on differences in their formats. Similarly, they only looked at a small subset of reactions applicants may have (i.e., fairness, anxiety, and satisfaction), but did not measure any individual differences or outcomes of these reactions. Daum (1994) examined a number of reactions to novel testing (i.e., fairness, ease, anxiety, preference), but did not look at the outcome of these reactions or the individual differences thought to be important. 
The current study is based on a comprehensive fi'amework that examines the individual differences thought to be important to explaining applicant reactions, different types of reactions applicants may have, outcomes of those reactions, and manipulations that may affect relationships of these variables. Second, as was noted earlier, most of the studies that have measured reactions have done so in non-applicant contexts. To date, there is only limited research (Daum, 1994; Martin & Nagao, 1989; Schmidt, Urry, & Gugel, 1978) that directly examines reactions to novel testing in an applicant context. In a study examining the implementation of computerized testing in a medical school, Ogilvie et al. (1999) note the caution involved in implementing computerized testing due to the need to "learn about the technology without jeopardizing students' performance in high stakes examination situations." Smither et a1. (1993) also suggest one practicality of studying applicant reactions is the potential for their indirect impact on test validity and utility. Supporting these concerns, research indicates that test-takers' perceptions influence not only their test performance (e.g., Chan, 1997; Chan & Schmitt, 1997; Chan et al., 1997) but also their subsequent withdrawal from the selection process (Schmit & Ryan, 1997). Thus, organizations seeking to implement novel tests should be aware of the potential reactions that applicants may have and the consequences of these reactions. Another reason for studying applicant reactions is the potential differences that may occur due to the unique nature of the testing situation. In most cases, applicants are in an unfamiliar situation being tested for a job that they would like to have. In addition, 10 they most often are in direct competition with the other applicants for the positions. While this context is not entirely unique from other testing situations, it is unlikely that the factors affecting an applicant's reactions and the consequences of those reactions are similar when compared to other testing Situations. In order to examine applicant reactions to novel tests, the current study will be conducted in a simulated selection setting. Finally, most of the research on reactions to novel testing has used either cognitive ability tests (e.g., Arvey et al., 1990; Burke et al., 1987; Daum, 1994; Landis et al., 1998; Schmidt et al., 1978) or interviewing (e.g., Lucas, 1977; Lucas et al., 1977; Martin & Nagao, 1989; O’Brien & Dugdale, 1978; Skinner & Allen, 1983). There is limited applicant reaction research using other types of novel tests such as multimedia tests (Shotland & Alliger, 1999, Barbera et al., 1995) and video-based tests (Chan & Schmitt, 1997) that go beyond simply transferring a paper-and-pencil test to a computer. Research has shown that people do not perceive various selection tests similarly in terms of such factors as face validity, predictive validity, and interpersonal treatment (Rynes & Connerly, 1993; Smither et. a1, 1993; Steiner & Gilliland, 1996). As organizations are increasingly using a variety of novel tests, it is important for researchers to examine these issues for the types of testing situations applicants have likely not encountered before. The current paper will use a computerized in-basket test as an example of a novel test not widely used in selection testing. Reactions Framework The purpose of the current paper is to examine what factors contribute to explaining applicants’ reactions to novel testing. 
While there are many types of novel testing currently being used, this research will focus on a computerized in-basket test. As little research has been conducted in this area, much of the current research will be exploratory in nature. Thus, a wide variety of potential factors will be examined in order to understand what determines applicants’ reactions to novel tests.

To better understand these issues, I present an initial framework of the various factors thought to be important (Figure 1). The framework provided is not meant to be a causal model; instead, it is intended to be a guide to help understand applicant reactions to novel testing. The framework presented here is a blend of ideas taken from a number of literatures relevant to the current research, such as the applicant reactions, computerized testing, organizational justice, and educational literatures. The current framework includes antecedents of reactions, types of reactions, and consequences of reactions in the context of novel technologies.

Previous research on novel technologies has examined individual difference measures thought to be important in explaining people’s perceptions. Differences in computer experience (e.g., Comber, Colley, Hargreaves, & Dorn, 1997; Igbaria & Chakrabarti, 1990; Kerber, 1983), computer self-efficacy (e.g., Compeau, 1992; Hill, Smith, & Mann, 1987; Webster & Martocchio, 1995), test-taking experience (Kravitz, Stinson, & Chavez, 1994; Ryan, Greguras, & Ployhart, 1996), and test-taking anxiety (Barbera et al., 1995; Schmit & Ryan, 1997) have been found to affect reactions to novel testing. The current study will highlight those constructs important in explaining differences that occur.

[Figure 1. Applicant Reactions Framework]

Research has found that applicants react to a number of different aspects of the selection system. The applicant reactions literature has found that applicants perceive a selection system in terms of the perceived fairness of the process as well as the perceived fairness of the hiring decision (e.g., Gilliland, 1994; Ployhart & Ryan, 1997; Ployhart & Ryan, 1998). In addition, factors of a selection system may affect applicants’ perceptions of the consistency of the process, preference for the process, perceptions of the perceived job-relatedness of the test, self-assessed performance, and so on (Gilliland, 1993). The current study will measure a number of these reactions thought to be important.

The importance of applicant reactions stems from the consequences that they pose to researchers and practitioners. As noted earlier, applicant reactions have been shown to affect a number of important variables such as productivity (e.g., Gilliland, 1994), organizational attractiveness and recommendation intentions (e.g., Smither et al., 1993), and job acceptance intentions (e.g., Linden & Parsons, 1986; Macan, Avedon, Paese, & Smith, 1994).
The current study will focus on a subset of these variables in order to better understand some of the potential consequences that may arise from reactions to novel testing. The next section will more fully examine the framework and the evidence that currently exists to justify the inclusion of the variables and manipulations used in the study. Finally, in order to more fully examine applicant reactions to novel testing, I reviewed the literature for variables that would affect reactions. In conducting the literature review, there were a number of variables that would be likely to affect reactions to novel tests. For the current paper, I focused on those variables that were thought to have the most impact on reactions: mode of administration, perceived technology level of the job, and the selection outcome. 14 Individual Differences In this section, I will present individual difference measures that are important to understanding applicant reactions across a wide variety of tests. For all test-takers, I will be examining the role of test-taking experience, test-taking self-efficacy, and test-taking anxiety on their reactions. In addition, I will also examine the role of computer experience, computer self-efficacy, and computer anxiety on participants’ reactions for those taking the computerized version of the test. Test-taking experience is thought to be important in determining test-takers’ reactions to a selection test. The more experience a person has with taking tests, the more likely they are to see them as part of the process and will react more positively to them. Kravitz et a1. (1994) found that for many types of selection tests (e. g., interview, cognitive ability test, personality test, work sample test) previous experience with them was positively related to perceptions of the test. Ryan et a1. (1996) also found that experience with a specific type of physical ability test (PAT) was positively related to the perceived job relevance of the test, fairness perceptions of the test, and general perceptions of PAT’s job relevance and fairness. Hypothesis 1: Test-taking experience will be positively related to post-test perceptions (i.e., liking, process fairness, test ease, consistency, perceived job- relatedness, and self-assessed performance) of the selection test. As participants may be comfortable using computers, but not taking tests, I included a measure of their general test-taking self-efficacy. Test-taking self-efficacy is a Specific instance of the more general construct of self-efficacy (Bandura, 1977). Perceived self-efficacy can be defined as “beliefs in one’s capabilities to organize and 15 execute the courses of action required to produce given attainments” (Bandura, 1997). Test self-efficacy can then be defined as the belief in one’s capability to perform effectively when confronted by a test-taking situation. Test-takers low in test self-efficacy are more likely to have negative emotions surrounding the test (Gist, Schwoeder, & Rosen, 1989) which are likely to transfer to their perceptions after taking the test. Test-takers high in test self-efficacy approach the testing situation more positively so will be more likely to have more positive perceptions after taking the test. Ryan, Ployhart, Greguras, and Schmit (1998) found that test self- efficacy was related to a number of important variables in an applicant population. In particular, they found that test self-efficacy was positively related to motivation and test performance and negatively related to test anxiety. 
Furthermore, researchers (e. g., Bauer et al., 1998; Gilliland, 1994) have found that self-efficacy was positively related to fairness perceptions of the selection process. Hypothesis 2: Test-taking self-efficacy will be positively related to post-test perceptions (i.e., liking, process fairness, test ease, consistency, perceived job- relatedness, and self-assessed performance) of the selection test. Test-taking anxiety is an example of a situation-specific anxiety trait (Spielberger & Vagg, 1995). Hodapp, Glanzrnann, and Laux (1995) discuss that test anxious people are likely to “respond with excessive worry, self-deprecatory thoughts, and intense affect and physiological arousal when exposed to examination situations.” Thus, it should not be a surprise that test-taking anxiety has been found to relate negatively to test performance (e.g., Arvey et al., 1990; Hembree, 1988) and perceptions of the test 16 (Barbera et al., 1995; Schmit & Ryan, 1997). Schmit and Ryan (1997) also found that test-taking anxiety was positively related to withdrawal from a selection process. Hypothesis 3: Test-taking anxiety will be negatively related to post-test perceptions (i.e., liking, process fairness, test ease, consistency, perceived job- relatedness, and self-assessed performance) of the selection test. As computers are still not ubiquitous in society, researchers are concerned that previous computer experience may affect people’s attitudes towards computers and their applications (e.g., Hill et al., 1987; Igbaria & Chakrabarti, 1990; Pope-Davis & Twing, 1991; Zoltan & Chapanis, 1982). Similar to test-taking experience, applicants' experiences with computers are likely to influence their reactions to the test. The more experience people have with a computer, the more likely they are to see them as just another aspect of the selection process and will react more positively to them. Applicants with little computer experience may see them as an unfamiliar, unfair, part of the process that they should not have to contend with. A number of studies examining attitudes towards computers have found that past computer experience is related to general acceptance of computers (e.g., Burke et al., 1987), willingness to use computers (Zolton & Chapanis, 1982), anxiety (Igbaria & Chakrabarti, 1990), liking (Comber et al., 1998), task performance (Czaja & Sharit, 1993), confidence (Loyd & Gressard, 1984; Pope- Davis & Twing 1991; Torkzadeh & Koufteros, 1993), and general attitudes towards computers (Howard & Smith, 1986; Igbaria & Chakrabarti, 1990; Popovich, Hyde, Zakrajsek, & Blurner, 1987). While there is some evidence that experience may lead to negativity in users (e.g., Rosen et al., 1993), most research finds that computer experience is positively related to attitudes towards computers and their uses. 17 Hypothesis 4: Computer experience will be positively related to post-test perceptions (i.e., liking, process fairness, test case, consistency, perceived job- relatedness, and self-assessed performance) of the selection test for those taking the computerized version. Related to the amount of experience people have with computers is their perception of how able they are to use a computer. Computer self-efficacy is again a variant of the self-efficacy construct (Bandura, 197 7). Similar to test self-efficacy, applicants low in computer self-efficacy are more likely to have negative emotions surrounding the test, which are likely to transfer to their perceptions after taking the test. 
Computer self-efficacy is thought to be an important variable in understanding people’s decision to use computers (e.g., Hill et al., 1987), reactions to computers (e.g., Webster & Martocchio, 1995), their attitudes (e. g., Compeau, 1992; Compeau & Higgins, 1995), and their performance when using computers (e.g., Gist et al., 1989; Karsten & Roth, 1998). This research supports the conclusion that computer self-efficacy is positively related to attitudes, intentions, and behaviors with regard to computers and their applications. Hypothesis 5: Computer self-efficacy will be positively related to post-test perceptions (i.e., liking, process fairness, test case, consistency, perceived job- relatedness, and self-assessed performance) of the selection test for those taking the computerized version. Computer anxiety is another construct that has received a great deal of attention in the literature. As with self-efficacy, Carnbre and Cook (1985) discuss how computer anxiety is Similar to test anxiety as a manifestation of the general anxiety construct. 18 Similar to test anxiety, computer anxiety is thought to be an affective response where people are intimidated by the computer, worry about damaging the computer, and worry about looking stupid. While test-taking anxiety and computer anxiety are conceptually similar, Heinssen, Glass, and Knight (1987) found that while the Computer Anxiety Rating Scale was correlated with trait anxiety, it was not correlated with Sarason’s Test Anxiety Scale (1978). Sherrnis and Lombard (1997) also found no relationship between measures of computer and test anxiety. Thus, computer anxiety seems to be capturing unique variance in people’s reactions to computers beyond that of test anxiety. Research has found evidence that computer anxiety is negatively related to such variables as attitudes towards computers (e.g., Howard, 1986; Igbaria & Chakrabarti, 1990; Popovich et al., 1987), amount of interaction with computers (e.g., Mahar, Henderson, & Deane, 1997; Rosen et al., 1993), and performance (e.g., Bloom & Hautaluoma, 1990; Elder, Gardner, & Ruth, 1987). Hypothesis 6: Computer anxiety will be negatively related to post-test perceptions (i.e., liking, process fairness, test ease, consistency, perceived job- relatedness, and self-assessed performance) of the selection test for those taking the computerized version. When novel technologies are first introduced to people, it is likely that there will be some resistance to using them. When computers were far less prevalent in society compared to the present time, Zoltan and Chapanis (1982) noted that many people noticed that the public did not accept computers. Other researchers have discussed the resistance that occurs in the workplace (e.g., Faerstein, 1986; Henderson et al., 1995). Faerstein (1986) noted that workers’ apprehension for using novel technologies may be 19 due to their need for control or autonomy, resistance to change, and fear of failure or the unknown. Henderson et a1. (1995) also note that these and other factors may be the source of failure that commonly occurs for organizations trying to implement management information systems. Thus, peOple who are less resistant to new experiences and/or change may be more likely to interact with novel technologies and rate these experiences as more positive than people who are resistant to new experiences and/or change. 
Hypothesis 7: Openness to experience will be positively related to post-test perceptions (i.e., liking, process fairness, test ease, consistency, perceived job- relatedness, and self-assessed performance) of the selection test for those taking the computerized version. Mode of Administration Landis et a1. (1998) examined applicants’ reactions to cognitive ability testing using a paper-and-pencil, computerized, and computer adaptive manipulation. They hypothesized computer adaptive testing would produce more anxiety in examinees, lower fairness perceptions, and less satisfaction compared to the more traditional forms of testing (i.e., paper-and-pencil and computer administered testing). They found some support of differences in anxiety and fairness between the computer administered and computer adaptive conditions compared to the paper-and-pencil condition. Based on these findings, I may similarly expect some differences in reactions to traditional versus novel testing conditions. Daum (1994) also examined applicant reactions to different versions of a cognitive ability test. This research compared fairness perceptions of a paper-and-pencil 20 versus computer adaptive (CAT) version of a cognitive ability test. Results showed that test-takers rated the CAT as less consistent but equally fair as the paper-and-pencil version. Daum (1994) attributed participants’ perceptions of fairness to the explanation given for using the computer adaptive test. As this was only a post-hoe explanation of the results and not an experimental manipulation, it is important to consider further research findings. Previous research shown not only positive overall reactions from test-takers towards computerized testing (Angle et al., 1977; Barbera et al., 1995; Frank, 1993; Lucas, 1977; Schmidt et a1. 1978; Slack & Van Cura, 1968) but also that test-takers prefer computerized versions of tests over their paper-and-pencil versions (Arvey et al., 1990; Ogilvie et al., 1999). These findings may be attributed to the novelty of the computerized testing. Ogilvie et a1. (1999) found that participants reacted positively to the computer due to being perceived as efficient and an improvement over the paper-and- pencil tests. Arvey et a1. (1990) also found the test-takers found the computerized testing was "intrinsically more interesting, more challenging, less boring. . ." than the paper-and- pencil version. Based on these findings, I hypothesize that participants taking the computer version will like the test more than participants taking the paper-and-pencil version. Hypothesis 8a: Participants taking the computerized test will Show higher liking than those taking the paper-and-pencil version. In addition to overall reactions, researchers have found that applicants either rate different versions of a test as equally fair (Daum, 1994) or rate the computerized version as more fair (e.g., Schmidt et al., 1978; Schmitt, Gilliland, Landis, & Devine, 1993) than 21 their traditional paper-and-pencil versions. This may be due to peoples' perceptions that computers are more objective, accurate, and less prone to biases than traditional forms of selection testing. As these characteristics have been discussed in regard to determining fairness perceptions (e.g., Gilliland, 1993; Leventhal, 1980), it is hypothesized that these factors will lead to more positive reactions from participants taking the computerized version versus those taking the paper-and-pencil version. 
Hypothesis 8b: Participants taking the computerized version will rate the test as fairer than those taking the paper-and-pencil version. In assessing participants’ reactions, it is hypothesized that the general preference/liking and perceptions of fairness towards novel testing will be related to how difficult they perceived the test to be. Kluger and Rothstein (1993) found support for a positive relationship between difficulty and negativity towards the task. As past research has shown that participants generally prefer computerized tests to their paper-and-pencil versions, I hypothesize that these reactions may have been due in part to test-takers’ perceptions that the computerized tests were easier than the paper and pencil tests. Hypothesis 8c: Participants taking the computerized test will feel the test was easier than those taking the paper-and-pencil version. Another perception of importance is participants’ self-assessed performance. Researchers (e.g., Chan et al., 1998; Kluger & Rothstein, 1993; Macan et al., 1994; Ployhart & Ryan, 1997; Rynes & Connerley, 1993) have found evidence that self- assessed performance is important for understanding reactions to a selection system. In conducting a literature review, I could not find research that exarrrined differences in self- assessed performance for novel versus traditional testing. Above, I hypothesized that 22 participants taking the computerized test will feel the test was easier than those taking the paper-and-pencil test. Thus, if participants think the computerized test was easy, they should be more likely to rate that they performed well on the test. Hypothesis 8d: Participants taking the computerized test will rate that they performed better compared to participants taking the paper-and-pencil test. One of the advantages of novel testing mentioned above is the increased standardization in test administration (e.g., Kleinmuntz & McLean, 1968; McHenry & Schmitt, 1994). Gilliland (1993) discusses the importance of consistency of administration and its effect on participants’ fairness perceptions. Gilliland posits that certain types of tests may raise concerns of consistency and that these concerns may be more salient for some types of tests (e.g., interviews) more than others (e.g., paper-and- pencil tests). Daum (1994) found support for this idea as participants rated a computer adaptive test as more consistent than a paper and pencil test. As one of the key advantages of computerized testing is higher degree of consistency compared to human administered tests (e.g., Bartram & Bayliss, 1984; Burke & Normand, 1987; Denner, 1977; Thompson & Wilson, 1982), it is likely that participants taking the computerized version will rate the test as more consistent than participants taking the paper-and-pencil version. Hypothesis 8e: Participants taking the computerized test will feel the test was more consistently administered than those taking the paper-and-pencil version. As discussed earlier, one of the concerns for researchers was the impersonal nature of computers (e.g., Erdman et al., 1985; Space, 1981). Research examining computerized interviewing (Martin & Nagao, 1989; Greist et al., 1973) has provided 23 some evidence that test-takers react negatively to this aspect of novel testing. I hypothesize that participants taking the computerized test will react more negatively to the interpersonal aspects of the computerized test than participants taking the paper and pencil test. 
Hypothesis 8f: Participants taking the computerized test will rate that they were treated more impersonally than those taking the paper-and-pencil version. For perceived job-relatedness, I do not predict a direct effect of mode of administration. Instead, I predict that the relationship between mode of administration and perceived job-relatedness will be moderated by perceived technology level of the job. This moderated hypothesis will be discussed in the subsequent section on perceived technology level of the job. Perceived Technological Level of the Job (Job Type) Hesketh and Neal (1999) discuss the impact that the current technological revolution is having on the workplace by changing the amount of involvement workers have with technology. While many more workers are interacting with technology, there are still differences between jobs in the degree of involvement they require. For example, a sales clerk in a grocery store probably does not encounter the same degree of technology that a computer engineer does. While novel selection tests rrright be cheaper and easier to administer, the applicant may not see the relevance of the technology to the type of job to which they are applying. It is important to note that not every novel selection test is designed to be more job-related. In many cases, the use of novel testing is to increase the testing standardization, lessen the scoring time, and decrease the financial cost of testing. 24 Kravitz et a1. (1994) found some support that applicants may view selection tests differentially depending on the type of job for which they are used. They found that personality tests, honesty tests, and criminal records were viewed more positively when used for managers as opposed to production workers. They also found that physical ability tests were viewed more positively for production workers than for managerial positions. Murphy and colleagues (Murphy, Thornton, & Prue, 1991; Murphy, Thornton, & Reynolds, 1990) have also found evidence that people’s perceptions of selection tests vary by job type. In both studies, they found that college students’ acceptance of employee drug testing varies by type of job. In addition, their research shows that these differences are due to the amount of perceived danger from impaired job performance. Thus, drug testing was rated as more acceptable for an airline pilot than compared to a janitor. Further support for the importance of job type comes from research comparing acceptance of computerized interviews. Martin and Nagao (1989) compared paper-and- pencil, computerized, face-to-face with a cold interviewer, and face-to-face with a warm interviewer versions of an interview simulation for a clerk or management trainee position. They found attitudes towards the nonsocial interviews (i.e., paper-and-pencil and computerized interviews) varied by the type of job to which participants were applying. Participants applying to a management position resented the nonsocial interviews significantly more than participants applying for a clerk position. 25 Lucas (1977) also hypothesized that people with various job types would differ in their attitudes towards computer interviewing. Lucas found that non-manual workers had less favorable attitudes towards computer interviewing than manual workers. Based on previous work, it is expected that perceived technology level of the job will moderate the relation of mode of administration to participants’ reactions to the test. 
Specifically, the strength of the relationship between mode of administration and perceptions of the selection test will depend on the perceived technology level of the job. This relationship will be apparent for perceptions of perceived job-relatedness (face validity and perceived predictive validity). In designing a selection test, it is important that the test be related to the job for which it is intended. Gilliland (1993) posits that a selection procedure’s job-relatedness may be one of the most important determinants of fairness perceptions. One of the advantages of novel tests is their ability to present applicants with a more face valid test than using a traditional test. Researchers (e.g., McHenry & Schmitt, 1994; Shotland, Alliger, & Sales, 1998; Smither et al., 1993) have discussed the importance of the face validity of novel tests in increasing applicants’ perceptions of test fairness. If a novel test is able to simulate the actual job, it should provide applicants with a more realistic job preview than a test with less fidelity (e.g., McHenry & Schmitt, 1994; Shofland & Alliger, 1999). Researchers have found some support that video-based tests (Chan & Schmitt, 1997) and multimedia tests (Shotland & Alliger, 1999) are more face valid than their paper-and-pencil counterparts. However, novel tests should only have more face validity if they are used for jobs that require a similar interaction with the technology. Using a novel test for a job that does not use that type of technology would make it less 26 face valid than a traditional paper and pencil test. Thus, to the degree that the selection test is perceived as matching the requirements of the job, the test should be perceived as more job-related. Hypothesis 9: Perceived technology level of the job will moderate the relationship between mode of administration and perceived job-relatedness (face validity and perceived predictive validity). Participants taking the computerized version will perceive the test as having greater perceived job-relatedness than those who are taking the paper-and-pencil version when they are applying for a highly technical job. Participants taking the paper-and-pencil version will perceive the test as having greater perceived job-relatedness than those taking the computerized version when they are applying for a less technical job. Decision Outcome Adams’ (1965) Equity Theory was one of the first attempts at examining reactions to decisions made in an organizational setting. Following Adams’ (1965) initial work on Equity theory, much research was devoted to testing his ideas and the various factors that may affect these reactions (Greenberg, 1990). This stream of research became known as distributive justice and refers to “the perceived fairness of the outcomes or resources received” (Cropanzano & Ambrose, 1996). Research has found applicants who are hired have more positive perceptions of the selection system compared to those applicants who are not hired (e.g., Bauer et al., 1998; Cunningham, 1989; Farmer et al., 1998; Gilliland, 1994; Kluger & Rothstein, 1993; Macan et al., 1994; Ployhart & Ryan, 1997; Smither et al., 1993; Thorsteinson & Ryan, 1997). These studies generally provide evidence for the effect of outcome on 27 process fairness (e.g., Farmer et al., 1998), outcome fairness (e. g., Thorsteinson & Ryan, 1997) or both process and outcome fairness (e.g., Gilliland, 1994; Ployhart and Ryan, 1997; Smither et al., 1993). 
In addition, other researchers have found support for the effect of outcome on other reactions (Bauer et al., 1998; Kluger & Rothstein, 1993). Kluger and Rothstein (1993) measured a number of reactions and found that selection outcome affected test-takers' reactions of coping, test fairness, pleasure, and calmness. Bauer et al. (1998) found that selection outcome affected applicants' general attitude toward employment testing. Thus, while there may be stronger support for the effect of selection outcome on process and outcome fairness, there is some indication that rejected participants may show a number of negative reactions.
Hypothesis 10: Participants who are not selected for the job will have more negative post-feedback perceptions (i.e., liking, process fairness, outcome fairness, test ease, consistency, interpersonal treatment, perceived job-relatedness) compared to participants who are selected for the job.
Despite strong evidence that selection outcome has an effect on perceptions, there is no research that examines whether this effect differs for novel tests compared to traditional tests. Consistent with earlier hypotheses, I predict that the effect of the selection outcome on performance attributions will vary with the mode of administration. Ployhart and Ryan (1997) provide some ideas from attribution theory that help to explain how these differences in applicant reactions may occur. They posit that applicants going through a selection process might attribute success or failure to sources internal or external to themselves. For example, applicants selected for a job may be more likely to attribute their success to their own ability and not to factors of the selection process such as test ease, interviewer bias, or preferential hiring. Applicants not selected may be more likely to attribute their failure to these outside sources and not to internal factors so that they may protect their self-perceptions. Ployhart and Ryan (1997) found that selected applicants were more likely to attribute their success to internal, stable, and controllable factors than were rejected applicants. Chan and colleagues (Chan, 1997; Chan et al., 1997) found similar evidence that links reactions to attribution theory. They found that test-takers make different attributions based on test performance, in that poor performing test-takers are more likely to see the test as not job-related or predictive of job performance than are high performing test-takers.
In the current study, the types of external attributions that rejected applicants make may be important to understanding reactions to novel testing. The addition of technology into a selection setting may provide applicants with a salient factor to which they may attribute their failure in the selection process. For example, it may be easier for applicants who are not selected for the job to attribute the outcome to the technology used. As selected applicants typically attribute their success to internal, stable, and controllable factors, the type of selection test should not impact their attributions.
Hypothesis 11: Mode of administration will moderate the relationship between the selection outcome and post-feedback performance attributions of the selection test. For those rejected, participants taking the computerized version will be more likely to make external, uncontrollable, and unstable performance attributions compared to participants taking the paper-and-pencil version.
For those selected, participants in the computerized and paper-and-pencil versions should not differ in their performance attributions.
Behavioral Intentions
An important aspect of applicant reactions is their effect on applicants' behaviors towards the organization. Researchers (e.g., Bauer et al., 1998; Dailey & Kirk, 1992; Farmer et al., 1998; Gilliland, 1993; Gilliland, 1994; Macan et al., 1994; Ryan et al., 1996; Schmit & Ryan, 1997) have found applicant reactions relate to various outcomes of importance to applicants and the organization (e.g., job acceptance, self-efficacy, job satisfaction, recommendation intentions). Murphy (1986) points out that the utility of a selection system depends on its ability to hire the most qualified applicants. Applicants who react negatively to a selection system may be more likely to reject a job if it is offered or withdraw from the selection process. In addition, researchers (e.g., Gilliland, 1994; Rynes, 1993; Rynes & Barber, 1990) also highlight that applicant reactions may negatively impact a company's ability to recruit qualified applicants through past candidates' recommendations. Research has found that applicant reactions are positively related to job acceptance intentions (e.g., Korsgaard, Sapienza, Turnley, & Diddams, 1996; Macan et al., 1994; Ployhart & Ryan, 1997), organizational attractiveness (Smither et al., 1993), recommendation intentions (Bauer et al., 1998; Gilliland, 1994), and company image (e.g., Kluger & Rothstein, 1993).
Hypothesis 12a: Post-feedback perceptions of the selection process (i.e., liking, process fairness, outcome fairness, test ease, consistency, interpersonal treatment, perceived job-relatedness) will be positively related to job acceptance intentions.
Hypothesis 12b: Post-feedback perceptions of the selection process (i.e., liking, process fairness, outcome fairness, test ease, consistency, interpersonal treatment, perceived job-relatedness) will be positively related to recommendation intentions.
In addition to affecting the utility of a selection system, researchers (e.g., Gilliland & Steiner, 1998; Rynes & Barber, 1990) have suggested reactions may have an effect on applicants' intentions to use the company's services or products. Evidence for this relationship comes from Macan et al. (1994), who found that applicant reactions were positively related to purchase intentions.
Hypothesis 12c: Post-feedback perceptions of the selection process (i.e., liking, process fairness, outcome fairness, test ease, consistency, interpersonal treatment, perceived job-relatedness) will be positively related to purchase intentions.
METHOD
Participants
212 students from a large Midwestern university participated in the study. Students were recruited from an introductory psychology class and received class credit for their participation. The average age of the sample was 20 years. The sample was 76% female, 73% White, 10% African American, and 9% Asian. Of the 212 participants, 10 were dropped from the analyses. Three participants were dropped because they did not show up for part two of the experiment. In addition, four participants were dropped for not following instructions, one for receiving the wrong instructions, one for displaying a clear lack of effort in the experiment, and one for conveying to the test administrator that he learned about the experimental task from a previous participant. Thus, 202 participants were retained for the analyses.
Design
The design of the current study was a 2 (mode of presentation: paper-and-pencil - computerized) X 2 (perceived technical level of the job: high technical job - low technical job) X 2 (selection decision: rejected or selected) between-subjects design. Due to the resources needed for the mode of presentation manipulation, sessions required the researcher to designate a condition before participants were able to volunteer for the study. In addition, each session was also randomly assigned to the perceived technical level of the job condition. The selection decision was based on participants' performance on the selection test. To make the hiring decisions, a cut score was determined from pilot testing as well as from norms gathered during the test's original development.
Pilot Studies
To ensure that test content and instructions were perceived similarly, a pilot test was conducted in which 12 participants were given both versions of the test. Participants were randomly assigned to take either the paper-and-pencil or computerized test first in order to ensure no order effects. Participants were told they were participating in a study designed to examine a selection test for an organization (Appendix A). Next, participants filled out a consent form (Appendix B). After taking both tests, the administrator asked participants to provide feedback on the content, instructions, and various aspects of the tests that they found to be non-equivalent so adjustments could be made to the tests. Participants were then given a debriefing as to the purpose of the study (Appendix C). After each pilot session, participant suggestions were examined and appropriate changes were made to the content, instructions, and presentation of the stimulus materials in order to make the computer and paper-and-pencil versions as equivalent as possible.
A second pilot study was conducted in which 50 participants were asked to rate 20 jobs as to the perceived level of technology used in the job (Appendix D). The jobs were selected from a list of positions for which the test is used to select applicants. Results of the survey are shown in Table 1. Based on the results, I selected two jobs that would require similar skills and represented the high and low end of the scale (i.e., business analyst and customer service representative). In addition, the two jobs were rated significantly different from each other in technology level (t(1,49) = 51.68, p < .05). I then searched a variety of online job search companies (e.g., hotjobs.com, misconsult.com) for job descriptions of the selected jobs. These descriptions were used to describe the jobs for which participants were taking the test.

Table 1
Technology Ratings Across Pilot Job Titles.

Job Title                                       M
Programmer                                      4.39
Telecommunications Technician                   4.16
Electrician                                     3.95
Plant/Building Mechanic                         3.74
Business Analyst*                               3.67
Instructional Designer/Training Consultant      3.63
Accountant                                      3.55
Financial Analyst                               3.55
Insurance Claims Processor                      3.51
Auditor                                         3.49
Insurance Fraud Investigator                    3.44
Administrative Assistant                        3.37
Insurance Account Representative                3.34
Executive Secretary                             3.33
Human Resources Representative                  3.30
Legal Researcher/Paralegal                      3.29
Records Clerk                                   3.28
Sales Support Specialist                        3.22
Customer Service Representative*                3.02
Receptionist                                    2.88

* Indicates jobs chosen for the job type manipulation.
Note. N=51. (The standard deviation column, with values ranging from .52 to .75, could not be aligned to individual jobs in the scanned source and is omitted.)
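As an illustration of the job comparison described above, the following is a minimal sketch of a paired t test on the pilot ratings, assuming every rater rated both jobs; the file and column names are hypothetical and are not taken from the study materials.

    import pandas as pd
    from scipy import stats

    # Hypothetical file: one row per rater, one column per rated job
    ratings = pd.read_csv("pilot2_job_ratings.csv")

    t, p = stats.ttest_rel(ratings["business_analyst"],
                           ratings["customer_service_rep"])
    print(f"t({len(ratings) - 1}) = {t:.2f}, p = {p:.4f}")
    print(ratings[["business_analyst", "customer_service_rep"]].mean())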
Procedure
As noted above, I designated the mode of administration condition for each session before participants signed up and the perceived technology level of the job condition for each session before participants arrived. Participants received either the computer or paper-and-pencil version of the test. Four versions of the test administrator instructions were created to correspond to each of the four conditions created by the mode of administration and perceived technology level of the job manipulations (Appendix E). The administrator first gave a brief introduction to the study. Next, participants filled out an informed consent, which further explained the incentives (Appendix F).
Participants were told that cash awards of $15 were available for those participants who would have been selected for the position described. Incentives were used so participants would have an increased motivation to perform well on the selection test, thus simulating realistic applicant conditions. Next, participants were given a description of the specifics of the study and the manipulation of the perceived technical level of the job. Participants were told they were applying for a high technical or low technical job. Participants were then given a description of the in-basket examination and filled out a questionnaire asking demographic questions and the first set of measures (Appendix G). Participants then took either the computerized or paper-and-pencil in-basket examination. After completing the test, participants were given a second questionnaire (Appendix H). After completing the questionnaire, participants were told that their tests would be scored and a selection decision would be made. They were told they would learn of this decision at the next session one week later. During the time between session one and the second session, participants' in-basket results were scored and participants were classified as being hired or not hired based on those scores. At the second session, participants were reminded as to the purpose of the study, the type of job for which they were taking the test, the type of test they took, and the basis of the selection decision. Participants were then given their selection decision and final questionnaire (Appendix I for selected participants and Appendix J for rejected participants). Participants were then given a debriefing about the purpose of the study (Appendix K).
Measures
A complete list of measures used in the study is provided in Appendix L. Those scales adopted from previous research have scale reliabilities provided. Due to the nature of the test used, other scales have either been modified to fit the current context or have been created by the researcher. Unless noted otherwise, all scales used five-point Likert-type scales (1=Strongly Disagree, 5=Strongly Agree).
Manipulation Check. In order to check the effect of manipulating the perceived technology level of the job, I used the perceived level of technology scale that was used in the second pilot study to select the job type. The measure had an alpha of .86. This measure was given after test instructions, but before participants took the selection test.
Motivation Check. To assess participants' motivation to do well on the selection test, participants were given 10 items adopted from the Motivation sub-scale of the Test Attitude Survey (Arvey et al., 1990). For this sub-scale, Arvey et al. (1990) found an alpha of .85.
This measure was given after test instructions, but before participants took the selection test.
Individual Difference Measures. The following set of measures was given after test instructions and an explanation of the type of test participants would be taking, but before participants took the selection test. Test-taking experience was measured using a four-item scale asking about experiences with in-basket tests (e.g., "I have taken a test similar to this test"). Test-taking self-efficacy was measured using five items adopted from Pintrich and DeGroot (1990). They found an alpha of .90 for this measure. Test-taking anxiety was measured using Sarason and Ganzer's (1962) Test Anxiety Scale (TAS), which is composed of 16 true-false items. Reliability estimates could not be found for this measure, but Sarason, Pederson, and Nyman (1968) reported a .93 correlation between the 16-item and the updated 37-item TAS measures. Based on this high correlation, I can infer that the reliability of each scale was above .90 (DeShon, personal communication, September, 1999).
Computer-Related Individual Difference Measures. The following variables were measured before participants took the selection test. Computer experience was measured using 12 items from Potosky and Bobko (1998). Factor analyses showed two six-item factors, which represented technical competence and general competence aspects of computer experience. Potosky and Bobko (1998) obtained an alpha of .93 for this measure. As results of a factor analysis supported these two components of computer experience, two six-item measures of technical and basic experience were created. Computer self-efficacy was measured using an eight-item scale based on a measure from Levine and Donitsa-Schmidt (1997). Based on a factor analysis, Levine and Donitsa-Schmidt created a ten-item scale, which had an alpha of .90. Despite their findings, I dropped two items from the scale. I dropped one item as it seemed more geared towards the high school population on which it was created ("Computer studies is one of my best subjects") and another item because its factor loadings in the current study indicated that it was tapping a construct other than computer self-efficacy. Computer anxiety was measured using a ten-item scale from Igbaria and Chakrabarti (1990). They obtained an alpha of .91 for this measure. Openness to experience was measured using 12 items from the Openness to Experience sub-scale of the NEO-PI (Costa & McCrae, 1989). Costa and McCrae (1992) found an alpha of .90 for this measure.
Post-Test, Pre-Feedback and Post-Test, Post-Feedback Measures. The following set of questions was asked in both the post-test, pre-feedback and post-test, post-feedback questionnaires. Process fairness was measured using four items adopted from Gilliland (1994). Gilliland (1994) obtained an alpha of .85 for this measure. Perceived job relatedness was measured using 10 items adopted from Smither et al. (1993). Smither et al. (1993) discuss perceived job relatedness as comprising two factors, face validity and perceived predictive validity. Smither et al. (1993) obtained alphas above .80 for these scales. Consistency was measured using a four-item measure used in Ployhart and Ryan (1997), which was created from two items from Gilliland and Honig (1994) and two items from Ployhart and Ryan (1997). Ployhart and Ryan (1997) found an alpha above .80 for this measure.
Interpersonal treatment was measured using six items created by the researcher in order to understand how applicants felt they were treated during the experiment. While Gilliland and Honig (1994) created a reliable measure of a similar nature, their measure did not capture the same aspects of the interaction during the experiment in which I was interested. The current measure was created to measure the potential feelings of being treated inhumanely by a computer, as has been postulated by some researchers (e.g., Erdman et al., 1985). Results of a factor analysis indicated that these six items were tapping two aspects of interpersonal treatment. Three items tapped participants' interaction with the test (e.g., "I felt the test was impersonal") and three items tapped aspects of participants' interaction with the test administrator (e.g., "The test administrator cared how I felt during the testing"). Thus, two three-item measures were created to reflect these components of interpersonal treatment. The higher the scores on these measures, the more negatively participants viewed their treatment. Test ease was measured using three items created by the researcher and two items adopted from Arvey et al. (1990) to examine how difficult examinees perceived the test to be (e.g., "I thought this test was easy"). The modified measure was used due to Arvey et al.'s low reliability (.56) for their four-item measure. Liking was measured using four items developed by the researcher in order to measure applicants' general perceptions of the test (e.g., "I liked taking this type of test").
Post-Test, Pre-Feedback Only Measure. The next set of questions was asked only on the post-test, pre-feedback questionnaire. Self-assessed performance was measured using five items from Brutus and Ryan (1996). Brutus and Ryan found an alpha above .80 for this measure.
Post-Test, Post-Feedback Only Measures. The last set of questions was asked only on the post-test, post-feedback questionnaire. Performance attributions were measured using the Causal Dimension Scale (CDS) (Russell, 1982) and six items developed by the researcher. The CDS is based on Weiner's (1979) three dimensions of causality: locus, stability, and controllability. Participants are first asked to write down the primary (perceived) reason that they were selected or rejected. Next, they are asked to rate this reason on the three dimensions of locus, stability, and controllability. Each dimension consists of three items rated on a nine-point Likert-type scale. The CDS thus contains three three-item measures reflecting these dimensions. In addition, six items were developed by the researcher to measure additional aspects of the locus dimension, which might capture the different types of attributions that selected and rejected participants make. That is, two three-item measures were written to reflect the degree to which participants attributed their performance to external factors and internal factors. Outcome fairness was measured using four items from Gilliland (1994). For this measure, Gilliland (1994) obtained an alpha of .86.
Behavioral Intentions. The following questions were also provided on the final questionnaire given to applicants. Job acceptance intentions was measured using two items from Ployhart and Ryan (1997) and two items created by the researcher. Ployhart and Ryan found an alpha of .98 for their two-item measure. Recommendation intentions was measured using four items from Gilliland (1994).
For this measure, Gilliland (1994) found an alpha of .83. Purchase intentions was measured using three items developed by the researcher to understand if participants' attitudes towards the organization were affected (e.g., "I would not use this organization's products or services").
Priority Management Exercise. The Priority Management Exercise (PME) is an in-basket exercise designed by AON Consulting to measure competencies in multi-tasking, time management, adaptability, ability to deal with complexity, and prioritizing tasks. Due to the proprietary nature of the PME, it is not included as an Appendix. The in-basket requires participants to use policy guidelines in order to route the numerous requests in their in-box. These requests vary in how clearly they match the stated policies for handling different types of requests. For each request there are a number of cues that must be completed to correctly process the message (e.g., time frame, type of message, routing order, type of routing). During the exercise, participants must also deal with new requests that arrive during the exercise as well as new messages that change the priority of the requests. Performance on the test consists of the number of cues (e.g., time frame, type of message, routing order, type of routing) that participants completed correctly for the requests. The PME first gives participants a 20-minute introduction to the task and then a 10-minute period to review the policies they will be using. After reviewing the policies, participants spend 30 minutes completing the PME.
RESULTS
Analysis Plan
In order to test all of the relationships hypothesized, I used the following steps. First, I began the analyses by confirming that the manipulation of the perceived technology level of the job worked and then checked participants' motivation. Next, I describe some overall descriptive statistics for the measures and some changes in the measures that were made as a result of these descriptive statistics. I then assess the relationships between the general test-taking measures (i.e., test experience, test anxiety, test self-efficacy, and openness to experience) and post-test reactions for the whole sample using correlations. Using correlations, I also assess the relationships between the computer-related measures (i.e., computer experience-technical, computer experience-basic, and computer anxiety) and post-test reactions for participants in the computer condition. Next, I test for mean differences in post-test perceptions by Mode of Administration using a series of one-way ANOVAs. I then test for the moderated relationship of Mode of Administration and perceived job-relatedness by perceived technology level of the job using moderated regression. To test for the effect of Selection Outcome on post-feedback perceptions, I use the MANOVA procedure. Next, I test for the moderated relationship between the selection outcome and post-feedback performance attributions (locus, stability, controllability, internal attributions, and external attributions) of the selection test by mode of administration using moderated regression. As a follow-up to this analysis, I examined the reasons given by participants for their selection decision to see if any overall differences were apparent. Finally, I examined the relationship between post-test perceptions and intentions using correlations.
In addition to the hypothesized relationships, I also conducted a set of follow-up analyses. The first analysis examined the interaction of process and outcome fairness on intentions by using moderated regression. I also examined the unique effects of the computer-related variables beyond the general test-taking measures (i.e., test experience, test anxiety, and test self-efficacy) in two sets of analyses. The first set examined the incremental validity provided by the computer-related measures in predicting test performance and the selection outcome, and the second set of analyses examined the incremental validity provided by the computer-related measures in predicting post-test reactions. I used moderated regression to test both sets of analyses.
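For readers who want to see the general form of these moderated (hierarchical) regression tests, the following is a minimal sketch, assuming a data set with one row per participant and 0/1-coded condition variables; the data frame, file, and variable names are hypothetical and do not come from the study materials.

    import pandas as pd
    import statsmodels.formula.api as smf
    from scipy import stats

    df = pd.read_csv("study_data.csv")  # hypothetical file, one row per participant

    # Step 1: main effects only; Step 2: add the product (interaction) term
    step1 = smf.ols("face_validity ~ tech_level + mode", data=df).fit()
    step2 = smf.ols("face_validity ~ tech_level * mode", data=df).fit()

    # Test the R-squared increment produced by the single added interaction term
    delta_r2 = step2.rsquared - step1.rsquared
    f_change = (delta_r2 / 1) / ((1 - step2.rsquared) / step2.df_resid)
    p_change = stats.f.sf(f_change, 1, step2.df_resid)
    print(f"dR2 = {delta_r2:.3f}, F(1, {int(step2.df_resid)}) = {f_change:.2f}, p = {p_change:.3f}")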
Manipulation Check
A two-way ANOVA was conducted to test the effect of the job type and mode of administration manipulations on participants' perceptions of the amount of technology involved in the job for which the test was used to select applicants. The results indicated that both the job type manipulation (F(1,198)=7.13) and the mode of administration manipulation (F(1,198)=32.30) influenced participants' perceptions of the amount of technology in the job for which they were taking the test. The means indicated that participants viewed the high-tech job as involving more technology than the low-tech job (3.44 vs. 3.17, respectively). Participants taking the computerized test viewed the job as involving more technology than participants taking the paper-and-pencil test (3.57 vs. 3.00, respectively). While the significance of the job type effect indicates the manipulation was successful, the significance of the mode of administration manipulation was unexpected. The results indicate that telling participants the type of test they will be taking can influence their perceptions of the type of job for which they are taking the test. Regardless of the condition participants were in, they perceived the job as involving at least a moderate amount of technology. This result also may signify an overall perception by participants that most jobs involve some amount of technology despite the differences in the outward description of the technologies involved in these jobs.
Motivation Check
A two-way ANOVA was conducted to test the effect of the job type and mode of administration manipulations on participants' motivation to perform well on the selection test. Results indicated that there was not a difference in motivation across conditions and that participants were all highly motivated (average = 4.17 out of 5) to perform well on the selection test.
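A minimal sketch of the 2 X 2 between-subjects ANOVA used for the manipulation and motivation checks is shown below, assuming the same hypothetical data frame as above; the column names are illustrative only.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.read_csv("study_data.csv")  # hypothetical file

    # Job type and mode of administration as crossed between-subjects factors
    model = smf.ols("perceived_tech ~ C(job_type) * C(mode)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))                             # F tests for each effect
    print(df.groupby(["job_type", "mode"])["perceived_tech"].mean())   # condition means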
Descriptive Statistics
Means, standard deviations, intercorrelations, and reliabilities of the pre-test measures, post-test, pre-feedback measures, and post-feedback measures are in Tables 2, 3, and 4, respectively. Overall, most measures had acceptable reliabilities (> .70). The exceptions to this were the interpersonal treatment-administrator and interpersonal treatment-test measures, which both had reliabilities below .70 on the post-test, pre-feedback (.67 and .62, respectively) and post-feedback (.62 and .69, respectively) questionnaires. The low reliabilities are likely due to the fact that they were created by the researcher and that they contain only three items per measure. As the six items were originally created to measure overall interpersonal treatment, the finding of a two-factor solution limited the chances of reliable measures. The other measure with low reliability was the controllability scale of the Causal Dimension Scale (Russell, 1982) measured post-feedback. While Ployhart and Ryan (1997) found a low reliability for this dimension (.71), the current study found a much lower reliability (.58). A possible reason for this low reliability may be the confusing nature of this measure. Compared to the rest of the tasks in the study, participants had the most questions about this measure and often needed to be corrected as to how to complete it.

Table 2
Descriptive Statistics and Intercorrelations of the Pre-Test Individual Difference Measures.

Measure                        M       SD      1      2      3      4      5      6      7
1. Test-taking experience      2.48    1.00    (.88)
2. Test-taking efficacy        3.24    .66     .27    (.91)
3. Test-taking anxiety         22.39   4.13    -.09   -.19   (.84)
4. Computer exp-technical      2.63    .81     .15    .40    -.12   (.79)
5. Computer exp-basic          3.91    .56     .27    .57    -.11   .51    (.74)
6. Computer anxiety            .00     .97     -.21   -.59   .22    -.52   -.78   (.94)
7. Openness to experience      3.63    .48     .18    .22    -.09   .12    .33    -.38   (.74)

Note. N=202. Bolded correlations are significant at p < .05. Reliabilities are in parentheses on the diagonal. Scale range for test-taking anxiety is 16 (low) to 32 (high). Computer anxiety is a standardized composite of computer anxiety and computer efficacy. All other scales range from 1 to 5.

Table 3
Descriptive Statistics and Intercorrelations of the Post-Test, Pre-Feedback Measures.

Measure               Mean    SD     1      2      3      4      5      6      7      8      9
1. Process fairness   3.64    .83    (.92)
2. Face validity      3.88    .60    .38    (.83)
3. PPV                3.04    .79    .64    .36    (.88)
4. Consistency        4.09    .55    .32    .49    .30    (.74)
5. IT-admin           2.52    .61    -.14   -.15   -.19   -.13   (.67)
6. IT-test            3.22    .68    -.21   -.09   -.07   -.04   .30    (.62)
7. Test ease          2.73    .81    .27    .11    .10    .07    .12    -.06   (.84)
8. Liking             2.96    .86    .41    .30    .36    .12    -.16   -.39   .46    (.91)
9. SA performance     2.94    .64    .38    .23    .24    .14    -.08   -.30   .53    .58    (.77)

Note. N=202. Bolded correlations are significant at p < .05. Reliabilities are in parentheses on the diagonal. All scales range from 1 to 5. PPV is Perceived predictive validity, IT-admin is Interpersonal treatment-administrator, and IT-test is Interpersonal treatment-test.

Table 4
Descriptive Statistics and Intercorrelations of the Post-Feedback Measures. (The values in this table are not legible in the scanned source and are not reproduced here.)
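The scale reliabilities reported throughout this section are Cronbach's alpha coefficients; the short sketch below shows the standard computation, with a hypothetical DataFrame holding one column per scale item and one row per participant.

    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
        items = items.dropna()
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    # e.g., alpha for a four-item process fairness scale (hypothetical column names)
    # alpha = cronbach_alpha(df[["pf1", "pf2", "pf3", "pf4"]])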
In examining the relationships among the variables across Tables 2-4, the variables showed good evidence of discriminant validity, as indicated by the low to moderate correlations among the measures and the differences in relationships across the measures. One exception to this finding was an initial correlation of -.87 between computer anxiety and computer efficacy. As a result of this finding, I combined the two measures into the measure of computer anxiety shown in Table 2 and used it in the subsequent analyses. Tables 3 and 4 also show the general reactions that participants had to the PME. Overall, the perceptions of Liking were slightly below average at post-test, pre-feedback and post-feedback (2.73 and 2.91, respectively). This is not unexpected given that participants also perceived the PME to be on the more difficult end of the scale (2.73 and 2.68, respectively) and that Test Ease and Liking were positively and moderately correlated at post-test, pre-feedback and post-feedback (.46 and .42, respectively).
Fairness Perceptions
The results of the hypotheses relating general test-taking individual differences (Hypotheses 1-3) and openness to experience (Hypothesis 7) are shown in Table 5. The correlations are based on all 202 participants. Hypothesis 1, which stated that test-taking experience would be positively related to post-test perceptions, received weak support, as only liking was positively related to test-taking experience (r=.19). Hypothesis 2, which stated that test-taking efficacy would be positively related to post-test perceptions, received partial support, as liking (r=.20), face validity (r=.19), test ease (r=.20), and self-assessed performance (r=.31) were positively related to test-taking efficacy. Hypothesis 3, which stated that test-taking anxiety would be negatively related to post-test perceptions, received weak support, as only self-assessed performance was negatively related to test-taking anxiety (r=-.29). Hypothesis 7, which stated that openness to experience would be positively related to post-test perceptions, received weak support, as only face validity (r=.28) and consistency (r=.20) were positively related to openness to experience.
The results of the hypotheses relating the computer-related individual differences to post-test perceptions (Hypotheses 4-6) are shown in Table 6. For these tests, only the participants that took the computer version are included (n=107). Hypothesis 4, which stated that computer experience would be positively related to post-test perceptions, received differential support across the technical and basic sub-scales.

Table 5
Correlations between the General Individual Difference Measures and Post-Test Reactions. (The correlation values in this table are not legible in the scanned source and are not reproduced here.)
Table 6
Correlations between the Computer-Related Individual Difference Measures and Post-Test Reactions. (The correlation values in this table are not legible in the scanned source and are not reproduced here.)

While both technical and basic experience were positively related to liking (r=.41 and r=.36, respectively), test ease (r=.37 and r=.35, respectively), and self-assessed performance (r=.42 and r=.47, respectively), basic experience was also related to process fairness (r=.28) and face validity (r=.37). Hypothesis 6, which stated that computer anxiety would be negatively related to post-test perceptions, received good support, as liking (r=-.45), process fairness (r=-.29), face validity (r=-.43), test ease (r=-.41), consistency (r=-.26), and self-assessed performance (r=-.55) were all negatively and significantly related to computer anxiety.
Hypotheses 8a-f, which predicted differences in post-test reactions by Mode of Administration, were tested using one-way ANOVAs. The results from Table 7 show that mode of administration did not have a significant effect on any of the post-test perceptions. Hypothesis 9, which stated that the perceived technology level of the job would moderate the relationship between mode of administration and perceived job-relatedness (face validity and perceived predictive validity), was tested using moderated regression. The results in Table 8 show Hypothesis 9 was not supported, as the interaction was non-significant for both face validity and perceived predictive validity.
A MANOVA was conducted to assess the effect of selection outcome on liking, process fairness, outcome fairness, ease, consistency, interpersonal treatment-administrator, interpersonal treatment-test, face validity, and perceived predictive validity as the set of dependent variables (Hypothesis 10). Pillai's trace indicated the main effect of selection decision was significant (F(9, 192) = 9.02, η² = .297). Table 9 shows the means and standard deviations for these measures by the selection decision.

Table 7
Significance Results of Post-Test Reaction Measures by Mode of Administration.

                                    Computer           Paper-and-Pencil
Measure                             M       SD         M       SD         F-Value
Process fairness                    3.66    .78        3.61    .89        .13
Consistency                         4.14    .49        4.04    .61        1.60
Interpersonal treatment-admin       2.48    .60        2.56    .61        .83
Interpersonal treatment-test        3.22    .69        3.23    .67        .01
Liking                              2.98    .89        2.94    .83        .08
Test ease                           2.85    .87        2.59    .72        5.29
Self-assessed performance           2.99    .67        2.87    .60        1.80

Note. N=107 and 95 in the computer and paper-and-pencil conditions, respectively. All Fs are non-significant at p < .01.

Table 8
Moderated Regression of Perceived Technology of Job and Mode of Administration.

Model                                                 b       β       R²      F       ΔR²     ΔR²F
Face validity
  STEP 1                                                              .01     1.27
    Perceived Technology of Job                       .08     .10
    Mode of Administration                            -.02    -.02
  STEP 2                                                              .02     1.53    .01     2.03
    Perceived Technology of Job X Mode of Admin.      -.04    -.14
Perceived predictive validity
  STEP 1                                                              .03     2.64
    Perceived Technology of Job                       .07     .07
    Mode of Administration                            -.19    -.12
  STEP 2                                                              .03     1.87    .00     .35
    Perceived Technology of Job X Mode of Admin.      -.02    -.06

Note. All beta estimates, Fs, and ΔR²Fs were non-significant.

Table 9
Significance Results of Post-Feedback Reactions by Selection Outcome.

                                    Selected           Rejected
Measure                             M       SD         M       SD         F-Value
Process fairness                    3.90    .86        3.47    .77        13.32*
Outcome fairness                    3.92    .69        3.42    .63        29.29*
Face validity                       3.96    .64        3.62    .63        14.40*
Perceived predictive validity       3.24    .64        2.77    .73        21.76*
Consistency                         4.19    .68        4.06    .59        2.00
Interpersonal treatment-admin       2.44    .54        2.40    .57        .27
Interpersonal treatment-test        3.07    .79        3.37    .73        7.41*
Liking                              3.35    .78        2.62    .85        38.21*
Test ease                           2.99    .86        2.46    .69        23.73*

* p < .05
Note. N=83 for the Selected group and N=119 for the Rejected group.
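A minimal sketch of the one-way MANOVA reported above (selection decision predicting the set of post-feedback reactions) is given below, again using the hypothetical data frame and column names introduced earlier.

    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    df = pd.read_csv("study_data.csv")  # hypothetical file

    mv = MANOVA.from_formula(
        "liking + process_fair + outcome_fair + test_ease + consistency "
        "+ it_admin + it_test + face_valid + ppv ~ C(selected)",
        data=df,
    )
    print(mv.mv_test())  # reports Pillai's trace (among other statistics) for the factor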
Post-hoc analyses indicated that liking, process fairness, outcome fairness, ease, interpersonal treatment-test, face validity, and perceived predictive validity were more positive for selected participants than rejected participants. Thus, Hypothesis 10, which stated that participants who are not selected for the job would have more negative post-feedback perceptions compared to participants who are selected for the job, was mostly supported.
Hypothesis 11, which stated that mode of administration would moderate the relationship between the selection outcome and post-feedback performance attributions (locus, stability, controllability, internal attributions, and external attributions) of the selection test, was tested using moderated regression. Table 10 shows the interaction term was significant for the stability dimension of the Causal Dimension Scale but not for the locus dimension, the controllability dimension, or the external attributions and internal attributions measures. Figure 2 shows the interaction for perceptions of stability. While selected participants' perceptions do not differ across mode of administration, rejected participants in the computer condition perceive their performance as more stable than participants in the paper-and-pencil condition. While selected participants' perceptions were consistent with Hypothesis 11, rejected participants' perceptions were contrary to the hypothesis. Thus, Hypothesis 11 was not supported.

Table 10
Moderated Regression of Selection Outcome and Mode of Administration.

Model                                             b        β       R²      F        ΔR²     ΔR²F
Controllability
  STEP 1                                                           .29     40.71*
    Selection Outcome                             -1.59*   -.54
    Mode of Administration                        -.06     -.02
  STEP 2                                                           .29     27.40*   .00     .84
    Selection Outcome X Mode of Administration    .33      .26
Internal attributions
  STEP 1                                                           .23     29.30*
    Selection Outcome                             -.79*    -.46
    Mode of Administration                        -.14     -.08
  STEP 2                                                           .23     19.47*   .00     .10
    Selection Outcome X Mode of Administration    -.07     -.09
External attributions
  STEP 1                                                           .02     2.39
    Selection Outcome                             .22*     .14
    Mode of Administration                        .07      .04
  STEP 2                                                           .02     1.61     .00     .08
    Selection Outcome X Mode of Administration    -.06     -.09

* p < .05
(The locus and stability rows of this table are not legible in the scanned source and are not reproduced here.)

Figure 2. Selection Decision and Mode of Administration Interaction (perceptions of stability plotted against mode of administration for rejected and selected participants).

I also examined the reasons given by participants across all conditions. Most participants stated that they did or did not get the job due to their ability to either perform or not perform up to the standards of the test. Of the remaining reasons, participants gave a variety of explanations for their selection decision. A few selected and rejected participants attributed their success or failure to experience or lack of experience with work similar to that in the in-basket test. Similarly, two participants cited a lack of computer experience as why they were not selected for the job. Some rejected participants also cited a lack of interest in the job or motivation in general as to why they were not selected.
A few rejected participants noted that the test was not job-related to the position described or, more generally, to "any real world job." Two participants also cited external reasons such as hunger and lack of sleep. A number of rejected participants also cited a lack of clear instructions as a reason why they were not able to perform well. Finally, one rejected participant indicated that "It's hard to learn from a computer - very impersonal" as the reason for not being selected.
Hypotheses 12a-c, which stated that job acceptance, recommendation intentions, and purchase intentions would be positively (negatively for interpersonal treatment) related to post-feedback perceptions, were tested using correlational analyses. Table 11 shows that 22 of the 27 relationships were significant and in the expected direction. The magnitude of the significant correlations between the post-feedback reactions and intentions was low to moderate in size: recommendation intentions (average r = .40, range .20 - .70), purchase intentions (average r = .33, range .18 - .44), and acceptance intentions (average r = .24, range .17 - .43). The main source of the non-significant relationships was interpersonal treatment-administrator (non-significant for purchase and acceptance intentions) and acceptance intentions (non-significant for consistency and face validity).

Table 11
Intercorrelations of the Intention Measures and the Post-Feedback Reactions. (The correlation values in this table are not legible in the scanned source and are not reproduced here.)

Additional Analyses
Fairness Interaction on Intentions
Similar to previous literature (Gilliland, 1994; Ployhart & Ryan, 1997; see Brockner & Wiesenfeld, 1996, for a review), I also tested for an interaction between selection outcome/outcome fairness and process fairness on behavioral intentions (recommendation, purchase, and job acceptance intentions). The expected interaction is that process fairness is more positively related to intentions when outcome fairness is low. In addition, outcome fairness is more positively related to intentions when process fairness is low. I examined both selection outcome and outcome fairness, similar to Ployhart and Ryan (1997). As they note, selection outcome and outcome fairness are not synonymous, as one can perceive a negative outcome as fair. In addition, I expected differences in the interactions for a number of reasons. One reason for expecting differences in the results is that one of the outcome indicators is a subjective variable (outcome fairness) while the other is an objective variable (selection outcome). I also expected differences in results as one of the outcome indicators is a dichotomous variable (selection outcome), while the other is a continuous variable (outcome fairness). As the power to detect an interaction involving a dichotomous variable is lower compared to a continuous variable (McClelland & Judd, 1993), I would expect a different pattern of significant results.
For the Selection Outcome by Process Fairness interaction (Table 12), the interaction term for acceptance intentions was significant, but not for recommendation intentions or purchase intentions.
Figure 3 shows the source of the interaction for acceptance intentions. As expected, process fairness is more positively related to acceptance intentions for rejected participants compared to selected participants. For the Outcome Fairness by Process Fairness interaction (Table 13), the interaction terms were significant for recommendation intentions, purchase intentions, and acceptance intentions. Figure 4 shows the source of the interaction for acceptance intentions. The difference between high and low outcome fairness is smaller when process fairness is low compared to when process fairness is high. The same type of interaction was also found for recommendation intentions and purchase intentions.

Table 12
Moderated Regression of Selection Outcome and Process Fairness.

Model                                        b        β       R²      F        ΔR²     ΔR²F
Recommendation intentions
  STEP 1                                                      .28     38.17*
    Selection Outcome                        .12      .067
    Process Fairness                         .70*     .22
  STEP 2                                                      .28     25.72*   .00     .87
    Selection Outcome X Process Fairness     -.13     -.28
Purchase intentions
  STEP 1                                                      .23     28.97*
    Selection Outcome                        -.13     -.12
    Process Fairness                         .29*     .44
  STEP 2                                                      .23     19.23*   .00     .04
    Selection Outcome X Process Fairness     -.02     -.07
Job acceptance intentions
  STEP 1                                                      .13     14.63*
    Selection Outcome                        -2.48*   -1.22
    Process Fairness                         -.62*    -.52
  STEP 2                                                      .17     13.39*   .04     9.63*
    Selection Outcome X Process Fairness     .50*     1.00

* p < .05

Figure 3. Process Fairness and Selection Decision Interaction (acceptance intentions plotted against process fairness for rejected and selected participants).

Figure 4. Process Fairness and Outcome Fairness Interaction (acceptance intentions plotted against process fairness for unfair and fair outcomes).

Table 13
Moderated Regression of Outcome Fairness and Process Fairness.

Model                                        b        β       R²      F        ΔR²     ΔR²F
Recommendation intentions
  STEP 1                                                      .25     33.84*
    Outcome Fairness                         -.40     -.30
    Process Fairness                         -.14     -.13
  STEP 2                                                      .27     24.56*   .02     4.73*
    Outcome Fairness X Process Fairness      .17*     .91
Purchase intentions
  STEP 1                                                      .21     26.76*
    Outcome Fairness                         -.27     -.34
    Process Fairness                         -.20     -.30
  STEP 2                                                      .24     20.45*   .03     6.37*
    Outcome Fairness X Process Fairness      .12*     1.08
Job acceptance intentions
  STEP 1                                                      .05     5.28*
    Outcome Fairness                         -.52     -.36
    Process Fairness                         -.59     -.49
  STEP 2                                                      .07     5.18*    .02     4.80*
    Outcome Fairness X Process Fairness      .21*     1.04

* p < .05

Determinants of Performance
I also examined the variables that influence participant performance and the subsequent selection outcome. I attempted to assess the influence of the general individual difference variables (test anxiety, test self-efficacy, test experience) on participants' test performance. While these more general measures should affect test performance, as suggested by previous research (see Ryan, 2000, for a review), I was also interested in assessing whether the computer-related measures (computer anxiety, computer self-efficacy, computer experience - basic and technical) would add to the prediction of performance for participants taking the computerized test. For these analyses, I entered the individual difference variables (test anxiety, test self-efficacy, test experience) in STEP 1 and the computer-related variables (computer anxiety, computer experience - basic and technical) in STEP 2 of regressions predicting both test performance and the selection outcome. I also ran separate sets of regressions for those taking the paper-and-pencil version and those taking the computerized version.
The expectation was that the general test measures would significantly predict test performance and the selection outcome for both groups, but that the computer-related variables would only add to the prediction of these dependent measures in the computer condition. The results of these analyses for the paper-and-pencil and computer conditions are in Tables 14 and 15, respectively. The results indicated that the general test-taking variables did not predict either test performance or the selection outcome in either the paper-and-pencil or the computer condition. The results also indicate that, as expected, the computer-related variables did not add to the prediction of either test performance or selection outcome for the paper-and-pencil condition. As expected, the computer-related variables did significantly add to the prediction of both test performance and selection outcome for the computer condition. While the overall ΔR² was significant for both test performance and selection outcome, only one beta was significant in both regressions (i.e., computer experience-technical for test performance). The beta (b=.31) indicates that participants possessing more of the technical components of computer experience perform better than participants with less technical experience.

Table 14
Regression Analyses of Selection Outcome and Test Performance for the Paper-and-Pencil Condition.

Model                                   b       β       R²      F       ΔR²     ΔR²F
Selection Outcome
  STEP 1                                                .02     .69
    Test experience                     .06     .13
    Test efficacy                       .03     .05
    Test anxiety                        .01     .05
  STEP 2                                                .09     1.49    .07     2.27
    Computer experience-technical       -.02    -.03
    Computer experience-basic           -.22    -.25
    Computer anxiety                    .07     .12
Test Performance
  STEP 1                                                .05     1.67
    Test experience                     -.19    -.20
    Test efficacy                       -.10    -.07
    Test anxiety                        -.01    -.04
  STEP 2                                                .09     1.37    .03     1.07
    Computer experience-technical       .07     .05
    Computer experience-basic           .35     .19
    Computer anxiety                    -.05    -.04

Note. For Selection Outcome: 1=Selected, 2=Rejected. All beta estimates, Fs, and ΔR²Fs were non-significant.

Table 15
Regression Analyses of Selection Outcome and Test Performance for the Computer Condition.

Model                                   b       β       R²      F       ΔR²     ΔR²F
Selection Outcome
  STEP 1                                                .04     1.40
    Test experience                     -.06    -.10
    Test efficacy                       .04     .05
    Test anxiety                        .02     .17
  STEP 2                                                .16     3.16*   .12     4.76*
    Computer experience-technical       -.11    -.18
    Computer experience-basic           -.04    -.05
    Computer anxiety                    .12     .25
Test Performance
  STEP 1                                                .04     1.31
    Test experience                     -.04    -.04
    Test efficacy                       -.02    -.01
    Test anxiety                        -.05    -.20
  STEP 2                                                .14     2.80*   .11     4.16*
    Computer experience-technical       .31*    .26
    Computer experience-basic           .12     .07
    Computer anxiety                    -.10    -.11

* p < .05
Note. For Selection Outcome: 1=Selected, 2=Rejected.

Determinants of Reactions
In addition to examining the incremental validity of the computer-related measures in predicting test performance and the selection outcome, I also assessed the incremental validity provided by the computer-related measures in predicting post-test reactions. I examined these relationships by partialling out either the general test-taking individual difference measures (i.e., test-taking experience, test anxiety, test self-efficacy) or test ease. These measures were selected as they were the most likely variables to influence post-test reactions.
To examine these relationships, I ran a series of regression analyses for each computer-related measure in which I entered either the general test-taking measures (i.e., test experience, test anxiety, and test self-efficacy) or test ease in STEP 1 and the computer-related measure in STEP 2. Controlling for either the general test-taking measures or test ease produced the same results in that computer experience-technical produced a significant ΔR² for both liking (ΔR² = .08 and .05; beta = .30 and .23, respectively) and self-assessed performance (ΔR² = .08 and .05; beta = .30 and .23, respectively). Controlling for either the general test-taking measures or test ease also produced the same results for computer experience-basic in that computer experience-basic produced a significant ΔR² for both face validity (ΔR² = .06 and .11; beta = .30 and .35, respectively) and self-assessed performance (ΔR² = .07 and .08; beta = .32 and .30, respectively). Finally, controlling for either the general test-taking measures or test ease also produced the same results for computer anxiety in that computer anxiety produced a significant ΔR² for face validity (ΔR² = .11 and .16; beta = -.40 and -.43), consistency (ΔR² = .07 and .07; beta = -.33 and -.30), liking (ΔR² = .09 and .05; beta = -.36 and -.25, respectively), and self-assessed performance (ΔR² = .12 and .11; beta = -.42 and -.37, respectively). While some of the correlations that were significant between the computer-related measures and post-test reactions became non-significant in these analyses, most remained significant. Thus, after controlling for either the general test-taking measures or perceptions of test ease, the computer-related measures add significant variance in predicting post-test reactions.
DISCUSSION
The results of the study suggest that implementing new selection tools will likely proceed without major effects on applicant reactions, but there are factors that practitioners and researchers should consider. I will begin the discussion by first addressing the results for the mode of administration, job type, and selection outcome manipulations. Next, I will address the relationships between various components of the framework: individual differences and attitudes, individual differences and performance, and attitudes and intentions. I will then discuss limitations of the current study, future directions, and implications for implementing novel technologies.
Mode of Administration
Results indicated that mode of administration did not have an effect on participants' post-test reactions. This finding is contrary to previous research, which has shown differences in reactions across paper-and-pencil and computer tests (e.g., Landis et al., 1998; Ogilvie et al., 1999; Arvey et al., 1993; Martin & Nagao, 1989). There are a number of differences between the current study and these other findings that make a clear explanation for the discrepancy difficult. First, the other studies assessed these relationships using three different types of tests: computer adaptive cognitive ability (Arvey et al., 1993; Landis et al., 1998), interview (Martin & Nagao, 1989), and computerized multiple choice exams (Ogilvie et al., 1999). Second, although three of the studies used college samples, Arvey et al. (1993) examined reactions using military recruits. Finally, only two of the four studies (i.e., Arvey et al., 1993; Martin & Nagao, 1989) examined the differences in reactions in an applicant setting.
It will be important for future studies to explore the impact of these differences on reactions to novel testing. Although research has shown some evidence for differences across some of the reactions variables (liking, fairness, consistency, interpersonal treatment), the hypotheses related to test ease and self-assessed performance were based on the generally positive reactions test takers have towards computerized testing in previous research (e.g., Barbera et al., 1995; Frank, 1993; Arvey et al., 1990; Ogilvie et al., 1999). As Kluger and Rothstein (1993) found a relationship between test difficulty and negativity towards the task, I put forth the idea that test takers' documented preference for computerized testing may be due to the fact that they think such tests are easier than paper-and-pencil tests. The easier participants think the test is, the more likely they will be to assess that they performed well on the test. In the current study, there were no differences between conditions in test ease, which may be why they did not differ in their overall liking of the test or self-assessed performance. Situations in which the tests are more dissimilar may produce differences in the perceived difficulty of the test that could affect test-takers' ability to accurately assess their performance as well as their overall liking of the test. Thus, difficult tests may lead to an applicant's uncertainty about how he/she performed relative to other applicants, which, in turn, may provoke more negative applicant reactions to the test. Additional analyses indicated that controlling for test ease had some influence on the relationship between individual difference measures and post-test reactions. Thus, it is important for test administrators to keep the influence of test difficulty in mind when trying to predict reactions. While a difficulty level is inherent in tests, it is important for practitioners to acknowledge the uncertainty that may arise from the test difficulty.
The lack of differences for the other variables (i.e., process fairness, consistency, and interpersonal treatment) may have been due to the type of test that was used in the current study. To examine these variables, researchers have used computer adaptive tests, interviews, and multimedia tests. As research has shown substantial variance in people's reactions across types of tests (Rynes & Connerly, 1993; Steiner & Gilliland, 1996), this may be a source of variance in reactions across studies that accounts for the lack of differences across mode of administration in the current study.
Results also indicated that mode of administration did not moderate the relationship between the selection outcome and post-feedback performance attributions. This result is not surprising considering the large main effect of the selection outcome on all five measures of attributions.
Overall, the results of the current study suggest that practitioners implementing new computerized selection tools should not be overly concerned with potential differences in applicants' reactions. It is important to note, however, that the current study used only one type of test on a college sample who are likely to have different experiences with computers than the general population. Thus, practitioners should still consider the type of test they are implementing and the type of applicants who will be taking the test when anticipating reactions to novel tests.
Perceived Level of Technology (Job Type)

The finding that the perceived level of technology did not moderate the relationship between mode of administration and perceptions of job relatedness (face validity and perceived predictive validity) was surprising. Previous research strongly supports the notion that perceptions of job relatedness should be affected by the match between the selection test and the job for which an applicant is applying (Lucas, 1977; Martin & Nagao, 1989; Murphy et al., 1991; Murphy et al., 1990; Kravitz et al., 1994). One explanation for this result is the lack of a pure manipulation of the perceived level of technology between the two jobs, which was noted earlier. The manipulation did lead participants to view the high-tech job as involving more technology than the low-tech job, but both jobs were rated as having higher than average levels of technology. The problem is that participants taking the computerized test viewed the job as involving more technology than participants taking the paper-and-pencil test. Thus, participants attended to differences in the job descriptions given in the instructions but also gathered information about the job from the type of test they knew they would be taking. While this outcome resulted in a tainted manipulation, it does provide useful insight into how test takers' perceptions of job relatedness may have been formed. To the degree that applicants have limited information about the requirements of the job to which they are applying, they may look to aspects of the selection system to understand what the job may involve.

Selection Outcome

As expected, the selection decision had a significant impact on most of the post-test reaction variables and attribution measures. This result is consistent with previous research that indicates a "self-serving bias" (Chan et al., 1998; Horvath et al., in press; Ployhart & Ryan, 1997). Participants who are selected like to think the test was fair, valid, and so on, which helps to maintain their self-image. In order to maintain their self-image, participants who are not selected reflect negatively on the selection test, which diverts blame for the outcome away from themselves.

The current study did not support the hypothesis that mode of administration moderates the relationship between the selection outcome and post-feedback performance attributions for the selection test. While a significant interaction was found for perceptions of stability, the nature of the interaction was contrary to the hypothesis. This finding is not surprising given the presence of a strong main effect of selection outcome on attributions. Participants' attributions appear to depend more on the outcome of the procedure than on the mode of administration.

The results from the Causal Dimension Scale (CDS) did indicate differences in participants' attributions for their success or failure. A subset of the participants cited the nature of the test as the reason why they were hired or not. Some participants noted that their experience with the test content influenced their performance, while others noted that the test was not job-related. In addition, a few participants cited a lack of experience with computers and the impersonal nature of learning from computers as reasons why they were not hired.
While most participants gave reasons that I would expect from most test takers, the subset of people who gave alternative reasons suggests that practitioners should be aware that alternative attributions are always possible. As most external attributions will likely come from rejected participants, practitioners should be aware of these possibilities and try to address the concerns before implementing a new selection tool.

The results also supported a process by outcome interaction found in previous research (Ployhart & Ryan, 1997; Horvath et al., in press; see Brockner & Wiesenfeld, 1996, for a review). While the pattern of the interaction between selection outcome and process fairness on acceptance intentions was expected, the pattern of the three significant interactions between outcome fairness and process fairness on intentions was unexpected. The unexpected cell was that, under low outcome fairness, the mean for high process fairness was not significantly higher than when outcome fairness was high. One explanation for these results is that the context in which rejected participants are asked these questions may be an odd evaluation for them. In many cases, a selection context is a one-shot interaction between an organization and an applicant. In today's job market, applicants likely have many options available to them, so when asked these questions they may discount them and simply find employment elsewhere. In the current study, rejected participants were given the selection decision and then immediately asked whether they would take a job if it were now offered to them. In addition, I asked about their intentions to recommend the experiment and to purchase products from this organization. In a context where applicants are internal to the organization or the job market is tight, I would expect applicants to respond differently to these types of questions. In these contexts, applicants know they may have future interactions with the organization and so will respond with a different frame of reference.

The results of the overall interactions highlight the importance of attending to both process fairness and outcome fairness. In a selection setting, some applicants will inevitably be rejected for the position and will likely have negative perceptions of this outcome. For rejected participants, it becomes important for the selection system to have fair procedures for making the selection decisions. Researchers have suggested numerous ways to increase perceptions of process fairness (Leventhal, 1980; Gilliland, 1994; Bies & Moag, 1986; Thibaut & Walker, 1975). Factors such as voice, interpersonal treatment, timely feedback, lack of bias, and consistency have been shown to be important predictors of perceptions of the fairness of procedures. The current study suggests that researchers and practitioners should attend to these factors when designing and implementing selection systems.

Individual Differences and Attitudes

The current study supported the hypotheses that individual differences in measures relating to testing in general, as well as to computers, are important in predicting reactions to selection systems. For the measures relating to general test-taking individual differences, the current study supported the finding that test-taking efficacy is related to reactions surrounding the test (e.g., Bauer et al., 1998; Gilliland, 1994; Gist et al., 1989).
Test-taking efficacy was positively related to participants' general liking of the test as well as their perceptions of face validity. Participants with high test-taking efficacy may be less likely to search for "flaws" in testing situations, as they are confident that they can handle the requirements of the test. Participants low in test-taking efficacy expect that they will not do well and so may search for factors that could account for their performance. Openness to experience was also positively related to face validity. This result supports previous suggestions that resistance to new systems and technologies stems from a fear of change and of the unknown (Faerstein, 1986; Henderson et al., 1995). In the current study, it may be that participants who are more open to new ideas and experiences are less resistant to the suggestion that the novel test is related to the job for which they are taking it and less prone to expect only certain types of tests.

Finally, results also indicated that both test-taking self-efficacy and test anxiety were related to participants' self-assessed performance. These results are consistent with the conceptual nature of these variables, which suggests that participants who are anxious and low in self-efficacy have more off-task thoughts, such as negative self-thoughts, fear of social disapproval, and heightened arousal (Hodapp et al., 1995). The more off-task thoughts test takers have, the less likely they will be able to judge their success in the selection system. While I did not find a relationship between self-assessed performance and fairness perceptions, research has shown that the ability of participants to judge their performance is important in determining fairness perceptions (Chan et al., 1998; Kluger & Rothstein, 1993; Macan et al., 1994; Ployhart & Ryan, 1997; Rynes & Connerley, 1993). Thus, it is important to further understand the types of individual differences that may be important in determining test takers' ability to assess their potential success in a selection system.

The results for the computer-related measures are similar to previous research that has shown their importance in explaining reactions to computers (Hill et al., 1987; Igbaria & Chakrabarti, 1990; Levine & Donitsa-Schmidt, 1997). Results indicated that both technical and basic computer experience and computer anxiety were related to fairness perceptions (process fairness, face validity) as well as to general perceptions of liking and test ease. Also similar to the general measures, all three measures were significantly related to self-assessed performance. Results also indicated that the measures of anxiety and efficacy across the general test measures and computer-related measures showed discriminant validity.

Additional analyses also indicated that the computer-related measures still provide incremental validity in predicting reactions after accounting for the general test-taking measures and test ease (a minimal sketch of this type of incremental-validity analysis appears below). These analyses indicated that although some of the relationships were no longer significant or were slightly weaker after controlling for these measures, most relationships remained significant. Thus, when assessing reactions to computerized testing, researchers should assess both types of individual difference measures. For practitioners, these findings are important as they suggest an additional set of factors to consider when implementing a computer rather than a paper-and-pencil test.
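To make the Step 1/Step 2 logic of these incremental-validity analyses concrete, the following Python sketch shows how a change in R² can be computed and tested when a computer-related measure is added after the general test-taking measures. It is a hypothetical illustration with simulated data, not the study's actual code; the column names stand in for the study's measures.

# Hypothetical sketch of a hierarchical (Step 1 / Step 2) regression testing
# whether a computer-related measure adds variance beyond general test-taking measures.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 150
df = pd.DataFrame(
    rng.normal(size=(n, 4)),
    columns=["test_anxiety", "test_efficacy", "test_experience", "computer_anxiety"],
)
# Simulate a post-test reaction (e.g., face validity) partly driven by computer anxiety.
df["face_validity"] = (
    -0.4 * df["computer_anxiety"] + 0.2 * df["test_efficacy"] + rng.normal(size=n)
)

step1 = smf.ols(
    "face_validity ~ test_anxiety + test_efficacy + test_experience", data=df
).fit()
step2 = smf.ols(
    "face_validity ~ test_anxiety + test_efficacy + test_experience + computer_anxiety",
    data=df,
).fit()

delta_r2 = step2.rsquared - step1.rsquared
f_stat, p_value, df_diff = step2.compare_f_test(step1)  # F-test for the R-squared change
print(f"Delta R^2 = {delta_r2:.3f}, F = {f_stat:.2f}, p = {p_value:.4f}")
print(f"Step 2 weight for computer_anxiety: {step2.params['computer_anxiety']:.2f}")

The same two-step comparison can be repeated with test ease as the Step 1 control, or with a computer experience measure in place of computer anxiety, which parallels the series of analyses reported in the Results.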
Practitioners would need to be concerned not only with applicants' general orientation toward tests but also with the types and amount of computer experience that applicants bring to the selection process.

Individual Differences and Performance

Results indicated that neither performance on the in-basket test nor the selection outcome was predicted by measures such as test anxiety, test efficacy, and test experience, which have been shown to affect test performance (Hembree, 1988; Sadri & Robertson, 1993). An explanation for this finding is that the measures used to assess these constructs were about tests in general. In the current study, I used a test that was meant to be unfamiliar to test takers, so when participants were responding to these questions, they may not have been able to answer with this type of test in mind. If I had used items more specific to the type of test, or if the test had been more traditional (e.g., a cognitive ability test), I would likely have found a relationship between these measures and test performance.

While the general test measures did not predict performance or the selection outcome, the computer-related measures significantly predicted both, consistent with previous research (Czaja & Sharit, 1993; Karsten & Roth, 1998; Shermis & Lombard, 1998). These results are consistent with the suggestion that these variables may handicap the performance of test takers through feelings of uneasiness and irrelevant thoughts during the test. Of interest was the finding that the computer-related measures predicted significant variance in performance and the selection outcome after accounting for the general test measures. These results provide further evidence that individual differences in computer-related measures capture unique variance when assessing the impact of using computerized testing.

As these measures can affect applicants' test performance, we need to better understand how these computer-related individual differences develop and how to change them. If tests that involve computers are not supposed to measure differences in experience, then such a test may be selecting applicants on the basis of computer experience without the organization knowing it. Biases such as this would likely pose a legal problem for an organization that attempts to use such a test. Thus, organizations should be prepared either to provide the type of computer training needed to eliminate these differences or to eliminate these biases from the selection test.

Attitudes and Intentions

The relationship between attitudes and intentions found in previous research (e.g., Bauer et al., 1998; Gilliland, 1993; Ryan et al., 1996; Schmit & Ryan, 1997) was supported by the current study. Results also indicated that participants discriminated between the different types of intentions measured in the current study (i.e., recommendation, purchase, and acceptance intentions), as the measures were only moderately intercorrelated. Organizations attempting to attract and hire the best applicants need to consider all aspects of the selection system that may affect applicants' attitudes toward the selection process and the organization. While the current study and previous research (e.g., Ajzen & Fishbein, 1980) have shown the relationship of attitudes to intentions, it will be important for future research to consider actual behavioral outcomes that result from these attitudes.

Limitations

The current study has some limitations that should be noted. One limitation of the study is the use of a simulated applicant setting.
While researchers would ideally like to use real applicants in an actual selection setting, the lab setting does provide some advantages for addressing applicant reactions. By studying the issue of applicant reactions in a laboratory setting, I was able to address many more questions of interest in order to gain a broader perspective on what may influence reactions to novel technologies. In addition, the laboratory allows a greater amount of control than is typically found in applied settings. It is highly unlikely that an organization would allow a study to randomly assign applicants to conditions and to measure these types of variables, which may attune applicants to fairness issues surrounding the selection system. Recognizing the limitations of a laboratory setting, I attempted to ensure that participants would be highly motivated to perform well on the in-basket test by using an incentive. As noted earlier, the use of a $15 incentive was successful in motivating participants to try to perform their best on the selection test.

Another limitation of the current study was that the manipulation of the perceived technology level of the job (job type) was tainted by a main effect of mode of administration. Thus, it was not surprising that the predicted interaction between mode of administration and perceived technology level of the job on perceived job relatedness was non-significant. As noted earlier, while this result was unexpected, it does shed light on how participants may form perceptions of the job relatedness of a selection system. Without a detailed job description, applicants may use aspects such as the technology involved in the selection tests as cues with which to judge the validity of the selection system. A variety of means could have been used to strengthen the manipulation. One would be to tell participants directly that the job either required a high level of technology or did not involve any technology at all. Another would be to provide a much richer description of the job and a better explanation of the type of test they would be taking. I would suggest that making both types of changes would strengthen the manipulation by clearing up ambiguity that may have existed in the current study.

The current study may also have been limited by the nature of the sample used. The sample consisted of college students who indicated a relatively substantial amount of basic computer experience (3.9 out of 5). In addition, the sample was young (the average age was 20) and was 76% female and 73% White. As age, gender, and education have been shown to influence attitudes toward computers (Comber et al., 1997; Igbaria & Chakrabarti, 1990; Parasuraman & Igbaria, 1990; Pope-Davis & Twing, 1991; Shashaani, 1997; Torkzadeh & Angulo, 1992), stronger or different effects might be found in a more diverse sample.

Future Directions

As a result of the current study, there are a number of areas that deserve further investigation. In researching the many issues surrounding computer perceptions, the general picture is that much of the research is outdated. While there is increasing research on computer-related issues, more research needs to be conducted to assess the impact that rapid changes in technology have on the relevance of the research that has already been conducted. Sharing this concern, Dyck, Gee, and Smither (1997) explored changes in a measure of computer anxiety that was originally developed in the late 1980s.
They found that the factor structure changed over time, as well as age differences in the factor loadings of two of the twenty items. While not conclusive, these results suggest that people's interactions with computers have changed along with the changes in the technology. As the spread and advancement of computers will only keep increasing, it is important for researchers to attempt to keep up with the change.

Another issue to pursue in future research is the relationship of the general test-taking measures and the computer-related measures in predicting reactions and applicant performance. The current study suggests that both sets of measures are important in understanding reactions and that the computer-related measures provide unique variance in predicting participants' success in the selection process. Important issues to pursue are understanding what each set of measures is capturing and what can be done to address these differences.

Another area that deserves attention is the reactions applicants may have to the different types of selection tools being designed and implemented in organizations. While some researchers have begun to address such issues as web-based testing (Stanton, 1999), there are many other types of new selection tools, such as the in-basket test used in the current study, that applicants have likely not experienced. Future research needs to continue to examine new types of selection tools being developed and the determinants and outcomes of applicants' reactions to them. As research has shown that applicants react differently across a number of selection tools, it is important to revisit these differences as new tools are implemented.

While computer use is on the rise, there are still large sub-groups (e.g., minorities, age groups) in applicant populations that do not interact with computers. Future work should try to conduct studies in actual applicant populations in order to see how these variables affect reactions and applicant performance. Conducting studies with applicants will also allow researchers to see how these reactions affect applicants' behaviors, such as withdrawing from the selection process, recommending the organization to others, and willingness to accept a job offer. Combining both lab and field studies will allow researchers to provide convergent evidence of the various factors that determine applicant reactions and to provide insights to guide future research when differences arise.

While only qualitative in nature, the current study also indicated differences in the types of attributions participants may make. Due to the nature of the test used in the current study, it was not surprising that some of these attributions related to the content and presentation mode of the test. If these types of reactions can be predicted, it is important for future research to investigate the factors leading to these attributions and the actions organizations can take to deal with them.

Suggestions for Implementation

The current study suggests some issues that researchers and practitioners should attend to when implementing new types of selection tools. One suggestion is that designers need to consider the impact that individual differences may have on applicants' perceptions of the selection test as well as on their performance. In the current study, individual differences such as computer experience and computer anxiety were shown to be significant predictors of test performance and the selection outcome.
The issue for organizations is that if important differences do exist, what can be done about them? It will be important for practitioners to understand which individual differences can be changed through simple interventions and which can be targeted through selective recruitment or pre-selection. Igbaria and Chakrabarti (1990) suggest that making the system friendly to all applicants and/or providing training are some of the ways to reduce anxiety and improve attitudes toward the system. Practitioners should continue to consider similar types of interventions when using novel selection tests.

Another suggestion is to weigh the costs and benefits of new types of selection tools before implementing them. Along with previous research, the current study highlighted the importance of the relationship between attitudes toward the selection test and participants' intentions. Without considering these issues, organizations may design and implement costly selection tools that compromise the validity and utility of the selection system. In discussing the implementation of civil service applicant exams on the Internet, Coffee et al. (1999) make a suggestion: "A high level of communication with potential applicants is needed. Any time radical changes are made to a time-worn traditional process, there is a need to provide detailed information to interested individuals. Persons who have grown accustomed to traditional tests will have numerous questions, complaints, and suggestions regarding the exam format, examining philosophy, technical problems, etc." (p. 296)

REFERENCES

Adams, J. S. (1965). Inequity in social exchange. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 2, pp. 267-299). New York: Academic Press.

Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting social behavior. Englewood Cliffs, NJ: Prentice-Hall.

American Psychological Association. (1986). Guidelines for computer-based tests and interpretations. Washington, DC: Author.

Angle, H. V., Ellinwood, E. H., Hay, W. M., Johnsen, T., & Hay, L. R. (1977). Computer-aided interviewing in comprehensive behavioral assessment. Behavior Therapy, 8, 747-754.

Arnetz, B. B. (1997). Technological stress: Psychophysiological aspects of working with modern information technology. Scandinavian Journal of Work, Environment and Health, 23, 97-103.

Arvey, R. D., Strickland, W., Drauden, G., & Martin, C. (1990). Motivational components of test taking. Personnel Psychology, 43, 695-716.

Ashworth, S. D., & McHenry, J. J. (1992, September). Development of a computerized in-basket to measure critical job skills. Paper presented at the fall meeting of the Personnel Testing Council of Southern California, Newport Beach, CA.

Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84, 191-215.

Bandura, A. (1997). Self-efficacy: The exercise of control. New York: W. H. Freeman and Company.

Barbera, K. M., Ryan, A. M., Burris Desmarais, L., & Dyer, P. J. (1995). Multimedia employment tests: Effects of attitudes and experiences on validity. Unpublished manuscript.

Bartram, D., & Bayliss, R. (1984). Automated testing: Past, present, and future. Journal of Occupational Psychology, 57, 221-237.

Bauer, T. N., Maertz, C. P., Jr., Dolen, M. R., & Campion, M. A. (1998). Longitudinal assessment of applicant reactions to employment testing and test outcome feedback. Journal of Applied Psychology, 83, 892-903.
Bies, R. J., & Moag, J. S. (1986). Interactional justice: Communication criteria of fairness. In R. J. Lewicki, B. H. Sheppard, & M. H. Bazerman (Eds.), Research on negotiation in organizations (pp. 43-55). Greenwich, CT: JAI Press.

Bloom, A. J., & Hautaluoma, J. E. (1990). Anxiety management training as a strategy for enhancing computer user performance. Computers in Human Behavior, 6, 337-349.

Brockner, J., & Wiesenfeld, B. M. (1996). An integrative framework for explaining reactions to decisions: Interactive effects of outcomes and procedures. Psychological Bulletin, 120, 189-208.

Brutus, S., & Ryan, A. M. (1996). Individual characteristics as determinants of the perceived job relatedness of selection procedures. Unpublished manuscript.

Burke, M. J., & Normand, J. (1987). Computerized psychological testing: Overview and critique. Professional Psychology: Research and Practice, 18, 42-51.

Burke, M. J., Normand, J., & Raju, N. S. (1987). Examinee attitudes toward computer-administered ability testing. Computers in Human Behavior, 3, 95-107.

Chan, D., & Schmitt, N. (1997). Video-based versus paper-and-pencil method of assessment in situational judgment tests: Subgroup differences in test performance and face validity perceptions. Journal of Applied Psychology, 82, 289-299.

Chan, D., Schmitt, N., DeShon, R. P., Clause, C. S., & Delbridge, K. (1997). Reactions to cognitive ability tests: The relationships between race, test performance, face validity perceptions, and test-taking motivation. Journal of Applied Psychology, 82, 300-310.

Chan, D., Schmitt, N., Jennings, D., Clause, C. S., & Delbridge, K. (1998). Applicant perceptions of test fairness: Integrating justice and self-serving bias perspectives. International Journal of Selection and Assessment, 6, 232-239.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Erlbaum.

Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155-159.

Comber, C., Colley, A., Hargreaves, D. J., & Dorn, L. (1997). The effects of age, gender, and computer experience upon computer attitudes. Educational Research, 39, 123-133.

Compeau, D. R. (1992). Individual reactions to computing technology: A social cognitive theory perspective (Doctoral dissertation, The University of Western Ontario, 1992). Dissertation Abstracts International, 54, 0-315-75324-2.

Compeau, D. R., & Higgins, C. A. (1995). Computer self-efficacy: Development of a measure and initial test. MIS Quarterly, 19, 189-211.

Costa, P. T., & McCrae, R. R. (1992). Normal personality assessment in clinical practice: The NEO Personality Inventory. Psychological Assessment, 4, 5-13.

Cropanzano, R., & Ambrose, M. L. (1996). Procedural and distributive justice are more similar than you think: A monistic perspective and a research agenda. Paper presented at the annual meeting of the Society for Industrial and Organizational Psychology, San Diego, CA.

Cunningham, M. R. (1989). Test-taking motivations and outcomes on a standardized measure of on-the-job integrity. Journal of Business and Psychology, 4, 119-127.

Czaja, S. J., & Sharit, J. (1993). Age differences in the performance of computer-based work. Psychology and Aging, 8, 59-67.

Dailey, R. C., & Kirk, D. J. (1992). Distributive and procedural justice as antecedents of job dissatisfaction and intent to turnover. Human Relations, 45, 305-317.

Daum, D. L. (1994). Examinee attitudes toward paper-and-pencil and computerized versions of cognitive ability tests and biodata inventories. Unpublished doctoral dissertation, Bowling Green State University.
Denner, S. (1977). Automated psychological testing: A review. British Journal of Social and Clinical Psychology, 16, 175-179.

Dodd, W. E. (1977). Attitudes toward assessment center programs. In J. L. Moses & W. C. Byham (Eds.), Applying the assessment center method. New York: Pergamon Press.

DuBois, P. H. (1970). A history of psychological testing. Boston: Allyn & Bacon.

Dyck, J. L., Gee, N. R., & Smither, J. A. (1997). The changing construct of computer anxiety for younger and older adults. Computers in Human Behavior, 14, 61-77.

Elder, V. B., Gardner, E. P., & Ruth, S. R. (1987). Gender and age in technostress: Effects on white collar productivity. Government Finance Review, 3, 17-21.

Erdman, H. P., Klein, M. H., & Greist, J. H. (1985). Direct patient computer interviewing. Journal of Consulting and Clinical Psychology, 53, 760-773.

Evan, W. M., & Miller, J. R. (1969). Differential effects on response bias of computer vs. conventional administration of a social science questionnaire: An exploratory methodological experiment. Behavioral Science, 14, 216-227.

Faerstein, P. H. (1986). Fighting computer anxiety. Personnel, 63, 12-17.

Farmer, S. J., Beehr, T. A., & Love, K. G. (1998). Effect of selection system fairness on post-selection behavior and attitudes. Paper presented at the annual conference of the Society for Industrial and Organizational Psychology, Dallas, TX.

Fleishman, E. A. (1988). Some new frontiers in personnel selection research. Personnel Psychology, 41, 679-701.

Frank, P. D. (1993, April). Video based assessment. In Computer and multimedia testing for new skills and abilities: Practical issues. Symposium conducted at the 8th Annual Conference of the Society for Industrial and Organizational Psychology, San Francisco, CA.

Gardner, E., Render, B., Ruth, S., & Ross, J. (1985). Human-oriented implementation cures "cyberphobia." Data Management, 24, 29-32.

Gilliland, S. W. (1993). The perceived fairness of selection systems: An organizational justice perspective. Academy of Management Review, 18, 694-734.

Gilliland, S. W. (1994). Effects of procedural and distributive justice on reactions to a selection system. Journal of Applied Psychology, 79, 691-701.

Gilliland, S. W., & Honig, H. (1994). Development of the selection fairness survey. Paper presented at the annual conference of the Society for Industrial and Organizational Psychology, Nashville, TN.

Gilroy, F. D., & Desai, H. B. (1986). Computer anxiety: Sex, race, and age. International Journal of Man-Machine Studies, 25, 711-719.

Gist, M. E., Schwoerer, C., & Rosen, B. (1989). Effects of alternative training methods on self-efficacy and performance in computer software training. Journal of Applied Psychology, 74, 884-891.

Greaud, V. A., & Green, B. F. (1986). Equivalence of conventional and computer presentation of speed tests. Applied Psychological Measurement, 10, 23-34.

Greenberg, J. (1982). Approaching equity and avoiding inequity in groups and organizations. In J. Greenberg & R. L. Cohen (Eds.), Research on negotiation in organizations (Vol. 1, pp. 25-41). Greenwich, CT: JAI Press.

Greenberg, J. (1990). Organizational justice: Yesterday, today, and tomorrow. Journal of Management, 16, 399-432.

Greist, J. H., Klein, M. H., & Van Cura, L. J. (1973). A computer interview for psychiatric patient target symptoms. Archives of General Psychiatry, 29, 247-253.
Hedl, J. J., O'Neil, H. H., & Hansen, D. N. (1973). Affective reactions toward computer-based intelligence testing. Journal of Consulting and Clinical Psychology, 40, 217-222.

Heinssen, R. K., Jr., Glass, C. R., & Knight, L. A. (1987). Assessing computer anxiety: Development and validation of the Computer Anxiety Rating Scale. Computers in Human Behavior, 3, 49-59.

Hembree, R. (1988). Correlates, causes, effects, and treatment of test anxiety. Review of Educational Research, 58, 47-77.

Henderson, R. D., Deane, F. P., & Ward, M. J. (1995). Occupational differences in computer-related anxiety: Implications for the implementation of a computerized patient management information system. Behaviour and Information Technology, 14, 23-31.

Henly, S. J., Klebe, K. J., McBride, J. R., & Cudeck, R. (1989). Adaptive and conventional versions of the DAT: The first complete test battery comparison. Applied Psychological Measurement, 13, 363-371.

Hesketh, B., & Neal, A. (1999). Technology and performance. In D. R. Ilgen & E. D. Pulakos (Eds.), The changing nature of performance: Implications for staffing, motivation, and development (pp. 21-55). San Francisco, CA: Jossey-Bass.

Hill, T., Smith, N. D., & Mann, M. F. (1987). Role of efficacy expectations in predicting the decision to use advanced technologies: The case of computers. Journal of Applied Psychology, 72, 307-313.

Hodapp, V., Glanzmann, P. G., & Laux, L. (1995). In C. D. Spielberger, P. R. Vagg, et al. (Eds.), Test anxiety: Theory, assessment, and treatment: Series in clinical and community psychology (pp. 47-58). Washington, DC: Taylor & Francis.

Hofer, P. J., & Green, B. F. (1985). The challenge of competence and creativity in computerized psychological testing. Journal of Consulting and Clinical Psychology, 53, 826-838.

Horvath, M., Ryan, A. M., & Stierwalt, S. L. (in press). Explanations for selection test use, outcome favorability, and self-efficacy: What influences test-taker perceptions? Organizational Behavior and Human Decision Processes.

Howard, G. S. (1986). Computer anxiety and the use of microcomputers in management. Ann Arbor, MI: UMI Research Press.

Howard, G. S., & Smith, R. D. (1986). Computer anxiety in management: Myth or reality. Communications of the ACM, 29, 611-615.

Hunter, J. E., & Hunter, R. F. (1984). Validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72-98.

Igbaria, M., & Chakrabarti, A. (1990). Computer anxiety and attitudes towards microcomputer use. Behaviour and Information Technology, 9, 229-241.

Jensen, A. R. (1980). Bias in mental testing. New York: Free Press.

Johnson, D. F., & Mihal, W. L. (1973). Performance of blacks and whites in computerized versus manual testing environments. American Psychologist, 28, 694-699.

Karsten, R., & Roth, R. M. (1998). The relationship of computer experience and computer self-efficacy to performance in introductory computer literacy courses. Journal of Research on Computing in Education, 31, 14-24.

Kerber, K. W. (1983). Attitudes towards specific uses of the computer: Quantitative, decision making, and record-keeping applications. Behaviour and Information Technology, 2, 197-209.

King, W. C., Jr., & Miles, E. W. (1995). A quasi-experimental assessment of the effect of computerizing noncognitive paper-and-pencil measurements: A test of measurement equivalence. Journal of Applied Psychology, 80, 643-651.

Kleinmuntz, B., & McLean, R. S. (1968). Computers in behavioral science: Diagnostic interviewing by digital computer. Behavioral Science, 13, 75-80.
Kluger, A. N., & Rothstein, H. R. (1993). The influence of selection test type on applicant reactions to employment testing. Journal of Business and Psychology, 8, 3-25.

Korsgaard, M. A., Sapienza, H. J., Turnley, W. H., & Diddams, M. (1996). The role of interactional justice when outcomes are unclear. Paper presented at the 11th annual conference of the Society for Industrial and Organizational Psychology, San Diego, CA.

Kravitz, D. A., Stinson, V., & Chavez, T. L. (1994, April). Perceived fairness of tests used in making selection and promotion decisions. In S. W. Gilliland (Chair), Selection from the applicants' perspective: Justice and employee selection procedures. Symposium conducted at the meeting of the Society for Industrial and Organizational Psychology, Nashville, TN.

Landis, R. S., Davison, H. K., & Maraist, C. C. (1998, May). The influence of test instructions on perceptions of test fairness: A comparison of paper-and-pencil, computer-administered, and computer adaptive formats. Poster presented at the 10th Annual Convention of the American Psychological Society, Washington, DC.

Lautenschlager, G. J., & Flaherty, V. L. (1990). Computer administration of questions: More desirable or more social desirability? Journal of Applied Psychology, 75, 310-314.

Leventhal, G. S. (1980). What should be done with equity theory? In K. J. Gergen, M. S. Greenberg, & R. H. Willis (Eds.), Social exchange: Advances in theory and research (pp. 27-55). New York: Plenum.

Levine, T., & Donitsa-Schmidt, S. (1997). Computer use, confidence, attitudes, and knowledge: A causal analysis. Computers in Human Behavior, 14, 125-146.

Liden, R. C., & Parsons, C. K. (1986). A field study of job applicant interview perceptions, alternative opportunities, and demographic characteristics. Personnel Psychology, 39, 109-122.

Loyd, B. H., & Gressard, C. (1984). The effects of sex, age, and computer experience on computer attitudes. AEDS Journal, 67-76.

Lucas, R. W. (1977). A study of patients' attitudes to computer interrogation. International Journal of Man-Machine Studies, 9, 69-86.

Lucas, R. W., Mullins, P. J., Luna, C. B., & McInroy, D. C. (1977). Psychiatrists and a computer as interrogators of patients with alcohol-related illnesses: A comparison. British Journal of Psychiatry, 131, 160-167.

Macan, T. H., Avedon, M. J., Paese, M., & Smith, D. E. (1994). The effects of applicants' reactions to cognitive ability tests and an assessment center. Personnel Psychology, 47, 715-738.

Mahar, D., Henderson, R., & Deane, F. (1997). The effects of computer anxiety, state anxiety, and computer experience on users' performance of computer based tasks. Personality and Individual Differences, 22, 683-692.

Martin, C. L., & Nagao, D. H. (1989). Some effects of computerized interviewing on job applicants. Journal of Applied Psychology, 74, 72-80.

Mazzeo, J., & Harvey, A. L. (1988). The equivalence of scores from automated and conventional educational and psychological tests (College Board Report No. 88-8; ETS RR No. 88-21).

McClelland, G. H., & Judd, C. M. (1993). Statistical difficulties of detecting interactions and moderator effects. Psychological Bulletin, 114, 376-390.

McFarland, L. A., Ryan, A. M., & Paul, K. B. (1998, April). Equivalence of an organizational attitude survey across administration modes. Paper presented at the 13th annual conference of the Society for Industrial and Organizational Psychology, Dallas, TX.
McHenry, J. J., & Schmitt, N. (1994). Multimedia testing. In M. G. Rumsey, C. B. Walker, & J. H. Harris (Eds.), Personnel selection and classification (pp. 193-232). Hillsdale, NJ: Erlbaum.

Mead, A. D., & Drasgow, F. (1993). Equivalence of computerized and paper-and-pencil cognitive ability tests: A meta-analysis. Psychological Bulletin, 114, 449-458.

Moreland, K. L. (1985). Computer-assisted psychological assessment in 1986: A practical guide. Computers in Human Behavior, 1, 221-233.

Murphy, K. R. (1986). When your top choice turns you down: Effect of rejected offers on the utility of selection tests. Psychological Bulletin, 99, 133-138.

Murphy, K. R., Thornton, G. C., & Prue, K. (1991). Influence of job characteristics on the acceptability of employee drug testing. Journal of Applied Psychology, 76, 447-453.

Murphy, K. R., Thornton, G. C., & Reynolds, D. H. (1990). College students' attitudes toward employee drug testing programs. Personnel Psychology, 43, 615-631.

O'Brien, T., & Dugdale, V. (1978). Questionnaire administration by computer. Journal of the Market Research Society, 20, 228-237.

Ogilvie, R. W., Trusk, T. C., & Blue, A. V. (1999). Students' attitudes towards computer testing in a basic science course. Medical Education, 33, 828-831.

Parasuraman, S., & Igbaria, M. (1990). An examination of gender differences in the determinants of computer anxiety and attitudes toward microcomputers among managers. International Journal of Man-Machine Studies, 32, 327-340.

Pintrich, P. R., & DeGroot, E. V. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82, 33-40.

Ployhart, R. E., & Ryan, A. M. (1997). Applicants' reactions to the fairness of selection procedures: The effects of positive rule violations and time of measurement. Journal of Applied Psychology, 83, 3-16.

Ployhart, R. E., & Ryan, A. M. (1998). Applicants' reactions to the fairness of selection procedures: The effects of positive rule violations and time of measurement. Journal of Applied Psychology, 83, 3-16.

Ployhart, R. E., Ryan, A. M., & Bennett, M. (1999). Explanations for selection decisions: Applicants' reactions to informational and sensitivity features of explanations. Journal of Applied Psychology, 84, 87-106.

Pope-Davis, D. B., & Twing, J. S. (1991). The effects of age, gender, and experience on measures of attitude regarding computers. Computers in Human Behavior, 7, 333-339.

Popovich, P. M., Hyde, K. R., Zakrajsek, T., & Blumer, C. (1987). The development of the attitudes toward computer usage scale. Educational and Psychological Measurement, 47, 261-269.

Potosky, D., & Bobko, P. (1998). The Computer Understanding and Experience Scale: A self-report measure of computer experience. Computers in Human Behavior, 14, 337-348.

Rosen, L. D., Sears, D. C., & Weil, M. M. (1993). Treating technophobia: A longitudinal evaluation of the computerphobia reduction program. Computers in Human Behavior, 9, 27-50.

Russell, D. (1982). The Causal Dimension Scale: A measure of how individuals perceive causes. Journal of Personality and Social Psychology, 42, 1137-1145.

Ryan, A. M. (in press). Explaining the Black/White test score gap: The role of test perceptions. Human Performance.

Ryan, A. M., Greguras, G. J., & Ployhart, R. E. (1996). Perceived job relatedness of physical ability testing for firefighters: Exploring variations in reactions. Human Performance, 9, 219-240.
Ryan, A. M., Ployhart, R. E., Greguras, G. J., & Schmit, M. J. (1998). Test preparation programs in selection contexts: Self-selection and program effectiveness. Personnel Psychology, 51, 599-621.

Rynes, S. L. (1993). Who's selecting whom? Effects of selection practices on applicant attitudes and behavior. In N. Schmitt & W. C. Borman (Eds.), Personnel selection in organizations (pp. 240-274). San Francisco: Jossey-Bass.

Rynes, S. L., & Barber, A. E. (1990). Applicant attraction strategies: An organizational perspective. Academy of Management Review, 15, 286-310.

Rynes, S. L., & Connerley, M. L. (1993). Applicant reactions to alternative selection procedures. Journal of Business and Psychology, 7, 261-277.

Sadri, G., & Robertson, I. T. (1993). Self-efficacy and work-related behavior: A review and meta-analysis. Applied Psychology: An International Review, 42, 139-152.

Sarason, I. G. (1978). The Test Anxiety Scale: Concept and research. In C. D. Spielberger & I. G. Sarason (Eds.), Stress and anxiety (Vol. 5). New York: Hemisphere/Wiley.

Sarason, I. G., & Ganzer, V. J. (1962). Anxiety, reinforcement, and experimental instructions in a free verbalization situation. Journal of Abnormal and Social Psychology, 65, 300-307.

Sarason, I. G., Pederson, A. M., & Nyman, B. (1968). Test anxiety and the observation of models. Journal of Personality, 36, 493-511.

Schmidt, F. L., Urry, V. W., & Gugel, J. F. (1978). Educational and Psychological Measurement, 38, 265-273.

Schmit, M. J., & Ryan, A. M. (1997). Applicant withdrawal: The role of test-taking attitudes and racial differences. Personnel Psychology, 50, 855-876.

Schmitt, N., Gilliland, S. W., Landis, R. S., & Devine, D. (1993). Computer-based testing applied to selection of secretarial applicants. Personnel Psychology, 46, 149-165.

Schmitt, N., & Pulakos, E. D. (1998). Biodata and differential prediction: Some reservations. In M. D. Hakel (Ed.), Beyond multiple choice: Evaluating alternatives to traditional testing for selection (pp. 167-182). Mahwah, NJ: Lawrence Erlbaum Associates.

Shashaani, L. (1997). Gender differences in computer attitudes and use among college students. Journal of Educational Computing Research, 16, 37-51.

Shermis, M. D., & Lombard, D. (1998). Effects of computer-based test administrations on test anxiety and performance. Computers in Human Behavior, 14, 111-123.

Shotland, A. B., & Alliger, G. M. (1999, April). The advantages of employing a face valid, multimedia selection device: Comparison of three measures. In F. Drasgow (Chair), Technology and assessment: Opportunities and challenges. Symposium conducted at the 14th Annual Conference of the Society for Industrial and Organizational Psychology, Atlanta, GA.

Shotland, A., Alliger, G. M., & Sales, T. (1998). Face validity in the context of personnel selection: A multimedia approach. International Journal of Selection and Assessment, 6, 124-130.

Skinner, H. A., & Allen, B. A. (1983). Does the computer make a difference? Computerized versus face-to-face versus self-report assessment of alcohol, drug, and tobacco use. Journal of Consulting and Clinical Psychology, 51, 267-275.

Skinner, H. A., & Pakula, A. (1986). Challenge of computers in psychological assessment. Professional Psychology: Research and Practice, 17, 44-50.

Slack, W. V., & Slack, C. W. (1977). Talking to a computer about emotional problems: A comparative study. Psychotherapy: Theory, Research, and Practice, 14, 156-164.

Slack, W. V., & Van Cura, L. J. (1968). Patient reaction to computer-based medical interviewing. Computers and Biomedical Research, 1, 527-531.

Smith, R. E. (1963). Examination by computer. Behavioral Science, 8, 76-79.
Smither, J. W., Reilly, R. R., Millsap, R. E., Pearlman, K., & Stoffey, R. W. (1993). Applicant reactions to selection procedures. Personnel Psychology, 46, 49-76.

Spacapan, S., & Oskamp, S. (1990). People's reactions to technology. In S. Oskamp & S. Spacapan (Eds.), People's reactions to technology: In factories, offices, and aerospace (pp. 9-29). London: Sage.

Space, L. G. (1981). The computer as psychometrician. Behavior Research Methods and Instrumentation, 13, 595-606.

Spielberger, C. D., & Vagg, P. R. (1995). Test anxiety: Theory, assessment, and treatment. Washington, DC: Taylor and Francis.

Stanton, J. A. (1999). Validity and related issues in web-based hiring. The Industrial-Organizational Psychologist, 36, 69-77.

Steiner, D. D., & Gilliland, S. W. (1996). Fairness reactions to personnel selection techniques in France and the United States. Journal of Applied Psychology, 81, 134-141.

Texas Instruments. (1999, September). [World-wide web site]. Available: http://www.ti.com/recruit/doca/recruithtm

Thibaut, J., & Walker, L. (1975). Procedural justice: A psychological analysis. Hillsdale, NJ: Erlbaum.

Thompson, J. A., & Wilson, S. L. (1982). Automated psychological testing. International Journal of Man-Machine Studies, 17, 279-289.

Thorsteinson, T. J., & Ryan, A. M. (1997). The effect of selection ratio on perceptions of the fairness of a selection test battery. International Journal of Selection and Assessment, 5, 159-168.

Tomkins, S. S., & Messick, S. (1963). Computer simulation of personality. New York: John Wiley.

Torkzadeh, G., & Angulo, I. E. (1992). The concept and correlates of computer anxiety. Behaviour and Information Technology, 11, 99-108.

Torkzadeh, G., & Koufteros, X. (1993). Computer user training and attitudes: A study of business undergraduates. Behaviour and Information Technology, 12, 284-292.

Tyler, T. R. (1984). The role of perceived injustice in defendants' evaluations of their courtroom experience. Law and Society Review, 18, 51-74.

Webster, J., & Martocchio, J. J. (1995). The differential effects of software training previews on training outcomes. Journal of Management, 21, 757-787.

Weiner, B. (1979). A theory of motivation for some classroom experiences. Journal of Educational Psychology, 71, 3-25.

Zoltan, E., & Chapanis, A. (1982). What do professional persons think about computers? Behaviour and Information Technology, 1, 55-68.

APPENDIX A
ADMINISTRATOR INSTRUCTIONS FOR PILOT STUDY

Pilot-Test Administrator Instructions

Hello, my name is _, and I will be your test administrator today. The purpose of this study is to examine a selection test for an organization. This study will last approximately three hours, for which you will receive credit points for your psychology class. I will go into more detail shortly, but I first need to have everyone fill out a consent form.

[Hand out consent form]

Now, this consent form states that you are participating in this study voluntarily and that all of your responses will remain confidential. You can leave this experiment any time you want for whatever reason, and you will receive 1 credit point for every 30 minutes you participate in this study. Please read it over and ask any questions if you have them.

[Collect consent forms]

Well, as I have briefly mentioned, the purpose of this study is to examine a selection test for an organization.
You will each take a [CONDITION-SPECIFIC TYPE OF TEST: 1/2 GET THE PAPER-AND-PENCIL FIRST, 1/2 GET THE COMPUTERIZED VERSION FIRST] in-basket examination, which is used to select applicants for the organization. The organization uses the examination to make sure that you have the necessary skills before you could be hired. After taking this test, you will then take another test. After completing the second test, we will ask some questions to gather your impressions of both tests.

You are about to begin work on a test that would indicate how well you would perform on the job for which the organization is hiring. Your test score would help determine whether your skills and abilities match those necessary on the job. Although parts of the test are timed, you will have the opportunity to view the Individual Test Instructions at your own pace. Please complete the tests independently and try to do your best. Do you have any questions?

You may now begin the Priority Management Exercise by going through the tutorial provided. If you have any questions, please raise your hand to indicate you need assistance. After you finish the tutorial, please wait quietly for everyone else to finish. DO NOT begin reviewing the policies until you are instructed to do so.

[WATCH THEM TO ENSURE THAT THEY DO NOT HIT THE "BEGIN REVIEW PERIOD" BUTTON OR BEGIN LOOKING AT THEIR POLICY BOOK!]

[After everyone is finished with the tutorial] You may now begin reviewing the policies.

[After the review time is over] You may now begin the Priority Management Exercise.

[After the 30 minutes is up] Please stop. We will now take a five-minute break. Please do not discuss the test with anyone else during the break.

[5-MINUTE BREAK]

We will now move to a different room to conduct the second half of the experiment.

[After everyone is situated at their new station]

As was mentioned earlier, the purpose of this study is to examine a selection test for an organization. Now, you will each take a different version of the same test that you recently completed. Specifically, you will each take a [CONDITION-SPECIFIC TYPE OF TEST: 1/2 GET THE PAPER-AND-PENCIL SECOND, 1/2 GET THE COMPUTERIZED VERSION SECOND] in-basket examination, which is used to select applicants for the organization. The organization uses the examination to make sure that you have the necessary skills before you could be hired. After completing the test, we will ask some questions to gather your impressions of the test. I will then give everyone a debriefing as to the purpose of the study.

You are about to begin work on a test that indicates how well you will perform on the job to which you are applying. Your test score will help determine whether your skills and abilities match those necessary on the job. Although parts of the test are timed, you will have the opportunity to view the Individual Test Instructions at your own pace. Please complete the tests independently and try to do your best. Do you have any questions?

You may now begin the Priority Management Exercise by going through the tutorial provided. If you have any questions, please raise your hand to indicate you need assistance. After you finish the tutorial, please wait quietly for everyone else to finish. DO NOT begin reviewing the policies until you are instructed to do so.

[WATCH THEM TO ENSURE THAT THEY DO NOT HIT THE "BEGIN REVIEW PERIOD" BUTTON OR BEGIN LOOKING AT THEIR POLICY BOOK!]

[After everyone is finished with the tutorial] You may now begin reviewing the policies.
[After the review time is over] You may now begin the Priority Management Exercise.

[After the 30 minutes is up] The next thing we would like you to do is gather your impressions of the selection test. In particular, were there any aspects of either test that you found to be unclear or confusing? Were the instructions clear? Were the task expectations clear? Were the tests equal in content and clarity?

[Write down participant comments]

The last thing I will do is give you a debriefing about what the study was about. If you have any questions while I am reading it, feel free to stop me.

[Read debriefing]

Lastly, I need to sign all of the forms so you get credit for the experiment.

APPENDIX B
INFORMED CONSENT FOR PILOT STUDY

INFORMED CONSENT FOR PILOT STUDY
PERCEPTIONS OF APPLICATION PROCEDURES

The purpose of this study is to examine a selection test for an organization. Your participation in this study requires 3 hours of your time. You will receive 6 Psychology subject pool credits for this 3-hour time commitment. There are no risks or discomforts associated with this procedure.

Participation in this study is completely voluntary. You are free to discontinue the study at any time for any reason without penalty. Simply inform the investigator if you wish to withdraw. Your responses will be completely confidential. The information provided below will be used to identify you should you win an award. Your identity will be kept confidential, and it will not be associated with your responses for any data analyses.

You are free to ask any questions you might have about this study at any time. At the end of your involvement, you will be provided with feedback explaining the purpose of this research in more detail. You may ask about the results of the study by contacting one of the investigators. If you have any additional questions or concerns, please feel free to contact us at 432-7752 (Ann Marie Ryan or Darin Wiechmann).

You have been fully informed of the above described procedure and its possible risks and benefits. You give permission for participation in this study. You know that the investigator and his/her associates will be available to answer any questions you may have. If at any time you feel your questions have not been answered, you may speak with the Head of the Department (Gordon Wood, 355-9563) or the Committee on Research Involving Human Subjects (355-2180). You understand that you may decline to answer any item and are free to withdraw this consent and discontinue participation in this project at any time without penalty. You are also aware that within one year of your participation, a copy of this Informed Consent form will be provided upon request.

Date    Print Name    Signature

APPENDIX C
DEBRIEFING FOR PILOT STUDY

Pilot Study of Perceptions of Application Procedures

Now, I am going to tell you about the experiment you participated in. The study in which you just participated was designed to determine the equivalence of two versions of the same test. In addition to using your ratings on the various dimensions, we will also use your comments and suggestions to make the tests as equivalent as possible. After completing the pilot test, we will be running an experiment designed to assess participants' reactions to these tests. Reactions to these tests are important as organizations who may be using them are faced with the consequences of applicants' reactions (e.g., legal, social, economic issues).
You will receive credit points for your time and effort in this experiment, and all of your answers will remain confidential. One final request is that you please do not talk about this experiment with other people who may want to participate. You can mention generally what you had to do (i.e., take a test, fill out questionnaires), but please do not tell them what we were studying. If they knew that, it would change how they act, and the experiment would be a waste of their time and ours. Thanks.

APPENDIX D
TECHNOLOGY SURVEY

TECHNOLOGY SURVEY

Before filling out this survey, please fill in the following demographic information: Please fill in your name, instructor's name, and course information in the box on the left side of the opscan sheet. Please write your PID number under the "PID" heading on the top right side of the sheet and fill in the appropriate circles. Please write your age under the "Section" heading with a "zero" before your age (i.e., 019) and fill in the appropriate circles. Please fill in the appropriate circle under the "Sex" section below the PID section. Please return to your instructor when completed.

TECHNOLOGY SURVEY

For each of the following 20 jobs, please evaluate the level of technology that you perceive the job involves. Specifically, technology can be defined as "new types of hardware (tools, equipment, and so on) and software (such as the Internet) which are developed." For each of the following questions, fill in the appropriate circle on the opscan sheet using the following scale:

1 = Strongly Disagree
2 = Disagree
3 = Neither Agree nor Disagree
4 = Agree
5 = Strongly Agree

Now, please read each of the following questions and fill in the appropriate circle on the opscan sheet that best describes how you feel. IMPORTANT: Make sure that the number of the statement matches the number on the opscan sheet.

• Insurance account representative
1. The current job requires a lot of technology.
2. The current job could not be performed without technology.
3. The current job only requires basic, non-technical skills.
4. I would expect to work with a lot of technology on this job.
5. Working with technology is an important part of this job.
6. This job does not involve working with technology.

• Electrician
7. The current job requires a lot of technology.
8. The current job could not be performed without technology.
9. The current job only requires basic, non-technical skills.
10. I would expect to work with a lot of technology on this job.
11. Working with technology is an important part of this job.
12. This job does not involve working with technology.

• Legal researcher/paralegal
13. The current job requires a lot of technology.
14. The current job could not be performed without technology.
15. The current job only requires basic, non-technical skills.
16. I would expect to work with a lot of technology on this job.
17. Working with technology is an important part of this job.
18. This job does not involve working with technology.

• Accountant
19. The current job requires a lot of technology.
20. The current job could not be performed without technology.
21. The current job only requires basic, non-technical skills.
22. I would expect to work with a lot of technology on this job.
23. Working with technology is an important part of this job.
24. This job does not involve working with technology.
Financial analyst The current job requires a lot of technology. The current job could not be performed without technology. The current job only requires basic, non-technical skills. I would expect to work with a lot of technology on this job. Working with technology is an important part of this job. This job does at); involve working with technology. Programmer The current job requires a lot of technology. The current job could not be performed without technology. The current job only requires basic, non-technical skills. I would expect to work with a lot of technology on this job. Working with technology is an important part of this job. This job does M involve working with technology. Executive secretary The current job requires a lot of technology. The current job could not be performed without technology. The current job only requires basic, non-technical skills. I would expect to work with a lot of technology on this job. Working with technology is an important part of this job. This job does n_o_t involve working with technology. Customer service representative The current job requires a lot of technology. The current job could not be performed without technology. The current job only requires basic, non-technical skills. I would expect to work with a lot of technology on this job. Working with technology is an important part of this job. This job does pa; involve working with technology. 119 . [Ta-m ‘fimfim 15'nb iii—.44- 49. 50. 51. 52. 53. 54. 55. 56. 57. 58. 59. 60. 61. 62. 63. 65. 66. 67. 68. 69. 70. 71. 72. 73. 74. 75. 76. 77. 78. Receptionist The current job requires a lot of technology. The current job could not be performed without technology. The current job only requires basic, non-technical skills. I would expect to work with a lot of technology on this job. Working with technology is an important part of this job. This job does mi involve working with technology. Records clerk The current job requires a lot of technology. The current job could not be performed without technology. The current job only requires basic, non-technical skills. I would expect to work with a lot of technology on this job. Working with technology is an important part of this job. This job does pat involve working with technology. Telecommunications technician The current job requires a lot of technology. The current job could not be performed without technology. The current job only requires basic, non-technical skills. I would expect to work with a lot of technology on this job. Working with technology is an important part of this job. This job does pg involve working with technology. Insurance fraud investigator The current job requires a lot of technology. The current job could not be performed without technology. The current job only requires basic, non-technical skills. I would expect to work with a lot of technology on this job. Working with technology is an important part of this job. This job does pp; involve working with technology. Plant/building mechanic The current job requires a lot of technology. The current job could not be performed without technology. The current job only requires basic, non-technical skills. I would expect to work with a lot of technology on this job. Working with technology is an important part of this job. This job does pp; involve working with technology. 120 79. 80. 81. 82. 83. 84. 85. 86. 87. 88. 89. 90. 91. 92. 93. 94. 95. 96. 97. 98. 99. Auditor The current job requires a lot of technology. The current job could not be performed without technology. 
The current job only requires basic, non-technical skills. I would expect to work with a lot of technology on this job. Working with technology is an important part of this job. This job does _nat involve working with technology. Business Analyst The current job requires a lot of technology. The current job could not be performed without technology. The current job only requires basic, non-technical skills. I would expect to work with a lot of technology on this job. Working with technology is an important part of this job. This job does ao_t involve working with technology. Sales support specialist The current job requires a lot of technology. The current job could not be performed without technology. The current job only requires basic, non-technical skills. I would expect to work with a lot of technology on this job. Working with technology is an important part of this job. This job does _rgt involve working with technology. Administrative assistant The current job requires a lot of technology. The current job could not be performed without technology. The current job only requires basic, non-technical skills. 100. I would expect to work with a lot of technology on this job. 101. Working with technology is an important part of this job. 102. This job does n_ot involve working with technology. Human resources representative 103. The current job requires a lot of technology. 104. The current job could not be performed without technology. 105. The current job only requires basic, non-technical skills. 106. I would expect to work with a lot of technology on this job. 107. Working with technology is an important part of this job. 108. This job does pat involve working with technology. 121 0 Insurance claims processor 109. The current job requires a lot of technology. 110. The current job could not be performed without technology. 111. The current job only requires basic, non-technical skills. 112. I would expect to work with a lot of technology on this job. 113. Working with technology is an important part of this job. 114. This job does n_o_t involve working with technology. 0 Instructional designer/training consultant 115. The current job requires a lot of technology. 116. The current job could not be performed without technology. 117. The current job only requires basic, non-technical skills. 118. I would expect to work with a lot of technology on this job. 119. Working with technology is an important part of this job. 120. This job does p_o_t involve working with technology. ”Finally, 121. Are you 1 = Afiican-American 2 = Asian 3 = Hispanic 4 = White 5 = Other 122 APPENDIX E TEST ADMINISTRATOR INSTRUCTIONS FOR MAIN STUDY Test Administrator Instructions Hello, my name is _, and I will be your test administrator today. The purpose of this study is to examine application procedures for an organization. This study is conducted in two parts; the first part is today and will last approximately 2 hours and the second part will take place one week later and will last approximately one half hour. For this time you will receive credit points for your psychology class. In addition, there will be an opportunity for everyone to receive a cash award based on their participation in both parts of the study. I go into more detail shortly, but I first need to have everyone fill out a consent form. [Hand out consent form] Now, this consent form states that you are participating in this study voluntarily and that all of your responses will remain confidential. 
You can leave this experiment any time you want for whatever reason, and you will receive 1 credit point for every 30 minutes you participate in this study. Please note that to receive the credit points, you must be able to meet again next week at the same time. That is, if you do not return for the second session, you will receive ZERO credit points. Please read it over and ask any questions if you have them. [Collect consent forms] 123

Well, as I briefly mentioned, the purpose of this study is to examine application procedures for an organization. The job that the organization would be hiring for is a (business analyst / customer service representative).

Business analysts are responsible for evaluating the organization's business needs. They assist with design, development, programming, and implementation of various software and hardware applications. They respond to and prioritize requests from employees for potential improvements to the existing computer systems. They work with computer users to understand and articulate their needs. They report to their immediate manager and conduct scheduled reporting on problems.

Customer service representatives are responsible for processing and answering all customer requests and resolving any issues related to client records. They respond to and prioritize the requests from customers for efficient and timely handling of their problems. This includes contacting customers and tracking down pertinent information to complete the request. They report to their immediate manager and conduct scheduled reporting on problems.

For the position of (business analyst / customer service representative), the organization would want to make sure that everyone has the ability to do the job. Thus, before an applicant can be hired for the (business analyst / customer service representative) position, they would have to pass a selection test to make sure that they have the necessary skills. In this study, you will be taking the same exercise in order for us to better understand the selection process. 124

The test you will be taking is a (computerized / paper-and-pencil) multi-tasking in-basket exercise. An in-basket exercise is designed to replicate administrative tasks of the job under consideration. Your task will be to review requests for marketing services, prioritize the requests, and then route them to the appropriate individuals for response to the requests. You will have 30 minutes to process as many messages as you can in your in-box as well as new messages that you will receive.

After taking the test, I will score the results of your (computerized / paper-and-pencil) in-basket exercise. If your score indicates that you have the level of skills necessary for the (business analyst / customer service representative) position and would be hired for the (business analyst / customer service representative) position, you will receive a $15 cash award.

So, there are two parts to this study. What we will do today is take the selection test and have you fill out some questionnaires. The second part will take place one week later and will last approximately one-half hour. At the second session you will find out if you would have been hired or not hired for the (business analyst / customer service representative) position. NOTE: instead of meeting here next week, we will meet at _____. I will now pass around a reminder note with the date, time, and location of our meeting place next week. Are there any questions about what is involved with this study?
Before you take the test, you will first fill out a questionnaire. Please fill it out completely and honestly. I am going to hand out the questionnaire and opscan sheet now, but PLEASE DO NOT START UNTIL I SAY TO DO SO. 125

[Hand out questionnaires and opscan sheets now. Make sure people do not start prematurely]

While filling out the questionnaire, make sure that the number of the question corresponds to the number on the opscan sheet. When you are finished, please make sure you have filled in the correct number of questions on the opscan sheet. When you are done, please turn it over and wait quietly for everyone else to finish. If you have any questions, just raise your hand and I will be by to help you. Are there any questions? OK, begin.

[When they are finished, pick up the materials]

You are about to begin work on a (computerized / paper-and-pencil) in-basket exercise used to select applicants for a (business analyst / customer service representative) position. Your test score will help determine whether your skills and abilities match those necessary on the job. Although parts of the test are timed, you will have the opportunity to view the Individual Test Instructions at your own pace. Please complete the tests independently and try to do your best. Again, cash awards will be given to those individuals whose test scores were high enough that they would have been selected for the (business analyst / customer service representative) position. Do you have any questions?

You may now begin the Priority Management Exercise by going through the tutorial provided. If you have any questions, please raise your hand to indicate you need assistance. After you finish the tutorial, please wait quietly for everyone else to finish. DO NOT begin reviewing the policies until you are instructed to do so. 126

[WATCH THEM TO ENSURE THAT THEY DO NOT (HIT THE "BEGIN REVIEW PERIOD" BUTTON / BEGIN LOOKING AT THEIR POLICY BOOK) BEFORE THE REVIEW PERIOD!]

[After everyone is finished with the tutorial] You will now have 10 minutes to review the policies. You will have the policies during the testing session, but reviewing them may better prepare you for the Priority Management Exercise.

[After the review time is over] You will now have 30 minutes to work on the Priority Management Exercise. After the 30 minutes are over, you will come to a screen that indicates a second session. When you see this screen, please do not touch anything and raise your hand to signal the test administrator. After completing the test, please stop and wait quietly for the administrator's instructions while everyone else finishes.

[After everyone has completed the PME] The last thing you will do today is complete another questionnaire. While filling out the questionnaire, make sure that the number of the question corresponds to the number on the opscan sheet. When you are finished, please make sure you have filled in the correct number of questions on the opscan sheet. After filling out the questionnaire, you may quietly leave. Again, we will meet at the same time on _____ in Room _____. 127

SESSION TWO

[Everyone's test will be corrected and their selection decision packet will be ready to go. In order to pass out their packets, use the subject sheet to match the subject number to their name and PID.]

First, let me thank everyone for returning and taking the time to come here today.
As you recall, last week you took an in-basket exercise, which is used to select applicants for a (business analyst / customer service representative) position.

Again, business analysts are responsible for evaluating the organization's business needs. They assist with design, development, programming, and implementation of various software and hardware applications. They respond to and prioritize requests from employees for potential improvements to the existing computer systems. They work with computer users to understand and articulate their needs. They report to their immediate manager and conduct scheduled reporting on problems.

Customer service representatives are responsible for processing and answering all customer requests and resolving any issues related to client records. They respond to and prioritize the requests from customers for efficient and timely handling of their problems. This includes contacting customers and tracking down pertinent information to complete the request. They report to their immediate manager and conduct scheduled reporting on problems.

For the position of (business analyst / customer service representative), the organization would want to make sure that everyone has the ability to do the job. Thus, 128 before an applicant can be hired for the (business analyst / customer service representative) position, they would have to pass a selection test to make sure that they have the necessary skills. Your score on the exercise would have been used to determine if you would be hired or not hired for the (business analyst / customer service representative) position.

I have corrected all your exercises and determined who would have been hired or not hired based on those scores. Now I will pass out everyone's decision. Along with the selection decision is a questionnaire. Please read the selection decision, which is written on the first page of your packet. NOTE: there are no names printed on the selection decision you will receive. This is only to conserve paper, but it is important that you realize that whatever the selection decision says is your selection decision.

After reading the selection decision, please go ahead and fill out the attached questionnaire using the opscan sheet. It is very important that while filling out the questionnaire, you make sure that the number of the question you are reading corresponds to the number you are filling out on the opscan sheet. Feel free to ask me any questions you might have about the questionnaire. When you are done, please wait quietly for everyone else to finish.

[When everyone is finished] All right, could everyone hand in the questionnaire and opscan sheet now? I will now give you a debriefing about what the study was about. If you have any questions while I am reading it, feel free to stop me. [After reading the debriefing] 129

Lastly, I need to sign all of your forms so you get credit for the experiment. For those of you who would have been selected for the (business analyst / customer service representative) position, I also need to have you sign a form so you can receive your $15 cash award. 130

APPENDIX F
INFORMED CONSENT FOR MAIN STUDY

INFORMED CONSENT FOR PERCEPTIONS OF APPLICATION PROCEDURES

The purpose of this study is to examine application procedures for an organization. The study has two parts: 1) you will complete measures typically used in selection processes, and 2) you will come back one week later to learn of the selection decision and complete a final questionnaire.
Your participation in this study requires 2 1/2 hours of your time. This study consists of two sessions, which are separated by one week. The first session will take approximately 2 hours and the second session will take approximately 1/2 hour. You will receive 5 Psychology subject pool credits for this 2 1/2-hour time commitment. There are no risks or discomforts associated with this procedure.

Cash awards of $15 are available for those participants who would have been selected for the position described. Winners will be selected based on their performance on the selection test given during the study. Awards will be given after you are informed of your selection decision during the second session of the experiment.

Participation in this study is completely voluntary. You are free to discontinue the study at any time for any reason without penalty. Simply inform the investigator if you wish to withdraw. Your responses will be completely confidential. The information provided below will be used to identify you should you win an award. Your identity will be kept confidential, and it will not be associated with your responses for any data analyses.

You are free to ask any questions you might have about this study at any time. At the end of your involvement, you will be provided with feedback explaining the purpose of this research in more detail. You may ask about the results of the study by contacting one of the investigators. If you have any additional questions or concerns, please feel free to contact us at 432-7752 (Ann Marie Ryan or Darin Wiechmann).

I understand the procedures and agree to participate in this study.

Date Print Name Signature PID 131

APPENDIX G
PRE-TEST QUESTIONNAIRE

PRE-TEST QUESTIONNAIRE

As part of our efforts to improve this employment selection test, we are interested in your reactions to the selection process. We are providing you with a description of the test so you can tell us how well you think you will do on this type of test. After you have read the test description, please answer the reaction questions below.

Priority Management Exercise (PME)

The PME is a multi-tasking in-basket exercise. An in-basket exercise is designed to replicate administrative tasks of the job under consideration. Your task will be to review requests for marketing services, prioritize the requests, and then route them to the appropriate individuals for response to the requests. You will have 30 minutes to process as many messages as you can in your in-box as well as new messages that you will receive. Now that you have a description of the test, you should have an idea of the type of test you will take in a few minutes.

For each of the following questions, fill in the appropriate circle on the opscan sheet using the following scale: 1 = Strongly Disagree 2 = Disagree 3 = Neither Agree nor Disagree 4 = Agree 5 = Strongly Agree

Now, please read each of the following questions and fill in the appropriate circle on the opscan sheet that best describes how you feel. IMPORTANT — Make sure that the number of the statement matches the number on the opscan sheet.

1. The job for which I am taking this test requires a lot of technology.
2. The job for which I am taking this test could not be performed without technology.
3. The job for which I am taking this test only requires basic, non-technical skills.
4. I would expect to work with a lot of technology on this job.
5. Working with technology is an important part of this job.
6. This job does not involve working with technology.
PLEASE GO ON TO THE NEXT PAGE 132 For each of the following questions, fill in the appropriate circle on the opscan sheet using the following scale: 1 = Strongly Disagree 2 = Disagree 3 = Neither Agree nor Disagree 4 = Agree 5 = Strongly Agree Doing well on this test is important to me. 8. I want to do well on this test. 10. 11. 12. 13. 14. 15. 16. 17. 18. 19. 20. 21. 22. 23. 24. 25. 26. 27. I will try my best on this test. I will try to do the very best I can on this test. While taking the test, I will try to concentrate and try to do well. I want to be among the top scorers on this test. I push myself to work hard on tests. I am extremely motivated to do well on this test. I just @afi care how I do on this test. I will mat put much effort into this test. I fiequently read computer magazines or other sources of information that describe new computer technology. I know how to recover deleted or “lost data” on a computer or PC. I know what a LAN is. I know what an operating system is. I know how to write computer programs. I know how to install software on a personal computer. I know what e-mail is. I know what a database is. I am computer literate. I regularly use a PC for word processing. 1 often use a mainframe computer system. PLEASE GO ON TO THE NEXT PAGE 133 For each of the following questions, fill in the appropriate circle on the opscan sheet using the following scale: 1 = Strongly Disagree 2 = Disagree 3 = Neither Agree nor Disagree 4 = Agree 5 = Strongly Agree 28. I am good at using computers. 29. I have taken tests similar to this test. 30. I am familiar with the types of questions asked in this test. 31. I am familiar with this type of test. 32. I have not seen this type of test before. --:7 Jun-Lina" 33. I find using the computer easy. 34. It would be hard for me to learn to use a computer. 35. I learn new computer programs easily. 36. I hope I never have a job which requires me to use a computer. 37. I get confused with all the different keys and computer commands. 38. I feel uneasy when people talk about computers. 39. I feel comfortable working with computers. 40. I get anxious each time I need to learn something new about computers. 41. I believe I will have _na problems on this test. 42. I think I will do very well on this test. 43. Compared with other applicants taking this test, I expect to do well. 44. I am confident that I will receive a high score on this test. 45. I’m confident I can solve the problems presented in this test. 46. I am confident that I could learn computer skills. 47. I am sure of my ability to learn a computer programming language. 48. I will be able to keep up with important technological advances in computers. 49. I feel apprehensive about using a computer terminal. PLEASE GO ON TO THE NEXT PAGE 134 For each of the following questions, fill in the appropriate circle on the opscan sheet using the following scale: 1 = Strongly Disagree 2 = Disagree 3 = Neither Agree nor Disagree 4 = Agree 5 = Strongly Agree 50. If given the opportunity to use a computer, I am afiaid that I might damage it in some way. 51. I have avoided computers because they are unfamiliar to me. 52. 53. 54. 55. 56. 57. 58. 59. 60. 61. 62. 63. 64. 65. 66. 67. I hesitate to use a computer for fear of making mistakes that I cannot correct. I am sure of my ability to interpret a computer printout. I have difficulty understanding most technical matters. Computer terminology sounds like confusing jargon to me. I don’t like to waste my time daydreaming. 
Once I find the right way to do something, I stick to it. I am intrigued by the patterns I find in art and nature. I believe letting students hear controversial speakers can only confuse and mislead them. Poetry has little or no effect on me. I often try new and foreign foods. I seldom notice the moods or feelings that different environments produce. I believe we should look to our religious authorities for decisions on moral issues. Sometimes when I am reading poetry or looking at a work of art, I feel a chill or wave of excitement. I have little interest in speculating on the nature of the universe of the human condition. I have a lot of intellectual curiosity. I often enjoy playing with theories or abstract ideas. PLEASE GO ON TO THE NEXT PAGE 135 For each of the following questions, fill in the appropriate circle on the opscan sheet using the following scale: 68. 69. 70. 71 73. 74. 75. 76. 77. 78. 79. 80. 81. 82. 83. 1 = False 2 = True While taking an important examination, I perspire a great deal. I get to feel very panicky when I have to take a surprise exam. During tests, I find myself thinking of the consequences of failing. . After important tests I am frequently so tense that my stomach gets upset. 72. While taking an important exam I find myself thinking of how much brighter the other students are than I am. 1 freeze up on things like intelligence tests and final exams. If I were to take an intelligence test I would worry a great deal before taking it. During course examinations, I find myself thinking of things unrelated to the actual course material. During a course examination, I frequently get so nervous that I forget facts I really know. If I knew I was going to take an intelligence test, I would feel confident and relaxed beforehand. I usually get depressed after taking a test. I have an uneasy, upset feeling before taking a final examination. When taking a test, my emotional feelings do not interfere with my performance. Getting a good grade on one test doesn’t seem to increase my confidence on the second. After taking a test I always feel I could have done better than I actually did. - I sometimes feel my heart beating very fast during important tests. PLEASE GO ON TO THE NEXT PAGE 136 84. Are you 1 = Male 2 = Female 85. Are you 1 = Afiican-American 2 = Asian 3 = Hispanic 4 = White 5 = Other 86. What is your age? “Please fill in this answer on the third column to the right of the PID section. Please fill out the boxes at the top with your age and fill in the corresponding circles below. If you have any questions, please raise your hand. 137 APPENDIX H POST-TEST QUESIONNAIRE POST-TEST QUESTIONNAIRE Please respond to the following questions as carefully and truthfully as possible. For each of the following questions, fill in the appropriate circle on the opscan sheet using the following scale: 1 = Strongly Disagree 2 = Disagree 3 = Neither Agree nor Disagree 4 = Agree 5 = Strongly Agree Now, please read each of the following questions and fill in the appropriate circle on the opscan sheet that best describes how you feel. IMPORTANT -— Make sure that the number of the statement matches the number on the opscan sheet. 1. Whether or not I would get the job, I feel the selection m was fair. 2. Whether or not I would get the job, the procedures used to select people for this job are fair. Whether or not I would get the job, I am satisfied with the selection was, Overall, I feel dissatisfied with the lay people would be selected for the job. 
5. I did not understand what the test had to do with the job.
6. I could not see any relationship between the test and what is required on the job.
7. It would be obvious to anyone that the test is related to the job.
8. The actual content of the test was clearly related to the job.
9. There was no real connection between the test that I went through and the job.
10. Failing to pass the test clearly indicates that you can't do the job.
11. I am confident that the test can predict how well an applicant will perform on the job.
12. My performance on the test was a good indicator of my ability to do the job.
13. Applicants who perform well on this type of test are more likely to perform well on the job than applicants who perform poorly.

PLEASE GO ON TO THE NEXT PAGE 138

For each of the following questions, fill in the appropriate circle on the opscan sheet using the following scale: 1 = Strongly Disagree 2 = Disagree 3 = Neither Agree nor Disagree 4 = Agree 5 = Strongly Agree

14. The employer can tell a lot about the applicant's ability to do the job from the results of the test.
15. The selection process was standardized and systematically administered.
16. As far as I know, the selection tests were administered the same way to all applicants.
17. Some people were treated differently than I was.
18. As far as I know, everyone received the same treatment during the selection process.
19. I felt the test was impersonal.
20. I felt like just another number while taking the test.
21. I did not care for my interaction with the test.
22. The test administrator cared how I felt during testing.
23. I felt I was respected during the testing.
24. I felt I was treated coldly during testing.
25. I thought that the test was difficult.
26. I thought this test was easy.
27. This test was not difficult.
28. This test was too easy for me.
29. I found this test too simple.
30. I liked taking this type of test.
31. I prefer this type of test to others I have taken.
32. I would prefer taking a different type of test.
33. Compared to other tests I have taken, I didn't like this test.
34. I did well on the test.

PLEASE GO ON TO THE NEXT PAGE 139

For each of the following questions, fill in the appropriate circle on the opscan sheet using the following scale: 1 = Strongly Disagree 2 = Disagree 3 = Neither Agree nor Disagree 4 = Agree 5 = Strongly Agree

35. My performance on the test was not as good as it could have been.
36. I could have done a lot better on the test under different circumstances.
37. I am satisfied with my performance on the test.
38. I did poorly on the test. 140

APPENDIX I
POST-DECISION QUESTIONNAIRE — SELECT CONDITION 141

POST-DECISION QUESTIONNAIRE

Dear _____,

You scored high enough to be offered a position as a (business analyst / customer service representative) with this organization. Therefore, you will receive the $15 award.

Human Resource Manager

Please respond to the following questions as carefully and truthfully as possible. For each of the following questions, fill in the appropriate circle on the opscan sheet using the following scale: 1 = Strongly Disagree 2 = Disagree 3 = Neither Agree nor Disagree 4 = Agree 5 = Strongly Agree

Now, please read each of the following questions and fill in the appropriate circle on the opscan sheet that best describes how you feel. IMPORTANT — Make sure that the number of the statement matches the number on the opscan sheet.

1. Whether or not I got the job, I feel the selection process was fair.
Whether or not I got the job, the procedures used to select people for this job are fair. Whether or not I got the job, I am satisfied with the selection pm. Overall, I feel dissatisfied with the fly people were selected for the job. Overall, I feel the results of the selection process were Lnfair. I feel the hiring decision (accept/reject) was fair. Overall, I am satisfied with the hiring decision. I am dissatisfied with the test administrator's decision about whether or not to hire me. I did not understand what the test had to do with the job. PLEASE GO ON TO THE NEXT PAGE (page 1 of 5) 142 For each of the following questions, fill in the appropriate circle on the opscan sheet using the following scale: 1 = Strongly Disagree 2 = Disagree 3 = Neither Agree nor Disagree 4 = Agree 5 = Strongly Agree 10. I could not see any relationship between the test and what is required on the job. 11. It would be obvious to anyone that the test is related to the job. 12. The actual content of the test was clearly related to the job. 13. There was no real connection between the test that I went through and the job. 14. Failing to pass the test clearly indicates that you can’t do the job. 15. I am confident that the test can predict how well an applicant will perform on the job. 16. My performance on the test was a good indicator of my ability to do the job. 17. Applicants who perform well on this type of test are more likely to perform well on the job than applicants who perform poorly. 18. The employer can tell a lot about the applicant’s ability to do the job from the results of the test. 19. The selection process was standardized and systematically administered. 20. As far as I know, the selection test was administered the same way to all applicants. 21. Some people were treated differently than I was. 22. As far as I know, everyone received the same treatment during the selection process. 23. I felt the test was impersonal. 24. I felt like just another number while taking the test. 25. I did po_t care for my interaction with the test. 26. The test administrator cared how I felt during testing. 27. I felt I was respected during the testing. 28. I felt I was treated coldly during testing. 29. I thought that the test was difficult. 30. This test was harder than other tests I have taken. 31. I thought this test was easy. 32. This test was M difficult. 143 PLEASE GO ON TO THE NEXT PAGE (page 2 of 5) For each of the following questions, fill in the appropriate circle on the opscan sheet using the following scale: 33. 34. 35. 36. 37. 38. 39. 4o. 41. 42. 43. 44. 45. 4o. 47. 48. 49. so. 51 1 = Strongly Disagree 2 = Disagree 3 = Neither Agree nor Disagree 4 = Agree 5 = Strongly Agree I liked taking this type of test. I prefer this type of test than others I have taken. I would prefer taking a different type of test. Compared to other tests I have taken, I didn’t like this test. My performance was due to the type of test used. My performance was due to the type of procedure used. My performance was due to the way the test was presented. My performance was due to my capabilities. My performance was due to my abilities. My performance was due to how hard I tried. I would recommend this experiment to my fiiends. I would tell my fiiends to participate in this experiment. I would tell others this is a good experiment in which to participate. I think other people should know this is a good experiment. I would n_ot use this organization's products or services. 
I would like to use this organization's products or services. I am interested in using this organization's products or services. I would accept the job. I would be willing to take this job.

PLEASE GO ON TO THE NEXT PAGE (page 3 of 5) 144

[Pages 145-146 (pages 4 and 5 of 5 of the select-condition packet) are illegible in the scanned copy; based on Appendix L, they presented the Causal Dimension Scale items, each rated on a 9-point bipolar scale.]

APPENDIX J
POST-DECISION QUESTIONNAIRE — REJECT CONDITION 147

POST-DECISION QUESTIONNAIRE

Dear _____,

Sorry, you did not score high enough to be offered a position as a customer service representative with this organization. Therefore, you will not receive the $15 award.

Human Resource Manager

Please respond to the following questions as carefully and truthfully as possible. For each of the following questions, fill in the appropriate circle on the opscan sheet using the following scale: 1 = Strongly Disagree 2 = Disagree 3 = Neither Agree nor Disagree 4 = Agree 5 = Strongly Agree

Now, please read each of the following questions and fill in the appropriate circle on the opscan sheet that best describes how you feel. IMPORTANT — Make sure that the number of the statement matches the number on the opscan sheet.

1. Whether or not I got the job, I feel the selection process was fair.
2. Whether or not I got the job, the procedures used to select people for this job are fair.
3. Whether or not I got the job, I am satisfied with the selection process.
4. Overall, I feel dissatisfied with the way people were selected for the job.
5. Overall, I feel the results of the selection process were unfair.
6. I feel the hiring decision (accept/reject) was fair.
7. Overall, I am satisfied with the hiring decision.
8. I am dissatisfied with the test administrator's decision about whether or not to hire me.
9. I did not understand what the test had to do with the job.

PLEASE GO ON TO THE NEXT PAGE (page 1 of 5) 148

For each of the following questions, fill in the appropriate circle on the opscan sheet using the following scale: 1 = Strongly Disagree 2 = Disagree 3 = Neither Agree nor Disagree 4 = Agree 5 = Strongly Agree

10. I could not see any relationship between the test and what is required on the job.
11. It would be obvious to anyone that the test is related to the job.
12. The actual content of the test was clearly related to the job.
13. There was no real connection between the test that I went through and the job.
14. Failing to pass the test clearly indicates that you can't do the job.
15. I am confident that the test can predict how well an applicant will perform on the job.
16. My performance on the test was a good indicator of my ability to do the job.
17. Applicants who perform well on this type of test are more likely to perform well on the job than applicants who perform poorly.
18. The employer can tell a lot about the applicant's ability to do the job from the results of the test.
19. The selection process was standardized and systematically administered.
20. As far as I know, the selection test was administered the same way to all applicants.
21. Some people were treated differently than I was.
22. As far as I know, everyone received the same treatment during the selection process.
23. I felt the test was impersonal.
24. I felt like just another number while taking the test.
25. I did not care for my interaction with the test.
26. The test administrator cared how I felt during testing.
27. I felt I was respected during the testing.
28. I felt I was treated coldly during testing.
29. I thought that the test was difficult.
30. This test was harder than other tests I have taken.
31. I thought this test was easy.
32. This test was not difficult. 149

PLEASE GO ON TO THE NEXT PAGE (page 2 of 5)

For each of the following questions, fill in the appropriate circle on the opscan sheet using the following scale: 1 = Strongly Disagree 2 = Disagree 3 = Neither Agree nor Disagree 4 = Agree 5 = Strongly Agree

33. I liked taking this type of test.
34. I prefer this type of test to others I have taken.
35. I would prefer taking a different type of test.
36. Compared to other tests I have taken, I didn't like this test.
37. My performance was due to the type of test used.
38. My performance was due to the type of procedure used.
39. My performance was due to the way the test was presented.
40. My performance was due to my capabilities.
41. My performance was due to my abilities.
42. My performance was due to how hard I tried.
43. I would recommend this experiment to my friends.
44. I would tell my friends to participate in this experiment.
45. I would tell others this is a good experiment in which to participate.
46. I think other people should know this is a good experiment.
47. I would not use this organization's products or services.
48. I would like to use this organization's products or services.
49. I am interested in using this organization's products or services.
50. Even if I would be offered the job now, I would not accept it.
51. I would not be interested in this job even if I would be offered it now.

PLEASE GO ON TO THE NEXT PAGE (page 3 of 5) 150

[Pages 151-152 (pages 4 and 5 of 5 of the reject-condition packet) are illegible in the scanned copy; based on Appendix L, they presented the Causal Dimension Scale items, each rated on a 9-point bipolar scale.]

APPENDIX K
DEBRIEFING FORM FOR MAIN STUDY

Perceptions of Application Procedures

Now, I am going to tell you about the experiment you participated in. The study in which you just participated was designed to examine how people react to novel selection procedures. We did this by having some participants take a computerized version and some take a paper-and-pencil version of an in-basket test used to select applicants in an organization. In addition, we looked at how changing the perceived level of technology that the job involved changed how participants viewed the test. That is, some people were told they were applying for a job that is perceived as highly technical, while others were told they were applying for a job that is perceived as not involving much technology.

Looking at reactions to novel selection procedures is important as more organizations are beginning to use computerized, web-based, multimedia, and similar high-technology selection procedures. As people's perceptions of selection procedures may affect how they view the organization, it is important to understand what determines these perceptions.

You will receive credit points for your time and effort in this experiment, and all of your answers will remain confidential. One final request is that you please do not talk about this experiment with other people who may want to participate. You can mention generally what you had to do (i.e., take a test, go to two sessions) but please do not tell them what we were studying. If they knew that, it would change how they act, and the experiment would be a waste of their time and ours. Thanks. 153

APPENDIX L
COMPLETE LIST OF MEASURES AND SOURCES

Pre-test manipulation check

Perceived technology level of the job
The job for which I am taking this test requires a lot of technology.
The job for which I am taking this test could not be performed without technology.
The job for which I am taking this test only requires basic, non-technical skills. (R)
I would expect to work with a lot of technology on this job.
Working with technology is an important part of this job.
This job does not involve working with technology. (R)

Pre-test motivation check

Motivation — adapted from Arvey, Strickland, Drauden, and Martin (1990)
Doing well on this test is important to me.
I want to do well on this test.
I will try my best on this test.
I will try to do the very best I can on this test.
While taking the test, I will try to concentrate and try to do well.
I want to be among the top scorers on this test.
I push myself to work hard on tests.
I am extremely motivated to do well on this test.
I just don't care how I do on this test. (R)
I will not put much effort into this test. (R)

Pre-test person characteristics measures

Computer experience (Potosky and Bobko, 1997)
I frequently read computer magazines or other sources of information that describe new computer technology.
I know how to recover deleted or "lost data" on a computer or PC.
I know what a LAN is.
I know what an operating system is.
I know how to write computer programs.
I know how to install software on a personal computer.
I know what e-mail is.
I know what a database is.
I am computer literate.
I regularly use a PC for word processing. 154
I often use a mainframe computer system.
I am good at using computers.

Test-taking experience
I have taken tests similar to this test.
I am familiar with the types of questions asked in this test.
I am familiar with this type of test. I have not seen this type of test before. (R) Computer self-efficacy (Levine & Donitsa-Schmidt, 1997) I find using the computer easy. It would be hard for me to learn to use a computer. (R) I learn new computer programs easily. I hope I never have a job which requires me to use a computer. (R) I get confused with all the different keys and computer commands. (R) I feel uneasy when people talk about computers. (R) I feel comfortable working with computers. I get anxious each time I need to learn something new about computers. (R) Test-taking self efficacy I believe I will have n_o_ problems on this test. I think I will do very well on this test. Compared with other applicants taking this test, I expect to do well. I am confident that I will receive a high score on this test. I’m confident I can solve the problems presented in this test. Computer anxiety (Igbaria and Chakrabarti, 1990) I am confident that I could learn computer skills. (R) I am sure of my ability to learn a computer programming language. (R) I will be able to keep up with important technological advances in computers. (R) I feel apprehensive about using a computer terminal. If given the opportunity to use a computer, I am afraid that I might damage it in same way. I have avoided computers because they are unfamiliar to me. I hesitate to use a computer for fear of making mistakes that I cannot correct. I am sure of my ability to interpret a computer printout. (R) I have difficulty understanding most technical matters. Computer terminology sounds like confusing jargon to me. Test-taking anxiety (Sarason & Ganzer, 1962) While taking an important examination, I perspire a great deal. I get to feel very panicky when I have to take a surprise exam. During tests, I find myself thinking of the consequences of failing. After important tests I am frequently so tense that my stomach gets upset. While taking an important exam I find myself thinking of how much brighter the other students are than 1 am. 155 I freeze up on things like intelligence tests and final exams. If I were to take an intelligence test I would worry a great deal before taking it. During course examinations, I find myself thinking of things unrelated to the actual course material. During a course examination, I frequently get so nervous that I forget facts I really know. If I knew I was going to take an intelligence test, I would feel confident and relaxed beforehand. (R) I usually get depressed after taking a test. I have an uneasy, upset feeling before taking a final examination. When taking a test, my emotional feelings do not interfere with my performance. (R) Getting a good grade on one test doesn’t seem to increase my confidence on the second. After taking a test I always feel I could have done better than I actually did. I sometimes feel my heart beating very fast during important tests. Openness to experience (Costa & McCrae, 1989) I don’t like to waste my time daydreaming. (R) Once I find the right way to do something, I stick to it. (R) I am intrigued by the patterns I find in art and nature. I believe letting students hear controversial speakers can only confuse and mislead them. (R) Poetry has little or no effect on me. (R) I often try new and foreign foods. I seldom notice the moods or feelings that different environments produce. (R) I believe we should look to our religious authorities for decisions on moral issues. 
Sometimes when I am reading poetry or looking at a work of art, I feel a chill or wave of excitement. I have little interest in speculating on the nature of the universe of the human condition. (R) I have a lot of intellectual curiosity. I often enjoy playing with theories or abstract ideas. Post-tesg pre-feedback measures Process Fairness (Gilliland, 1994) Whether or not I would get the job, I feel the selection process was fair. Whether or not I would get the job, the procedures used to select people for this job are fair. Whether or not I would get the job, I am satisfied with the selection process. Overall, I feel dissatisfied with the m people would be selected for the job.(R) Perceived job relatedness (Smither, Reilly, Millsap, Pearlman, & Stoffey, 1993) Face validigy I did not understand what the test had to do with the job. (R) 156 I could not see any relationship between the test and what is required on the job. (R) It would be obvious to anyone that the test is related to the job. The actual content of the test was clearly related to the job. There was no real connection between the test that I went through and the job. (R) Perceived predictive validity Failing to pass the test clearly indicates that you can’t do the job. I am confident that the test can predict how well an applicant will perform on the job. My performance on the test was a good indicator of my ability to do the job. Applicants who perform well on this type of test are more likely to perform well on the job than applicants who perform poorly. The employer can tell a lot about the applicant’s ability to do the job from the results of the test. Consistency (Ployhart and Ryan, 1997; Gilliland and Honig,'1994) The selection process was standardized and systematically administered. As far as I know, the selection tests were administered the same way to all applicants. Some people were treated differently than I was. (R) As far as I know, everyone received the same treatment during the selection process. Interpersonal treatment I felt the test was impersonal. I felt like just another number while taking the test. I did m1 care for my interaction with the test. The test administrator cared how I felt during testing. (R) I felt I was respected dming the testing. (R) I felt I was treated coldly during testing. Test Ease I thought that the test was difficult. (R) I thought this test was easy. This test was ppt difficult. This test was too easy for me. I found this test too simple. I liked taking this type of test. I prefer this type of test than others I have taken. I would prefer taking a different type of test (R). Compared to other tests I have taken, I didn’t like this test. (R) 157 Self-assessed performance (Brutus & Ryan, 1996) I did well on the test. My performance on the test was not as good as it could have been. (R) I could have done a lot better on the test under different circumstances. (R) I am satisfied with my performance on the test. I did poorly on the test. (R) Post-test, post-feedback measures Process Fairness (Gilliland, 1994) Whether or not I got the job, I feel the selection process was fair. Whether or not I got the job, the procedures used to select people for this job are I fair. Whether or not I got the job, I am satisfied with the selection process. Overall, I feel dissatisfied with the m people were selected for the job.(R) Outcome Fairness (Gilliland, 1994) Overall, I feel the results of the selection process were mfair. (R) I feel the hiring decision (accept/reject) was fair. 
13 Overall, I am satisfied with the hiring decision. I am dissatisfied with the test administrator's decision about whether or not to hire me. Perceived job relatedness (Smither, Reilly, Millsap, Pearlman, & Stoffey, 1993) Face validity I did not understand what the test had to do with the job. (R) I could not see any relationship between the test and what is required on the job. (R) It would be obvious to anyone that the test is related to the job. The actual content of the test was clearly related to the job. There was no real connection between the test that I went through and the job. (R) Perceived predictive validity Failing to pass the test clearly indicates that you can’t do the job. I am confident that the test can predict how well an applicant will perform on the job. My performance on the test was a good indicator of my ability to do the job. Applicants who perform well on this type of test are more likely to perform well on the job than applicants who perform poorly. The employer can tell a lot about the applicant’s ability to do the job fiom the results of the test. Consistency (Ployhart and Ryan, 1997; Gilliland and Honig, 1994) The selection process was standardized and systematically administered. As far as I know, the selection tests were administered the same way to all applicants. 158 Some people were treated differently than I was. (R) As far as I know, everyone received the same treatment during the selection process. Interpersonal treatment I felt the test was impersonal. I felt like just another number while taking the test. I did n_ot care for my interaction with the test. The test administrator cared how I felt during testing. (R) I felt I was respected during the testing. (R) I felt I was treated coldly during testing. Test Ease I thought that the test was difficult. (R) This test was harder than other tests I have taken. (R) I thought this test was easy. This test was my; difficult. Preference/Liking I liked taking this type of test. I prefer this type of test than others I have taken. I would prefer taking a different type of test (R). Compared to other tests I have taken, I didn’t like this test. (R) Performance attributions My performance was due to the type of test used. My performance was due to the type of procedure used. My performance was due to the way the test was presented. My performance was due to my capabilities. My performance was due to my abilities. My performance was due to how hard I tried. Causal Dimension Scale (Russell, 1982) Is the cause(s) something that: Reflects an aspect-Reflects an aspect of the situation Is the cause(s): Controllable by you or other people-Uncontrollable by you or other people Is the cause(s) something that is: Permanent-Temporary Is the cause(s) something: Intended by you or other people-Unintended by you or other people Is the cause(s) something that is: Outside of you-Inside of you Is the cause(s) something that is: Variable over time-Stable over time Is the cause(s): Something about you-Something about others Is the cause(s) something that is: Changeable-Unchanging Is the cause(s) something for which: No one is responsible-Someone is responsible 159 Acceptance intentions (Ployhart and Ryan, 1997) Hired I will accept the job. I am willing to take this job. Rejected Even if I was now offered the job, I would not accept it. I would not be interested in this job even if I was now offered it. Recommendation intentions (Gilliland, 1994) I would recommend this project to my fiiends. 
I would tell my friends to participate in this project.
I would tell others this is a good experiment in which to participate.
I think other people should know this is a good experiment.

Purchase intentions
I would not use this organization's products or services. (R)
I would like to use this organization's products or services.
I am interested in using this organization's products or services. 160
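Several of the scales listed above include reverse-keyed items, marked (R), on the same 1-5 Likert response format. The sketch below is not part of the original study materials; the function names, the example responses, and the choice to average (rather than sum) items are illustrative assumptions. It simply shows one common way such items could be reverse-coded and combined into a scale score.

    # Minimal scoring sketch for the 1-5 Likert scales listed above.
    # Illustration only: names, example data, and averaging are assumptions,
    # not part of the thesis.

    def reverse_score(response, low=1, high=5):
        # Reverse-code a response so that items marked (R) point the same
        # direction as the rest of the scale (e.g., 1 -> 5, 2 -> 4 on a 1-5 scale).
        return (low + high) - response

    def score_scale(responses, reverse_keyed):
        # Average item responses into a scale score after reverse-coding the
        # items whose (zero-based) positions appear in reverse_keyed.
        adjusted = [
            reverse_score(r) if i in reverse_keyed else r
            for i, r in enumerate(responses)
        ]
        return sum(adjusted) / len(adjusted)

    # Hypothetical four-item scale in which the second and fourth items are (R):
    print(score_scale([4, 2, 5, 1], reverse_keyed={1, 3}))  # prints 4.5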